WO2018214066A1 - Reply message comprising content of an input field - Google Patents
- Publication number
- WO2018214066A1 (PCT/CN2017/085748)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- input field
- selectable option
- content
- presented
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0489—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
- G06F3/04895—Guidance during keyboard input operation, e.g. prompting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/107—Computer-aided management of electronic mailing [e-mailing]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
- H04L51/046—Interoperability with other network applications or services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/224—Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72433—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72436—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/12—Messaging; Mailboxes; Announcements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/42—Mailbox-related aspects, e.g. synchronisation of mailboxes
Definitions
- the present application relates generally to enabling a user-selectable option to send a reply message comprising content of an input field.
- Multifunction communication devices such as computers, mobile telephones, tablets and even wearable devices have become increasingly prevalent in everyday business and social life. They are frequently equipped to receive messages of various different types, such as textual messages, picture messages, audio messages, and messages that combine different types of content such as a textual message with an attached or embedded image.
- a first example aspect provides a method comprising: causing an input field to be presented; enabling a first user-selectable option for performing a first function using content of the input field; and, in response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
- an apparatus comprising: means for causing an input field to be presented; means for enabling a first user-selectable option for performing a first function using content of the input field; and means for, in response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
- a second example aspect provides apparatus comprising: a processor; and memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: causing an input field to be presented; enabling a first user-selectable option for performing a first function using content of the input field; and in response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
- a third example aspect provides a computer program product comprising a computer-readable medium bearing computer program code embodied therein for use with a computer, the computer program code comprising: code for causing an input field to be presented; code for enabling a first user-selectable option for performing a first function using content of the input field; and code for enabling, in response to the reception of a message from a sender, a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
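The first example aspect lends itself to a compact sketch. The following Python is illustrative only: the class, the method names and the tuple-based reply are assumptions for exposition, not taken from the application.

```python
class InputFieldController:
    """Sketch of the first example aspect: one input field whose content can
    serve a primary function and, once a message arrives, also a reply."""

    def __init__(self, primary_function):
        self.content = ""                  # content of the presented input field
        self.primary_function = primary_function
        self.reply_option_enabled = False  # the second user-selectable option
        self.pending_sender = None

    def first_option_selected(self):
        # First user-selectable option: perform the primary function
        # (e.g. saving a calendar event) using the field's content.
        return self.primary_function(self.content)

    def on_message_received(self, sender):
        # Reception of a message from a sender enables the second option.
        self.pending_sender = sender
        self.reply_option_enabled = True

    def second_option_selected(self):
        # Second user-selectable option: send the field's content as a
        # reply message to the sender, then return the field to normal use.
        if not self.reply_option_enabled:
            return None
        reply = (self.pending_sender, self.content)
        self.content = ""
        self.reply_option_enabled = False
        return reply
```

Note that the same `content` feeds both options; only the enabling of the second option depends on an incoming message.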
- FIGURE 1 illustrates an apparatus
- FIGURE 2 illustrates an apparatus that may comprise the apparatus of FIGURE 1;
- FIGURES 3A-D illustrate the apparatus of FIGURE 2 displaying a user interface
- FIGURES 4A-E illustrate user-selectable options that may form part of a user interface
- FIGURES 5A-D illustrate a text input field
- FIGURE 6 illustrates a different text input field
- FIGURE 7 illustrates a trace input field
- FIGURE 8 illustrates an audio input field
- FIGURE 9 is a flow chart illustrating a method.
- Examples and their potential advantages are understood by referring to FIGURES 1 through 9 of the drawings.
- FIGURE 1 illustrates an apparatus 100 according to an example.
- the apparatus 100 may comprise at least one antenna 105 that may be communicatively coupled to a transmitter and/or receiver component 110.
- the apparatus 100 may also comprise a volatile memory 115, such as volatile Random Access Memory (RAM) that may include a cache area for the temporary storage of data.
- the apparatus 100 may also comprise other memory, for example, non-volatile memory 120, which may be embedded and/or be removable.
- the non-volatile memory 120 may comprise an EEPROM, flash memory, or the like.
- the memories may store any of a number of pieces of information and data; for example, an operating system for controlling the device, application programs that can be run on the operating system, and user and/or system data.
- the apparatus may comprise a processor 125 that can use the stored information and data to implement one or more functions of the apparatus 100, such as the functions described hereinafter.
- the processor 125 and at least one of volatile 115 or non-volatile 120 memories may be present in the form of an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or any other application-specific component.
- where “processor” is used in the singular, it may refer either to a single processor (e.g. an FPGA or a single CPU), or to an arrangement of more than one processor that cooperate to provide an overall processing function (e.g. two or more FPGAs or CPUs that operate in a parallel processing arrangement).
- the apparatus 100 may comprise one or more User Identity Modules (UIMs) 130.
- Each UIM 130 may comprise a memory device having a built-in processor.
- Each UIM 130 may comprise, for example, a subscriber identity module, a universal integrated circuit card, a universal subscriber identity module, a removable user identity module, and/or the like.
- Each UIM 130 may store information elements related to a subscriber, an operator, a user account, and/or the like.
- a UIM 130 may store subscriber information, message information, contact information, security information, program information, and/or the like.
- the apparatus 100 may comprise a number of user interface components, for example a microphone 135 and an audio output device such as a speaker 140.
- the apparatus 100 may comprise one or more hardware controls, for example a plurality of keys laid out in a keypad 145.
- a keypad 145 may comprise numeric (for example, 0-9) keys, symbol keys (for example, #, *), alphabetic keys, and/or the like for operating the apparatus 100.
- the keypad 145 may comprise a conventional QWERTY (or local equivalent) keypad arrangement.
- the keypad may instead comprise a different layout, such as the E.161 standard mapping recommended by the Telecommunication Standardization Sector (ITU-T).
- the keypad 145 may also comprise one or more soft keys with associated functions that may change depending on the input of the device.
- the apparatus 100 may comprise an interface device such as a joystick, trackball, or other user-selectable option.
- the apparatus 100 may comprise one or more display devices such as a screen 150.
- the screen 150 may be a touchscreen, in which case it may be configured to receive input from a single point of contact, multiple points of contact, and/or the like.
- the touchscreen may determine input based on position, motion, speed, contact area, and/or the like.
- Suitable touchscreens include those that employ resistive, capacitive, infrared, strain gauge, surface wave, optical imaging, dispersive signal technology, acoustic pulse recognition or other techniques, and that then provide signals indicative of the location and other parameters associated with the touch.
- a “touch” input may comprise any input that is detected by a touchscreen including touch events that involve actual physical contact and touch events that do not involve physical contact but that are otherwise detected by the touchscreen, such as a result of the proximity of the selection object to the touchscreen.
- the touchscreen may be controlled by the processor 125 to implement an on-screen keyboard.
- displays of other types may be used.
- a projector may be used to project a display onto a surface such as a wall.
- the user may interact with the projected display, for example by touching projected user interface elements.
- FIGURE 2 illustrates an example of a computing device 200.
- the device 200 of FIGURE 2 may comprise the apparatus 100 of FIGURE 1.
- the device has a touch screen 210 and hardware buttons 220, although different hardware features may be present.
- the device 200 may have a non-touch display upon which a cursor can be presented, the cursor being movable by the user according to inputs received from the hardware buttons 220, a trackball, a mouse, or any other suitable user interface component.
- Non-exhaustive examples of other devices including apparatus, implementing methods, or running or storing computer program code according to examples may include a mobile telephone or other mobile communication device, a personal digital assistant, a laptop computer, a tablet computer, a games console, a personal media player, a wearable computing device such as a smart watch, an internet terminal, a jukebox, or any other computing device.
- Suitable apparatus may have all, some, or none of the features described above.
- FIGURE 3A shows device 200 with an example of a UI 300 that might be displayed on the display of such a device.
- the UI 300 in this example comprises an application UI 310 and a virtual keyboard UI 320.
- this particular UI configuration is shown purely by way of example; for instance, the virtual keyboard UI 320 may not be shown, and more than one application UI 310 may be shown simultaneously, for example in respective windows.
- device 200 is also illustrated purely by way of example, and such a UI 300 might be displayed on devices of different types and capabilities (for example, a device that does not have a touch screen and instead relies on other forms of user input).
- the application UI 310 illustrated in FIGURE 3A includes a text input field 330.
- the text input field has a primary function that is associated with the application that provides the application UI, and is used to receive text from the user for use by the application presenting the application UI 310.
- in this example, the application is a calendar application and the application UI 310 is a UI for entering a new calendar event; it comprises text input field 330 in order to receive from the user a textual description of details of the event to include in that event’s entry in the user’s calendar.
- the application could be any other application and the text input field 330 could be provided for receiving text for any purpose.
- the application UI 310 includes a “SAVE” button 335 that is selectable by the user to perform a function using the content of the text input field 330, in this case saving a calendar event including the details that the user has typed into that field 330.
- the “SAVE” button is an example of a user-selectable option. However, in other examples, types of user-selectable option other than a button may be used, for example a different type of UI element such as a dial, slider, or similar.
- the function performed using the inputted text will of course depend on the particular example.
- the user-selectable option may not be provided as part of the application UI 310 itself, but might instead be otherwise provided, for example at an operating system level, i.e. in a part of the UI 300 that is not part of the application UI 310.
- An application UI may be generated by an application working in cooperation with the operating system (e.g. where the operating system presents the application UI according to instructions that are provided by the application).
- where references are made herein to a characteristic of an application UI, they will be understood as referring to the user interface in so much as it is instructed by an application.
- where the application’s instructions are modified (e.g. through the inclusion of a user-selectable option) by the operating system in a way that is not specified by the application itself, such modifications will be considered to be performed at a system level.
- FIGURE 3B shows the UI of FIGURE 3A adapted to include a notification 340 relating to an incoming message.
- the notification is a so-called ‘toast’ notification in the form of a banner across the top edge of the UI 300.
- a toast notification is a visual notification on a display that remains visible only temporarily, automatically ceasing to be displayed unless e.g. a user input instructs it to be dismissed earlier or to remain longer.
- although a toast banner notification 340 is illustrated in FIGURE 3B, in practice other forms of notification may be used, for example a pop-up notification elsewhere on the screen, a notification displayed within an always-visible status area such as a status bar, and/or any other suitable notification, including non-visual notifications such as audible notifications and haptic notifications.
- the notification may be a toast notification (i.e. it may expire and cease to be presented to the user automatically) or not; for example, the notification may be a persistent notification that is continually or repeatedly presented to the user until e.g. the user actively cancels it.
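The toast behaviour described above can be sketched as a small state holder. The five-second timeout and the `pin`/`dismiss` method names are illustrative assumptions, not taken from the application.

```python
import time

class ToastNotification:
    """Sketch of a toast: visible only temporarily, ceasing automatically
    unless user input dismisses it earlier or asks it to remain longer."""

    def __init__(self, text, timeout=5.0):
        self.text = text
        self.timeout = timeout            # illustrative default, in seconds
        self.shown_at = time.monotonic()
        self.dismissed = False
        self.pinned = False

    def dismiss(self):
        # User input instructing the notification to be dismissed earlier.
        self.dismissed = True

    def pin(self):
        # User input instructing the notification to remain longer.
        self.pinned = True

    def is_visible(self, now=None):
        if self.dismissed:
            return False
        if self.pinned:
            return True
        now = time.monotonic() if now is None else now
        return (now - self.shown_at) < self.timeout
```

A persistent notification, by contrast, would simply omit the timeout check.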
- the notification 340 shown in FIGURE 3B includes a preview of the incoming message to which it relates.
- the preview includes identification of the sender (John Smith), along with the type of message (Short Messaging Service, SMS), and the body of the message (“Hello. Is everything okay?”).
- Not all of this information is necessarily presented as part of the notification, and it may also be presented in a different form to that shown in FIGURE 3B.
- notification 340 relates to an SMS message from a sender whose identity is known to the device 200 or obtainable by it, and the identity of the sender can therefore be presented to the user as a name; however, in other cases the sender’s identity may be presented in another form, for example as a telephone number.
- Notification 340 may include a preview of the message content such as that shown in Figure 3B.
- the preview comprises the entirety of the content of the SMS message.
- the displayed content may be selected from the message in any suitable way; for example, a predetermined number of characters or words may be selected from the beginning of the message, or the message may be parsed to identify the most important content to be presented.
- salutations may be identified in the message and excluded from the preview as unimportant, whereas questions (e.g. expressions terminating in a question mark) may be identified as important and included in the preview.
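The parsing described above might look like the following. The salutation list, the word limit and the sentence-splitting regular expression are all illustrative assumptions; the patent leaves the selection method open.

```python
import re

# Illustrative set of salutations treated as unimportant for a preview.
SALUTATIONS = {"hello", "hi", "hey", "dear"}

def preview(message, max_words=8):
    """Select preview content: drop salutations, prefer questions
    (expressions terminating in a question mark), otherwise take words
    from the beginning of the message."""
    # Split into sentence-like expressions, keeping each terminator.
    parts = [p.strip() for p in re.findall(r"[^.!?]+[.!?]?", message)]
    kept = [p for p in parts
            if p and p.rstrip(".!?").lower() not in SALUTATIONS]
    questions = [p for p in kept if p.endswith("?")]
    chosen = " ".join(questions if questions else kept)
    return " ".join(chosen.split()[:max_words])
```

For the message in FIGURE 3B, this would keep the question and drop the greeting.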
- a user viewing the notification 340 in FIGURE 3B may wish to respond to the incoming message without leaving the application UI 310. Doing so provides the user with a number of advantages; for example, it minimises the disruption to the task that he is performing with the application UI 310, both in terms of the level of user input required to respond to the message and also the mental disruption and distraction experienced by the user in doing so.
- the user may need to navigate away from application UI 310 to a main menu, locate and then select an option to launch the messaging UI, find the incoming message, select an option to reply to it, enter a response, send the response, leave the messaging UI, re-open application UI 310 and then restore the context of application UI 310 to where he left off (e.g. by scrolling, bringing the correct part of the UI into focus, etc.)
- even where shortcuts exist to navigate more quickly to the messaging UI and return to application UI 310, the change in the UI presented to the user will be disruptive to his chain of thought in interacting with application UI 310.
- Text input field 330 is part of the application UI 310 and is not a text input field that is normally used for responding to messages but instead has a different primary function within the application.
- Once the user has entered text for a reply to the incoming message within the text input field 330, he is able to send this text as a reply to the sender of the incoming message. Once the reply is sent, the entered text is removed from the text input field 330, and subsequent text entered into it will be used by the application presenting it for the text input field’s 330 primary function.
- in this example, the primary purpose is defining textual details of a new calendar event.
- the user has entered text into the text input field 330 and then selected the additional key 350 to send that text as a reply to the incoming message.
- the notification 340 has been removed from the display and the additional key 350 has been removed from the keyboard UI 320.
- the additional key 350 may not be actually removed from the keyboard UI 320 when it is not usable, and could instead be e.g. greyed-out or otherwise shown to be disabled, or remapped to a different function (e.g. entry of a character).
- the user may cause the entered text to be sent as a reply using means other than additional key 350.
- There are a number of possibilities.
- the keyboard UI 320 shown in FIGURE 3A-D is not part of the application UI 310 but is instead presented by the operating system of device 200.
- the author of the application presenting the application UI 310 need make no special allowance for either the use of the text input field 330 to enter the text used for the reply or for receiving a user input that causes the reply to be sent.
- the author may not even need to be aware that such functionality is possible, since the operation may be performed entirely at an operating system level and not based on program code written for the application.
- a button such as additional key 350 is just one UI element that can be used to cause a reply to be sent to the incoming message, and any suitable user input may be mapped to this function. More generally, the user may be presented with any suitable form of user-selectable option to cause the reply to be sent.
- an alternative user-selectable option may be located elsewhere in the user interface such as within the notification itself.
- An example of this is shown in FIGURE 4A where a reply button 350a is presented as part of the notification 340. Selecting this reply button after entering text into the text input field 330 will reply to the notified incoming message using the textual content of the text input field 330.
- A different example is shown in FIGURE 4B.
- an existing key 350b of the keyboard UI 320 has been modified to have an additional function, in this example “Enter” and “Reply”.
- Selecting the key in different manners may select the function that is performed, for example short-pressing the key to activate the original function (in this case “Enter”) and long-pressing the key to activate the additional function (replying to the incoming message).
- Different levels of force may also be used to differentiate between the different manners of selection.
- although the enter key has been used as an example in this figure, any suitable key or other user-selectable option anywhere in the UI 300 may be repurposed in this way. The repurposing may not retain the original functionality; for example, the original function may be disabled and entirely replaced by the reply functionality.
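The short-press/long-press (or force) distinction of FIGURE 4B can be sketched as a dispatch function. The duration and force thresholds, and the returned function names, are illustrative assumptions.

```python
def resolve_key_function(press_duration, force=0.0, reply_enabled=True,
                         long_press_threshold=0.5, force_threshold=0.8):
    """Sketch of the dual-function key 350b: a short press keeps the key's
    original function, while a long press, or a sufficiently firm press on
    hardware that senses force, activates the reply function."""
    if not reply_enabled:
        # No incoming message pending: only the original function applies.
        return "enter"
    if press_duration >= long_press_threshold or force >= force_threshold:
        return "reply"
    return "enter"
```

Disabling `reply_enabled` models the case where the original function is the only one mapped; a variant that replaces "enter" entirely would return "reply" unconditionally while a message is pending.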
- FIGURE 4C shows an example where having entered textual content for a reply into the text input field 330, the user traces the letter “R” 350c over the UI 300 in order to cause the reply to be sent.
- the application UI 310 may be modified in order to allow the user to cause the reply to be sent by interacting with the application UI.
- FIGURE 4D shows an example of a text input field 330 of an application UI that has been modified, in response to an incoming message, to allow the textual content of the text input field to be sent in response to that message.
- a “Reply” button 350d has been added to the text input field 330.
- The application UI 310 may be given the appearance of modification whilst the actual modification is performed at an operating system level.
- For example, the operating system may modify the display of the application UI 310 without informing the application presenting it that such modification has taken place.
- The modification may be made in an otherwise transparent layer displayed on top of the application UI.
- For example, the “Reply” button 350d shown in FIGURE 4D may be superimposed over or close to the text input field by the operating system using a transparent layer that is displayed over the application UI 310.
- Where a new user-selectable option is added to the UI 300 or an existing user-selectable option is modified to allow the user to cause a reply message to be sent, this may in some examples only be done when certain criteria are met. For example, when the user has entered new text into a text input field 330 since the reception of the incoming message, and/or whilst the current focus of the UI is on the text input field 330 (e.g. whilst a text input cursor is displayed within it). This may be advantageous in the case where a user-selectable option is added or modified and it is displayed in connection with a specific text input field (e.g. button 350d shown in FIGURE 4D, which is displayed in connection with text input field 330), since several text input fields may be displayed and the new or modified user-selectable option can then be displayed in connection with only the text input field where the user has e.g. most recently entered text.
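The enablement criteria just described can be sketched as a simple predicate. The `TextField` model and its timestamp attribute are illustrative assumptions; a real implementation would query the UI toolkit's focus and edit state.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextField:
    """Illustrative stand-in for a text input field; only the timestamp
    of the most recent text entry is modelled."""
    last_edit_at: Optional[float] = None

def should_show_reply_option(field, focused_field, message_received_at):
    # The option is shown only for the field that currently has focus...
    if field is not focused_field:
        return False
    # ...and only if the user entered new text after the message arrived.
    return field.last_edit_at is not None and field.last_edit_at > message_received_at
```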
- a user-selectable option may be labelled in such a way to convey information to the user.
- the user-selectable option may be labelled to inform the user to whom the reply message will be sent.
- FIGURE 4E shows an example of such a user-selectable option, being a button 350e that is labelled “Reply to John Smith” wherein John Smith is the sender of the incoming message and thus will be the recipient of the reply message should one be sent.
- Other information can be similarly conveyed to the user; for example the label may indicate the type of message that will be used in the reply ( “Reply by SMS to John Smith” ) .
- the user interface 300 is adapted to allow the user to enter text into the text input field 330 and cause this to be sent as a reply message whilst the notification is being displayed.
- this adaption may be reversed and the UI 300 returned to its previous state.
- Although the adaption of the UI 300 may be triggered by the reception of the incoming message and/or by the presentation of the notification 340, the adaption may remain in effect until different criteria are met.
- the UI 300 may remain adapted for a predetermined period of time that does not correspond to the length of time for which the notification is displayed, or until a particular event is detected.
- the user’s device receives a first message from John Smith and displays a notification 340 as shown in FIGURE 3B.
- the user begins to compose a reply in text input field 330 but before he has a chance to cause the reply to be sent his device receives a second message from Jane Doe.
- the notification 340 of John Smith’s message may be replaced by one relating to Jane Doe’s, but if the UI 300 is immediately adapted to respond to Jane Doe’s message rather than John Smith’s then the user may not be readily able to send to John Smith the text that he has just entered.
- the adaptation of the UI 300 may be locked based on the context of the user interface, so that the adaptation remains for as long as the focus of the UI 300 is on the text input field 330 (e.g. a text input cursor is present in the text input field) and the adaptation is unlocked only once the focus leaves the text input field 330 or a predetermined time after that occurs.
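The locking behaviour in the two-message scenario above can be sketched as follows; the class and method names are illustrative, and the unlock-after-a-delay refinement mentioned in the text is omitted for brevity.

```python
class ReplyAdaptation:
    """Illustrative model of the UI adaptation: while locked, newly
    arriving messages do not retarget the reply option."""
    def __init__(self):
        self.reply_target = None  # sender the reply option currently addresses
        self.locked = False       # True while focus remains in the input field

    def on_message(self, sender):
        # A new incoming message only retargets the adaptation when unlocked.
        if not self.locked:
            self.reply_target = sender

    def on_focus_change(self, focus_in_input_field):
        # Focus entering the field locks the adaptation; focus leaving
        # unlocks it (a predetermined grace period could be added here).
        self.locked = focus_in_input_field
```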
- a text input field may already contain textual content before the user begins to compose the reply.
- FIGURES 5A-D show an example of such a case where a text input field 530 contains the original text “Meeting with the boss” (560).
- a notification of an incoming message is then received and the user moves the focus to the text input field to compose a response.
- The original text 560 is removed from the text input field 530 and a text input cursor 570 is displayed.
- The user enters the reply message text “Hi John. Everything is fine. You?” 580 and causes the reply message to be sent.
- The original text 560 is then returned to the text input field 530.
- If the composition of the reply message in the text input field 530 is cancelled (e.g. by deleting the reply message text 580 or performing another user input to cancel the reply), then the contents of the text input field 530 may likewise be restored to the original text 560.
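The stash-and-restore behaviour of FIGURES 5A-D can be sketched as below. The class is an illustrative model, not an implementation from the patent; `end_reply` stands in for both the send and the cancel paths, since both restore the original content.

```python
class StashingTextField:
    """Illustrative field that sets its original content aside while a
    reply is composed and restores it afterwards."""
    def __init__(self, text=""):
        self.text = text
        self._stash = None

    def begin_reply(self):
        self._stash = self.text  # keep the original text (e.g. 560) aside
        self.text = ""           # present an empty field with a cursor

    def end_reply(self):
        # Runs on send or on cancel: the original content returns.
        if self._stash is not None:
            self.text = self._stash
            self._stash = None
```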
- FIGURE 6 shows an alternative example where the original text 660 of a text input field 600 may be retained as reply message text 680 is entered.
- the original text 660 and the reply message text 680 may be differentiated visually.
- the reply message text 680 is emboldened relative to the original text 660 and underlined, although other visual differentiations may be applied to the original text 660 and/or the reply message text 680 such as variations in colour and/or other formatting styles. Any suitable variation may be chosen that allows the user to differentiate between the original text 660 and the reply message text 680.
- the user may enter text into a text input field whilst the UI 300 is adapted, but without the intention of using that text in a reply message.
- the newly entered text may be retained in the text input field, and may be combined with whatever original text was already present in the text input field prior to the adaptation.
- the differentiation may be removed.
- the system may determine that the user does not wish to use the newly entered text as a reply message from the fact that the user does not select the user-selectable option to cause the reply message to be sent (e.g. within a threshold period of time) , and instead e.g. cancels the notification, selects the user-selectable option to cause the text input field’s primary function to be performed, or based on any other suitable criteria.
- In the examples above, the application UI 310 has not been the UI of a messaging application.
- That is, the user is provided with the ability to reply to messages directly from within the UI of an application that has not been designed to compose and send messages.
- a similar approach also may be taken within messaging application UIs, for example to permit the user to respond to an e-mail message from within a UI designed solely for SMS messaging.
- the user may be provided with a way to compose reply messages within a messaging application where the type of reply message is not a type that is offered by that messaging application.
- Examples of messaging applications may include SMS applications, e-mail applications, instant messaging applications, and similar.
- the user may be provided with a means of composing messages from within a messaging application that is designed to send messages of the appropriate type, but using user-selectable options that do not have the composition of such messages as their primary function -for example the user of an SMS messaging application may be provided with a way to respond to incoming SMS messages from within a settings menu of the SMS application without needing to first exit that menu and navigate to a message composition UI.
- a text input field has been used by the user to enter the content that is used to reply to the incoming message.
- other forms of input field may be used to capture other forms of content that can be included in the reply message.
- FIGURE 7 shows an input field 700 that is not specifically a text input field.
- the input field receives and marks a trace input from a user (e.g. via a touch screen or mouse) , allowing the user to draw shapes within the input field 700.
- Shapes drawn by the user in the field can be used as content to be comprised by the reply message when it is sent.
- the user has traced a smiley face 710 into the input field, and there are many ways in which this content could be comprised by the reply message.
- an image could be generated representing the entire input field 700 and attached or embedded within the reply message in order that it is viewable by the recipient.
- Alternatively, an image could be generated that is cropped around the content that the user has entered 710 (i.e. containing just the user’s drawing rather than the entire input field 700).
- The content of the input field 700 could be processed before it is added to the reply message, for example object recognition could be used to detect the smiley face 710 and include it in the reply message by adding a representation of it in one or more characters (e.g. an emoji character or “:-)”).
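Once a recogniser has produced a label for the drawing, mapping it to a character representation is straightforward. The labels, the lookup table and the recogniser producing the labels are all illustrative assumptions; only the mapping step is shown.

```python
# Illustrative label-to-characters mapping; the object recogniser that
# would produce these labels is assumed, not shown.
EMOTICONS = {"smiley_face": ":-)", "sad_face": ":-(", "heart": "<3"}

def drawing_to_reply_text(recognised_label):
    """Return a character representation of a recognised drawing, or None
    so the caller can fall back to attaching an image instead."""
    return EMOTICONS.get(recognised_label)
```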
- Any input field that is capable of receiving user-inputted content may be used to compose a reply to the incoming message, and such content is not limited to ‘visual’ content such as text or drawings.
- suitable content may include sound, haptic content (e.g. a pattern of taps input by the user) , and any other suitable content.
- FIGURE 8 illustrates an audio input field with a primary function of recording a voice memo for storage on a user’s device.
- the audio input field comprises a visual representation of inputted audio 810 and a button 820 that can be pressed by the user to start and stop recording.
- When the user is notified of an incoming message, he can press the button 820 to record an audio message (for example) that he can subsequently cause to be sent in a reply message.
- This audio message could be sent as e.g. a sound file attachment to the message, or it may be automatically transcribed in order to generate textual content to send in the reply message.
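The choice between the two delivery paths just mentioned can be sketched as a small helper. The function and the dictionary shape are illustrative assumptions; the transcriber, if present, stands in for an automatic speech-to-text service.

```python
def audio_reply_content(audio_bytes, transcriber=None):
    """Choose between sending the recording as an attachment and sending
    a transcript as text; the transcriber is an assumed callable."""
    if transcriber is not None:
        return {"type": "text", "body": transcriber(audio_bytes)}
    return {"type": "attachment", "body": audio_bytes}
```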
- FIGURE 9 illustrates a method 900 that may be implemented by the apparatus described above or any other suitable means.
- the method begins at step 910. The following step is to cause 920 an input field to be presented. A first user-selectable option is enabled 930 for performing a first function using content of the input field. Finally, in response to the reception of a message from a sender, a second user-selectable option is enabled 940 for sending a reply message to the sender, the reply message comprising content of the input field. The method then ends 950.
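The steps of method 900 can be sketched as a minimal controller. The class and callback names are illustrative assumptions; the step numbers in the comments refer to FIGURE 9.

```python
class InputFieldController:
    """Sketch of method 900: the input field is presented (920), a first
    option performs the primary function using its content (930), and
    reception of a message enables a second option that sends a reply
    comprising that content (940)."""
    def __init__(self, primary_fn, send_fn):
        self.content = ""              # 920: content of the presented field
        self.primary_fn = primary_fn   # e.g. saving a calendar event
        self.send_fn = send_fn         # e.g. sending an SMS reply
        self.reply_enabled_for = None  # sender, once step 940 has run

    def select_first_option(self):
        # 930: the first user-selectable option uses the field's content.
        return self.primary_fn(self.content)

    def on_message_received(self, sender):
        # 940: enable the second user-selectable option for this sender.
        self.reply_enabled_for = sender

    def select_second_option(self):
        # The reply message comprises the content of the input field.
        if self.reply_enabled_for is None:
            raise RuntimeError("reply option is not enabled")
        return self.send_fn(self.reply_enabled_for, self.content)
```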
- a technical effect of one or more of the example embodiments disclosed herein is that a user who receives an incoming message and wishes to reply to it immediately is able to do so with the minimum of user inputs and with the minimum of disruption to his existing train of thought and workflow. If he is already working with an application when he is notified of the incoming message, he does not even need to leave or obscure that application’s user interface in order to compose a response. In some different approaches where the response is composed in a pop-up window that overlays the application’s UI, there would be a significantly greater level of interruption and disruption to the user.
- Example embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
- the software, application logic and/or hardware may reside on a removable memory, within internal memory or on a communication server.
- the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
- a “computer-readable medium” may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with examples of a computer described and depicted in FIGURE 1.
- a computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
- the invention may be implemented as an apparatus or device, for example a mobile communication device (e.g. a mobile telephone) , a PDA, a computer or other computing device, or a video game console.
- the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.
Abstract
A method, comprising: causing an input field to be presented; enabling a first user-selectable option for performing a first function using content of the input field; and in response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field. Also, a related apparatus and computer program product.
Description
The present application relates generally to enabling a user-selectable option to send a reply message comprising content of an input field.
Multifunction communication devices such as computers, mobile telephones, tablets and even wearable devices have become increasingly prevalent in everyday business and social life. They are frequently equipped to receive messages of various different types, such as textual messages, picture messages, audio messages, and messages that combine different types of content such as a textual message with an attached or embedded image.
There has developed, amongst users, an expectation of near-instant responses to certain messages, and it is common for communication devices to provide a notification to a user that a new message has been received even when the user is engaged in tasks on the device that are not primarily concerned with viewing and responding to messages of that type.
SUMMARY
A first example aspect provides a method comprising: causing an input field to be presented; enabling a first user-selectable option for performing a first function using content of the input field; in response to the reception of a message
from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
There is also described an apparatus comprising means for performing the method of the first example aspect.
There is also described an apparatus comprising: means for causing an input field to be presented; means for enabling a first user-selectable option for performing a first function using content of the input field; and means for, in response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
A second example aspect provides apparatus comprising: a processor; and memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: causing an input field to be presented; enabling a first user-selectable option for performing a first function using content of the input field; and in response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
A third example aspect provides a computer program product comprising a computer-readable medium bearing computer program code embodied therein for use with a computer, the computer program code comprising: code for causing an input field to be presented; code for enabling a first user-selectable option for performing a first function using content of the input field; and code for enabling, in response to the reception of a message from a sender, a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
For a more complete understanding of examples, reference is now made to the following description taken in connection with the accompanying drawings in which:
FIGURE 1 illustrates an apparatus;
FIGURE 2 illustrates an apparatus that may comprise the apparatus of FIGURE 1;
- FIGURES 3A-D illustrate the apparatus of FIGURE 2 displaying a user interface;
FIGURES 4A-E illustrate user-selectable options that may form part of a user interface;
FIGURES 5A-D illustrate a text input field;
FIGURE 6 illustrates a different text input field;
FIGURE 7 illustrates a trace input field;
FIGURE 8 illustrates an audio input field; and
FIGURE 9 is a flow chart illustrating a method.
- DETAILED DESCRIPTION OF THE DRAWINGS
Examples and their potential advantages are understood by referring to FIGURES 1 through 9 of the drawings.
FIGURE 1 illustrates an apparatus 100 according to an example. The apparatus 100 may comprise at least one antenna 105 that may be communicatively
coupled to a transmitter and/or receiver component 110. The apparatus 100 may also comprise a volatile memory 115, such as volatile Random Access Memory (RAM) that may include a cache area for the temporary storage of data. The apparatus 100 may also comprise other memory, for example, non-volatile memory 120, which may be embedded and/or be removable. The non-volatile memory 120 may comprise an EEPROM, flash memory, or the like. The memories may store any of a number of pieces of information, and data -for example an operating system for controlling the device, application programs that can be run on the operating system, and user and/or system data. The apparatus may comprise a processor 125 that can use the stored information and data to implement one or more functions of the apparatus 100, such as the functions described hereinafter. In some example embodiments, the processor 125 and at least one of volatile 115 or non-volatile 120 memories may be present in the form of an Application Specific Integrated Circuit (ASIC) , a Field Programmable Gate Array (FPGA) , or any other application-specific component. Although the term “processor” is used in the singular, it may refer either to a processor (e.g. an FPGA or a single CPU) , or an arrangement of more than one processor that cooperate to provide an overall processing function (e.g. two or more FPGAs or CPUs that operate in a parallel processing arrangement) .
The apparatus 100 may comprise one or more User Identity Modules (UIMs) 130. Each UIM 130 may comprise a memory device having a built-in processor. Each UIM 130 may comprise, for example, a subscriber identity module, a universal integrated circuit card, a universal subscriber identity module, a removable user identity module, and/or the like. Each UIM 130 may store information elements related to a subscriber, an operator, a user account, and/or the like. For example, a UIM 130 may store subscriber information, message information, contact information, security information, program information, and/or the like.
The apparatus 100 may comprise a number of user interface components. For example, a microphone 135 and an audio output device such as a speaker 140. The apparatus 100 may comprise one or more hardware controls, for example a
- plurality of keys laid out in a keypad 145. Such a keypad 145 may comprise numeric (for example, 0-9) keys, symbol keys (for example, #, *), alphabetic keys, and/or the like for operating the apparatus 100. For example, the keypad 145 may comprise a conventional QWERTY (or local equivalent) keypad arrangement. The keypad may instead comprise a different layout, such as the E.161 standard mapping recommended by the Telecommunication Standardization Sector (ITU-T). The keypad 145 may also comprise one or more soft keys with associated functions that may change depending on the input of the device. In addition, or alternatively, the apparatus 100 may comprise an interface device such as a joystick, trackball, or other user-selectable option.
The apparatus 100 may comprise one or more display devices such as a screen 150. The screen 150 may be a touchscreen, in which case it may be configured to receive input from a single point of contact, multiple points of contact, and/or the like. In such an example embodiment, the touchscreen may determine input based on position, motion, speed, contact area, and/or the like. Suitable touchscreens may involve those that employ resistive, capacitive, infrared, strain gauge, surface wave, optical imaging, dispersive signal technology, acoustic pulse recognition or other techniques, and to then provide signals indicative of the location and other parameters associated with the touch. A “touch” input may comprise any input that is detected by a touchscreen including touch events that involve actual physical contact and touch events that do not involve physical contact but that are otherwise detected by the touchscreen, such as a result of the proximity of the selection object to the touchscreen. The touchscreen may be controlled by the processor 125 to implement an on-screen keyboard.
In other examples, displays of other types may be used. For example, a projector may be used to project a display onto a surface such as a wall. In some further examples, the user may interact with the projected display, for example by touching projected user interface elements. Various technologies exist for
implementing such an arrangement, for example by analysing video of the user interacting with the display in order to identify touches and related user inputs.
- FIGURE 2 illustrates an example of a computing device 200. The device 200 may comprise the apparatus 100 of FIGURE 1. The device has a touch screen 210 and hardware buttons 220, although different hardware features may be present. For example, instead of a touchscreen 210 the device 200 may have a non-touch display upon which a cursor can be presented, the cursor being movable by the user according to inputs received from the hardware buttons 220, a trackball, a mouse, or any other suitable user interface component.
Non-exhaustive examples of other devices including apparatus, implementing methods, or running or storing computer program code according to examples may include a mobile telephone or other mobile communication device, a personal digital assistant, a laptop computer, a tablet computer, a games console, a personal media player, a wearable computing device such as a smart watch, an internet terminal, a jukebox, or any other computing device. Suitable apparatus may have all, some, or none of the features described above.
Examples will be described with reference to the apparatus 100 and device 200 shown in FIGURES 1 and 2. However, it will be understood that these are not necessarily limited by the inclusion of all of the elements described in relation to the drawings, and that the scope of protection is instead defined by the claims.
FIGURE 3A shows device 200 with an example of a UI 300 that might be displayed on the display of such a device. The UI 300 in this example comprises an application UI 310 and a virtual keyboard UI 320. However, it is to be understood that this particular UI configuration is shown purely by way of example -for example the virtual keyboard UI 320 may not be shown, and more than one application UI 310 may be shown simultaneously, for example in respective windows. It is also to be understood that device 200 is also illustrated purely by way of example and that such
a UI 300 might be displayed on devices of different types and capabilities (for example, a device that does not have a touch screen and instead relies on other forms of user input) .
The application UI 310 illustrated in FIGURE 3A includes a text input field 330. The text input field has a primary function that is associated with the application that provides the application UI and is used to receive text from the user that can be used by an application presenting the application UI 310. In the example of FIGURE 3A, the application is a calendar application and the application UI 310 is a UI for entering a new calendar event and comprises text input field 330 in order to receive from the user a textual description of details of the event to include in that event’s entry in the user’s calendar. However, the application could be any other application and the text input field 330 could be provided for receiving text for any purpose.
- The application UI 310 includes a “SAVE” button 335 that is selectable by the user to perform a function using the content of the text input field 330, in this case saving a calendar event including the details that the user has typed into that field 330. The “SAVE” button is an example of a user-selectable option. However, in other examples types of user-selectable option other than a button may be used, for example a different type of UI element such as a dial, slider, or similar. The function performed using the inputted text will of course depend on the particular example. In some examples the user-selectable option may not be provided as part of the application UI 310 itself, but might instead be otherwise provided, for example at an operating system level, i.e. in a part of the UI 300 that is not part of the application UI 310.
An application UI may be generated by an application working in cooperation with the operating system (e.g. where the operating system presents the application UI according to instructions that are provided by the application) . Where references are made herein to a characteristic of an application UI, they will be
- understood as referring to the user interface insofar as it is instructed by an application. To the extent that the application’s instructions are modified (e.g. through the inclusion of a user-selectable option) by the operating system in a way that is not specified by the application itself, such modifications will be considered to be performed at a system level.
FIGURE 3B shows the UI of FIGURE 3A adapted to include a notification 340 relating to an incoming message. In the example illustrated the notification is a so-called ‘toast’ notification in the form of a banner across the top edge of the UI 300. A toast notification is a visual notification on a display that remains visible only temporarily until it automatically ceases to be displayed unless e.g. a user input instructs it to be dismissed earlier or to remain longer. Although a toast banner notification 340 is illustrated in FIGURE 3B, in practice other forms of notification may be used, for example a pop-up notification elsewhere on the screen, a notification displayed within an always-visible status area such as a status bar, and/or any other suitable notification including non-visual notifications such as audible notifications and haptic notifications. The notification may be a toast notification (i.e. it may expire and cease to be presented to the user automatically) or not -for example the notification may be a persistent notification that is continually or repeatedly presented to the user until e.g. the user actively cancels it.
- The notification 340 shown in FIGURE 3B includes a preview of the incoming message to which it relates. The preview includes identification of the sender (John Smith), along with the type of message (Short Messaging Service, SMS), and the body of the message (“Hello. Is everything okay?”). Not all this information is necessarily presented as part of the notification, and it also may be presented in a different form to that shown in FIGURE 3B. For example, notification 340 relates to an SMS message from a sender whose identity is known to the device 200 or obtainable by it, and the identity of the sender can therefore be presented to the user as a name; however, in other cases the sender’s identity may be presented in another form, for example as a telephone number.
A user viewing the notification 340 in FIGURE 3B may wish to respond to the incoming message without leaving the application UI 310. Doing so provides the user with a number of advantages -for example it minimises the disruption to the task that he is performing with the application UI 310 both in terms of the level of user input required to respond to the message and also the mental disruption and distraction experienced by the user in doing so. For example, if the user is required to enter a dedicated messaging UI in order to respond then he may need to navigate away from application UI 310 to a main menu, locate and then select an option to launch the messaging UI, find the incoming message, select an option to reply to it, enter a response, send the response, leave the messaging UI, re-open application UI 310 and then restore the context of application UI 310 to where he left off (e.g. by scrolling, bringing the correct part of the UI into focus, etc. ) Even where shortcuts exist to navigate more quickly to the messaging UI and return to application UI 310, the change in the UI presented to the user will be disruptive to his chain of thought in interacting with application UI 310.
- In Figure 3C the user has begun to compose a response to the incoming message in text input field 330. Text input field 330 is part of the application UI 310 and is not a text input field that is normally used for responding to messages, but instead has a different primary function within the application. Once the user has entered a response to the incoming message within the text input field 330, he is able to send this text as a reply to the sender of the incoming message. Once the reply is sent, the entered text is removed from the text input field 330 and subsequent text entered into it will be used for the text input field’s 330 primary function. In this example the primary purpose is for defining textual details of a new calendar event.
- There are various possible ways in which the user can instruct that textual content entered into the text input field 330 should be used to respond to the incoming message rather than for the text input field’s 330 primary function. In the example shown in FIGURE 3C an additional key 350 is added to the keyboard UI 320 and selection of that key causes text that has been entered into the text input field 330 to be sent as a reply to the incoming message rather than used for the primary function. If the user enters text into the text input field 330 but does not select the additional key 350 then the text will not be sent as a reply to the incoming message but may instead be used for the primary function.
In Figure 3D the user has entered text into the text input field 330 and then selected the additional key 350 to send that text as a reply to the incoming message. In response, the notification 340 has been removed from the display and the additional key 350 has been removed from the keyboard UI 320. Of course, the additional key 350 may not be actually removed from the keyboard UI 320 when it is not usable, and could instead be e.g. greyed-out or otherwise shown to be disabled or remapped to a different function (e.g. entry of a character) .
In some examples, the user may cause the entered text to be sent as a reply using means other than additional key 350. There are a number of possibilities.
For example, the keyboard UI 320 shown in FIGURE 3A-D is not part of the application UI 310 but is instead presented by the operating system of device 200. As a result, the author of the application presenting the application UI 310 need make no special allowance for either the use of the text input field 330 to enter the text used for the reply or for receiving a user input that causes the reply to be sent. Indeed, the author may not even need to be aware that such functionality is possible, since the operation may be performed entirely at an operating system level and not based on program code written for the application.
A button such as additional key 350 is just one UI element that can be used to cause a reply to be sent to the incoming message, and any suitable user input may be mapped to this function. More generally, the user may be presented with any suitable form of user-selectable option to cause the reply to be sent.
For example, an alternative user-selectable option may be located elsewhere in the user interface such as within the notification itself. An example of this is shown in FIGURE 4A where a reply button 350a is presented as part of the notification 340. Selecting this reply button after entering text into the text input field 330 will reply to the notified incoming message using the textual content of the text input field 330.
A different example is shown in FIGURE 4B. Here an existing key 350b of the keyboard UI 320 has been modified to have an additional function, in this example “Enter” and “Reply”. Selecting the key in different manners may select the function that is performed, for example short-pressing the key to activate the original function (in this case “Enter”) and long-pressing the key to activate the additional function (replying to the incoming message). Different levels of force may also be used to differentiate between the different manners of selection. Although the enter key has been used as an example in this figure, any suitable key or other user-selectable option anywhere in the UI 300 may be repurposed in this way. The repurposing may not retain the original functionality; for example, the original function may be disabled and entirely replaced by the reply functionality.
It is not necessarily the case that a new user-selectable option is created or that an existing one is repurposed to allow the user to cause the reply message to be sent, since any suitable user input may be used. For example, the user could shake the device to cause the reply to be sent, or trace a predefined pattern on a touch screen. FIGURE 4C shows an example where having entered textual content for a reply into the text input field 330, the user traces the letter “R” 350c over the UI 300 in order to cause the reply to be sent.
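The traced-shortcut example can be illustrated with a deliberately simple recogniser that reduces a stroke to a sequence of coarse compass directions and compares it against a stored template. The template and all names below are invented for illustration; production gesture recognisers are considerably more robust than this sketch.

```python
def directions(points):
    """Reduce a stroke (list of (x, y) points) to its sequence of coarse
    movement directions, collapsing consecutive repeats."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            d = "E" if dx > 0 else "W"
        else:
            d = "S" if dy > 0 else "N"  # screen y grows downwards
        if not out or out[-1] != d:
            out.append(d)
    return out

# Illustrative template for an "R": down-stroke, back up, the loop, then
# the diagonal leg.
R_TEMPLATE = ["S", "N", "E", "S", "W", "S"]

def is_reply_gesture(points):
    return directions(points) == R_TEMPLATE

# A stroke roughly tracing the letter "R".
stroke = [(0, 0), (0, 4), (0, 0), (2, 0), (2, 2), (0, 2), (1, 4)]
```

A matched gesture would then trigger the same reply-sending path as the user-selectable options discussed earlier.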
Although in some examples it may be desirable for a user input causing the reply message to be sent to be handled at an operating system rather than application level, this is not necessarily always the case. For example, the application UI 310 may be modified in order to allow the user to cause the reply to be sent by interacting with the application UI.
FIGURE 4D shows an example of a text input field 330 of an application that has been modified, in response to an incoming message, to allow the textual content of the text input field to be sent in response to the incoming message. Specifically, a “Reply” button 350d has been added to the text input field 330.
In some cases, the application UI 310 may be given the appearance of modification whilst the actual modification is performed at an operating system level. For example, the operating system may modify the display of the application UI 310 without informing the application presenting it that such modification has taken place. Alternatively, the modification may be made in an otherwise transparent layer displayed on top of the application UI. For example, the “Reply” button 350d shown in FIGURE 4D may be superimposed over or close to the text input field by the operating system using a transparent layer that is displayed over the application UI 310.
Where a new user-selectable option is added to the UI 300, or an existing user-selectable option is modified to allow the user to cause a reply message to be sent, this may in some examples only be done when certain criteria are met. For example, when the user has entered new text into a text input field 330 since the reception of the incoming message, and/or whilst the current focus of the UI is on the text input field 330 (e.g. whilst a text input cursor is displayed within it). This may be advantageous in the case where a user-selectable option is added or modified and is displayed in connection with a specific text input field (e.g. the button 350d shown in FIGURE 4D, which is displayed in connection with text input field 330), since several text input fields may be displayed and the new or modified user-selectable option can then be displayed in connection with only the text input field where the user has e.g. most recently entered text.
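The gating criteria described above amount to a simple predicate evaluated per input field. A hedged sketch, with illustrative field records (the dictionary layout and timestamps are assumptions made for the example):

```python
def should_show_reply_option(field, message_received_at):
    """Show the reply option for this field only if it currently holds the
    input focus and text has been entered since the message arrived."""
    return (field["focused"]
            and field["last_edit_at"] is not None
            and field["last_edit_at"] >= message_received_at)

fields = [
    {"id": 1, "focused": False, "last_edit_at": 5},   # edited before message
    {"id": 2, "focused": True,  "last_edit_at": 12},  # edited after message
    {"id": 3, "focused": False, "last_edit_at": None},
]

# Message received at (illustrative) time 10: only field 2 qualifies,
# so the option is shown in connection with that field alone.
visible = [f["id"] for f in fields if should_show_reply_option(f, 10)]
```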
Where a user-selectable option is added or modified to cause a reply to the incoming message to be sent, it may be labelled in such a way as to convey information to the user. For example, the user-selectable option may be labelled to inform the user to whom the reply message will be sent. FIGURE 4E shows an example of such a user-selectable option, being a button 350e that is labelled “Reply to John Smith”, wherein John Smith is the sender of the incoming message and thus will be the recipient of the reply message should one be sent. Other information can be similarly conveyed to the user; for example the label may indicate the type of message that will be used in the reply (“Reply by SMS to John Smith”).
In the examples described above, it has been assumed that the user interface 300 is adapted to allow the user to enter text into the text input field 330 and cause this to be sent as a reply message whilst the notification is being displayed. When the notification ceases to be displayed, this adaptation may be reversed and the UI 300 returned to its previous state. However, whilst the adaptation of the UI 300 may be triggered by the reception of the incoming message and/or by the presentation of the notification 340, the adaptation may remain in effect until different criteria are met. For example, the UI 300 may remain adapted for a predetermined period of time that does not correspond to the length of time for which the notification is displayed, or until a particular event is detected.
For example, suppose that the user’s device receives a first message from John Smith and displays a notification 340 as shown in FIGURE 3B. The user begins to compose a reply in text input field 330, but before he has a chance to cause the reply to be sent his device receives a second message from Jane Doe. The notification 340 of John Smith’s message may be replaced by one relating to Jane Doe’s, but if the UI 300 is immediately adapted to respond to Jane Doe’s message rather than John Smith’s then the user may not be readily able to send to John Smith the text that he has just entered. For this reason, it may be helpful to lock the adaptation of the UI in the event that the user begins to enter text into a text input field 330, until such time as the user causes that text to be sent as a reply message or performs another task with it (e.g. the primary function of the text input field). Alternatively, or additionally, the adaptation of the UI 300 may be locked based on the context of the user interface, so that the adaptation remains for as long as the focus of the UI 300 is on the text input field 330 (e.g. a text input cursor is present in the text input field), and the adaptation is unlocked only once the focus leaves the text input field 330, or a predetermined time after that occurs.
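The locking behaviour in this scenario can be sketched as a small state machine (all names invented for the example): a second incoming message does not retarget the adaptation once composition has begun.

```python
class AdaptationState:
    """Illustrative model of the adaptation lock described above."""

    def __init__(self):
        self.reply_target = None  # sender the reply would currently go to
        self.locked = False

    def on_message(self, sender):
        # A new notification retargets the adaptation only if it is unlocked.
        if not self.locked:
            self.reply_target = sender

    def on_user_typed(self):
        # Entering reply text locks the adaptation to the current target.
        if self.reply_target is not None:
            self.locked = True

    def on_reply_sent(self):
        # Sending the reply releases the lock and clears the target.
        sent_to = self.reply_target
        self.reply_target = None
        self.locked = False
        return sent_to

state = AdaptationState()
state.on_message("John Smith")
state.on_user_typed()
state.on_message("Jane Doe")   # arrives mid-composition; target stays locked
recipient = state.on_reply_sent()
```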
In some cases, a text input field may already contain textual content before the user begins to compose the reply. FIGURES 5A-D show an example of such a case, where a text input field 530 contains the original text “Meeting with the boss” (560). A notification of an incoming message is then received and the user moves the focus to the text input field to compose a response. In response, the original text 560 is removed from the text input field 530 and a text input cursor 570 is displayed. The user then enters the reply message text “Hi John. Everything is fine. You?” 580 and causes the reply message to be sent. Once the message is sent and the adaptation of the UI 300 is reversed, the original text 560 is returned to the text input field 530. Of course, if the composition of the reply message in the text input field 530 is cancelled (e.g. by deleting the reply message text 580 or performing another user input to cancel the reply), then the contents of the text input field 530 may be restored to the original text 560 at that point.
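The save-and-restore behaviour of FIGURES 5A-D can be sketched as follows; the class and its fields are illustrative only:

```python
class ReplyCompositionField:
    """Illustrative model: a field's pre-existing content is stashed while
    a reply is composed, and restored on send or cancel."""

    def __init__(self, original_text):
        self.text = original_text
        self._saved = None

    def begin_reply(self):
        self._saved = self.text  # stash e.g. "Meeting with the boss"
        self.text = ""           # clear the field for the reply

    def end_reply(self):
        # Whether the reply is sent or cancelled, the original content
        # is returned to the field.
        self.text, self._saved = self._saved, None

field = ReplyCompositionField("Meeting with the boss")
field.begin_reply()
field.text = "Hi John. Everything is fine. You?"
field.end_reply()
```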
FIGURE 6 shows an alternative example where the original text 660 of a text input field 600 may be retained as reply message text 680 is entered. In such cases (and as illustrated in FIGURE 6) the original text 660 and the reply message text 680 may be differentiated visually. In the example of FIGURE 6 the reply message text 680 is emboldened relative to the original text 660 and underlined, although other visual differentiations may be applied to the original text 660 and/or the reply message text 680 such as variations in colour and/or other formatting styles. Any suitable variation may be chosen that allows the user to differentiate between the original text 660 and the reply message text 680.
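One way to model the mixed content of FIGURE 6 is as a list of styled runs, from which the reply portion can be extracted and the differentiation later stripped. This representation is an assumption made purely for illustration:

```python
def reply_portion(runs):
    """runs: list of (text, is_reply) pairs in display order.
    Return only the visually differentiated reply message text."""
    return "".join(text for text, is_reply in runs if is_reply)

def strip_differentiation(runs):
    """Merge all runs into plain, undifferentiated field content."""
    return "".join(text for text, _ in runs)

runs = [("Meeting with the boss ", False),            # original text 660
        ("Hi John. Everything is fine. You?", True)]  # reply text 680
```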
In some examples, the user may enter text into a text input field whilst the UI 300 is adapted, but without the intention of using that text in a reply message. In such cases, when the adaptation is reverted the newly entered text may be retained in the text input field, and may be combined with whatever original text was already present in the text input field prior to the adaptation. Where the original and newly entered text are differentiated from one another (for example by different formatting styles) the differentiation may be removed. The system may determine that the user does not wish to use the newly entered text as a reply message from the fact that the user does not select the user-selectable option to cause the reply message to be sent (e.g. within a threshold period of time) , and instead e.g. cancels the notification, selects the user-selectable option to cause the text input field’s primary function to be performed, or based on any other suitable criteria.
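The outcome when the adaptation is reverted can be summarised as a small decision function (illustrative only): text that was sent as a reply leaves the field, while unsent text is retained and merged with the original content.

```python
def resolve_entered_text(original, entered, reply_sent):
    """Content of the input field once the adaptation is reverted."""
    if reply_sent:
        # Text used in the reply is removed; only original content remains.
        return original
    # Unsent text is retained and merged with the original content
    # (any visual differentiation between the two is dropped).
    return original + entered

kept = resolve_entered_text("Meeting with the boss ", "and the team",
                            reply_sent=False)
```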
In the examples described above the application UI 310 has not been the UI of a messaging application. Thus, the user is provided with the ability to reply to messages directly from within the UI of an application that has not been designed to compose and send messages. However, a similar approach may also be taken within messaging application UIs, for example to permit the user to respond to an e-mail message from within a UI designed solely for SMS messaging. Thus the user may be provided with a way to compose reply messages within a messaging application where the type of reply message is not a type that is offered by that messaging application. Examples of messaging applications include SMS applications, e-mail applications, instant messaging applications, and similar. Similarly, the user may be provided with a means of composing messages from within a messaging application that is designed to send messages of the appropriate type, but using user-selectable options that do not have the composition of such messages as their primary function; for example, the user of an SMS messaging application may be provided with a way to respond to incoming SMS messages from within a settings menu of the SMS application without needing to first exit that menu and navigate to a message composition UI.
In the above examples, a text input field has been used by the user to enter the content that is used to reply to the incoming message. However, in some examples other forms of input field may be used to capture other forms of content that can be included in the reply message.
FIGURE 7 shows an input field 700 that is not specifically a text input field. In this example the input field receives and marks a trace input from a user (e.g. via a touch screen or mouse), allowing the user to draw shapes within the input field 700. Shapes drawn by the user in the field can be used as content to be comprised by the reply message when it is sent. Here the user has traced a smiley face 710 into the input field, and there are many ways in which this content could be comprised by the reply message. For example, an image could be generated representing the entire input field 700 and attached to or embedded within the reply message in order that it is viewable by the recipient. Alternatively, an image could be generated that is cropped around the content that the user has entered (i.e. a portion of the input field that contains the smiley face 710). As a further alternative, the content of the input field 700 could be processed before it is added to the reply message; for example, object recognition could be used to detect the smiley face 710 and include it in the reply message by adding a representation of it in one or more characters (e.g. “:-)”).
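The crop-around-content option can be illustrated by computing the bounding box of the marked cells in a rasterised input field. The grid representation below is an assumption made for the example; a real implementation would operate on pixel data:

```python
def crop_to_content(grid):
    """Return the smallest sub-grid containing all marked (non-zero) cells."""
    rows = [i for i, row in enumerate(grid) if any(row)]
    cols = [j for j in range(len(grid[0])) if any(row[j] for row in grid)]
    if not rows:
        return []  # nothing drawn
    r0, r1 = rows[0], rows[-1]
    c0, c1 = cols[0], cols[-1]
    return [row[c0:c1 + 1] for row in grid[r0:r1 + 1]]

# A toy field with a small drawing surrounded by empty space.
field = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
cropped = crop_to_content(field)
```

The cropped region, rather than the whole field, would then be rendered to an image and attached to the reply.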
It is to be understood that any input field that is capable of receiving user-inputted content may be used to compose a reply to the incoming message, and that such content is not limited to ‘visual’ content such as text or drawings. Other examples of suitable content may include sound, haptic content (e.g. a pattern of taps input by the user), and any other suitable content.
As a further example, FIGURE 8 illustrates an audio input field with a primary function of recording a voice memo for storage on the user’s device. The audio input field comprises a visual representation of inputted audio 810 and a button 820 that can be pressed by the user to start and stop recording. When the user is notified of an incoming message, he can press the button 820 to record an audio message (for example) that he can subsequently cause to be sent in a reply message. This audio message could be sent as e.g. a sound file attachment to the message, or it may be automatically transcribed in order to generate textual content to send in the reply message.
FIGURE 9 illustrates a method 900 that may be implemented by the apparatus described above or by any other suitable means. The method begins at step 910. An input field is then caused 920 to be presented. A first user-selectable option is enabled 930 for performing a first function using content of the input field. Finally, in response to the reception of a message from a sender, a second user-selectable option is enabled 940 for sending a reply message to the sender, the reply message comprising content of the input field. The method then ends 950.
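The four steps of method 900 can be sketched as a minimal event flow; all names are illustrative and not taken from the patent:

```python
class Method900:
    """Illustrative walk-through of steps 920-940 of method 900."""

    def __init__(self):
        self.input_field = ""             # step 920: input field presented
        self.first_option_enabled = True  # step 930: primary-function option
        self.second_option_enabled = False
        self.sender = None

    def on_message_received(self, sender):
        # Step 940: reception of a message enables the reply option.
        self.sender = sender
        self.second_option_enabled = True

    def select_second_option(self):
        # The reply message comprises the content of the input field.
        assert self.second_option_enabled
        return (self.sender, self.input_field)

m = Method900()
m.input_field = "On my way"
m.on_message_received("John Smith")
reply = m.select_second_option()
```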
Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is that a user who receives an incoming message and wishes to reply to it immediately is able to do so with the minimum of user inputs and
with the minimum of disruption to his existing train of thought and workflow. If he is already working with an application when he is notified of the incoming message, he does not even need to leave or obscure that application’s user interface in order to compose a response. In some different approaches where the response is composed in a pop-up window that overlays the application’s UI, there would be a significantly greater level of interruption and disruption to the user.
Example embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on a removable memory, within internal memory or on a communication server. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a “computer-readable medium” may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with examples of a computer described and depicted in FIGURE 1. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
In some example embodiments, the invention may be implemented as an apparatus or device, for example a mobile communication device (e.g. a mobile telephone) , a PDA, a computer or other computing device, or a video game console.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.
Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described example embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims. Furthermore, although particular combinations of features have been described in the context of specific examples, it should be understood that any of the described features may be present in any combination that falls within the scope of the claims.
Claims (25)
- A method, comprising:causing an input field to be presented;enabling a first user-selectable option for performing a first function using content of the input field; andin response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
- The method of claim 1, wherein the input field comprises a text input field and the content of the input field comprises textual content.
- The method of any preceding claim, wherein the input field is presented by an application other than a messaging application.
- The method of any preceding claim, wherein the second option is presented at an operating system level.
- The method of any preceding claim wherein the second user-selectable option is presented to the user and wherein the presentation of the second user-selectable option identifies the sender.
- The method of any preceding claim, further comprising removing content from the input field in response to selection of the second user-selectable option.
- The method of any preceding claim, further comprising disabling the second user-selectable option if the second user-selectable option is not performed within a predetermined period of time after the enablement of the second user-selectable option.
- The method of any preceding claim, further comprising causing the second user-selectable option to be presented to the user only when it is enabled.
- Apparatus comprising:a processor; andmemory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following:causing an input field to be presented;enabling a first user-selectable option for performing a first function using content of the input field; andin response to the reception of a message from a sender, enabling a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
- The apparatus of claim 9, wherein the input field comprises a text input field and the content of the input field comprises textual content.
- The apparatus of claim 9 or 10, wherein the input field is presented by an application other than a messaging application.
- The apparatus of any of claims 9 to 11, wherein the second option is presented at an operating system level.
- The apparatus of any of claims 9 to 12, wherein the second user-selectable option is presented to the user and wherein the presentation of the second user-selectable option identifies the sender.
- The apparatus of any of claims 9 to 13, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to remove content from the input field in response to selection of the second user-selectable option.
- The apparatus of any of claims 9 to 14, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to disable the second user-selectable option if the second user-selectable option is not performed within a predetermined period of time after the enablement of the second user-selectable option.
- The apparatus of any of claims 9 to 15, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to cause the second user-selectable option to be presented to the user only when it is enabled.
- The apparatus of any of claims 9 to 16, being a mobile communication device.
- A computer program product comprising a computer-readable medium bearing computer program code embodied therein for use with a computer, the computer program code comprising:code for causing an input field to be presented;code for enabling a first user-selectable option for performing a first function using content of the input field; andcode for enabling, in response to the reception of a message from a sender, a second user-selectable option for sending a reply message to the sender, the reply message comprising content of the input field.
- The computer program product of claim 18, wherein the input field comprises a text input field and the content of the input field comprises textual content.
- The computer program product of claim 18 or 19, wherein the input field is presented by an application other than a messaging application.
- The computer program product of any of claims 18-20, wherein the second option is presented at an operating system level.
- The computer program product of any of claims 18-21, wherein the second user-selectable option is presented to the user and wherein the presentation of the second user-selectable option identifies the sender.
- The computer program product of any of claims 18-22, further comprising code for removing content from the input field in response to selection of the second user-selectable option.
- The computer program product of any of claims 18-23, further comprising code for disabling the second user-selectable option if the second user-selectable option is not performed within a predetermined period of time after the enablement of the second user-selectable option.
- The computer program product of any of claims 18-24, further comprising code for causing the second user-selectable option to be presented to the user only when it is enabled.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/085748 WO2018214066A1 (en) | 2017-05-24 | 2017-05-24 | Reply message comprising content of an input field |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018214066A1 true WO2018214066A1 (en) | 2018-11-29 |
Family
ID=64396176
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/085748 Ceased WO2018214066A1 (en) | 2017-05-24 | 2017-05-24 | Reply message comprising content of an input field |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018214066A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010118079A1 (en) * | 2009-04-10 | 2010-10-14 | Cellco Partnership D/B/A Verizon Wireless | Smart object based graphical user interface for a mobile terminal having a touch panel display |
| CN103092516A (en) * | 2013-01-11 | 2013-05-08 | 东莞宇龙通信科技有限公司 | Interface display method based on edit mode and communication terminal |
| CN105988657A (en) * | 2015-02-12 | 2016-10-05 | 阿里巴巴集团控股有限公司 | Message replying method and apparatus |
| CN106325682A (en) * | 2016-09-13 | 2017-01-11 | 宇龙计算机通信科技(深圳)有限公司 | Method and system for replying messages |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17910832; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17910832; Country of ref document: EP; Kind code of ref document: A1 |