WO2011079437A1 - Method and apparatus for receiving input - Google Patents
Method and apparatus for receiving input
- Publication number
- WO2011079437A1 (PCT/CN2009/076209)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input
- region
- input mode
- virtual screen
- mode
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0236—Character input methods using selection techniques to select from displayed items
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Definitions
- the present application relates generally to receiving input.
- An apparatus comprising a processor, memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode, causing display of a text editor region and an input region that indicates at least part of the virtual screen, receiving indication of a first input associated with the first region, determining a first input operation based, at least in part, on the first input and the first input mode, receiving indication of a second input associated with the second region, determining a second input operation based, at least in part, on the second input and the second input mode, and causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation is disclosed.
- a method comprising determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode, causing display of a text editor region and an input region that indicates at least part of the virtual screen, receiving indication of a first input associated with the first region, determining by a processor a first input operation based, at least in part, on the first input and the first input mode, receiving indication of a second input associated with the second region, determining a second input operation based, at least in part, on the second input and the second input mode, and causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation is disclosed.
- FIGURES 1A - 1C are diagrams illustrating input mode regions of a virtual screen according to an example embodiment
- FIGURES 2A - 2C are diagrams illustrating a display arrangement for receiving input according to an example embodiment
- FIGURE 3 is a flow diagram showing a set of operations for receiving input according to an example embodiment
- FIGURE 4 is a flow diagram showing a set of operations for receiving input according to an example embodiment
- FIGURES 5A - 5J are diagrams illustrating character recognition according to an example embodiment
- FIGURES 6A - 6D are diagrams illustrating a visual representation of a virtual keypad according to an example embodiment of the invention.
- FIGURES 7A - 7E are diagrams illustrating input associated with a touch display according to an example embodiment
- FIGURES 8A - 8D are diagrams illustrating a virtual screen according to an example embodiment.
- FIGURE 9 is a block diagram showing an apparatus according to an example embodiment.
- An embodiment of the invention and its potential advantages are understood by referring to FIGURES 1 through 9 of the drawings.
- a user may desire to provide input to an apparatus using various input modes.
- An input mode relates to the manner in which a user provides input to the apparatus.
- input mode may relate to voice input, writing input, touch input, keypad input, motion input, gesture input, optical input, and/or the like.
- the user may desire to change input mode.
- the user may desire to provide voice input, then to provide writing input.
- the user may desire to switch the input modes with minimal effort.
- the user may desire to utilize more than one input mode together.
- the user may desire to utilize keypad input and handwriting input together.
- the user may desire to perform a transition to the dual mode input with minimal effort.
- FIGURES 1A - 1C are diagrams illustrating input mode regions of a virtual screen, for example virtual screen 802 of FIGURE 8A, according to an example embodiment.
- the examples of FIGURES 1A - 1C are merely examples of input mode regions of a virtual screen, and do not limit the scope of the claims. For example, the number of input mode regions may vary, the arrangement of the input mode regions may vary, the position of input mode regions may vary, and/or the like.
- An input mode may relate to writing input, optical input, keypad input, voice input, gesture input, motion input, and/or the like.
- Writing input mode may comprise handwriting recognition, continuous handwriting recognition, character recognition, and/or the like, similar as described with reference to FIGURES 5A-5H.
- Optical input mode may comprise performing an operation on an image to, at least partially, generate at least one character, for example using optical character recognition (OCR).
- OCR optical character recognition
- the image may be received from a camera, a file, for example stored in non-volatile memory 42 of FIGURE 9, a receiver, such as receiver 16 of FIGURE 9, and/or the like.
- Keypad input may relate to use of a keypad, a virtual keypad, similar as described with reference to FIGURES 6A-6D, and/or the like.
- Voice input mode may relate to speaker-dependent voice recognition, speaker-independent voice recognition, and/or the like.
- Gesture input may relate to interpretation of user hand gestures, facial expressions, and/or the like. Gesture input may be from a camera, a video, proximity sensor, and/or the like.
- Motion input may relate to a user's movement of the apparatus. Motion input may be from a motion sensor, a visual sensor, and/or the like.
- an input mode region relates to a region associated with an input mode.
- the user may be able to perform input associated with the input mode region when the input mode region is caused to be displayed.
- the user may perform keypad input when at least part of a keypad input mode region is caused to be displayed.
- the user may perform writing recognition when a writing recognition input mode region is caused to be displayed.
- the user may perform handwriting on a touch screen in relation to position of input mode region, in relation to a position outside of the input mode region, and/or the like.
- FIGURE 1A is a diagram illustrating input mode regions of virtual screen 100, for example virtual screen 802 of FIGURE 8A, according to an example embodiment.
- Virtual screen 100 comprises input mode regions 101, 102, and 103.
- each input mode region may relate to a different input mode.
- FIGURE 1B is a diagram illustrating input mode regions of virtual screen 120, for example virtual screen 822 of FIGURE 8B, according to an example embodiment.
- Virtual screen 120 comprises input mode regions 121, 122, and 123.
- each input mode region may relate to a different input mode.
- FIGURE 1C is a diagram illustrating input mode regions of virtual screen 140, for example virtual screen 842 of FIGURE 8C, according to an example embodiment.
- Virtual screen 140 comprises input mode regions 141, 142, and 143.
- each input mode region may relate to a different input mode.
- arrangement and/or configuration of the input mode regions and/or virtual screen may be predetermined, automatically determined, user determined, and/or the like.
- an apparatus may receive indication of an input indicating a change in arrangement of the virtual screen.
- a user may provide input to change the shape, size, position, orientation, and/or the like, of an input region and/or the virtual screen.
- technical effects of user configuration of the virtual screen can, in some examples, include allowing the user to determine transitioning behavior between input modes and allowing the user to determine how input modes may be combined.
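- As a non-limiting illustration, the arrangement of input mode regions within a virtual screen could be modeled as follows. This Python sketch is not part of the disclosure; the names `Region`, `VirtualScreen`, and `mode_at` are hypothetical, and the sketch merely shows one way a position on the virtual screen could be resolved to an input mode.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """An input mode region: a rectangle within the virtual screen."""
    mode: str  # e.g. "keypad", "writing", "optical"
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

class VirtualScreen:
    """A virtual screen comprising one or more input mode regions."""
    def __init__(self, width: int, height: int, regions: list[Region]):
        self.width = width
        self.height = height
        self.regions = regions

    def mode_at(self, px: int, py: int) -> str | None:
        """Return the input mode of the region at (px, py), if any."""
        for region in self.regions:
            if region.contains(px, py):
                return region.mode
        return None

# Three side-by-side regions, loosely echoing FIGURE 1A.
screen = VirtualScreen(960, 240, [
    Region("keypad", 0, 0, 320, 240),
    Region("writing", 320, 0, 320, 240),
    Region("optical", 640, 0, 320, 240),
])
print(screen.mode_at(400, 100))  # -> "writing"
```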
- FIGURES 2A - 2C are diagrams illustrating a display arrangement for receiving input according to an example embodiment.
- the examples of FIGURES 2A - 2C are merely examples of display arrangements, and do not limit the scope of the claims.
- the display arrangement may vary in size, shape, orientation, content, color, font, language, and/or the like.
- the example display arrangements of FIGURES 2A-2C comprise an input region.
- an input region relates to a region, such as region 804 of FIGURE 8A, that relates to a part of the virtual screen, such as virtual screen 100 of FIGURE 1A, that comprises one or more input mode regions.
- the input region may indicate one or more input mode regions.
- the input region may indicate a single input mode region, two input mode regions simultaneously, three input mode regions simultaneously, and/or the like.
- configuration of an input mode may be based, at least in part, on a determination of size of the part of the input region associated with the input mode region.
- an apparatus may determine that a full keypad would be too small if provided in its entirety in the part of the input region associated with keypad input mode. In such an example, the apparatus may configure the keypad provided in the keypad input mode region so that a subset of keys is provided in the input region.
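- A minimal sketch of such a size-based configuration, assuming a hypothetical minimum usable key width and a 25-key full WuBi layout; the names, threshold, and layout sizes below are illustrative assumptions, not values given by the disclosure:

```python
# Full layout: the 25 letter keys of a WuBi keypad (Z is unused in WuBi).
FULL_WUBI_KEYS = [chr(c) for c in range(ord("A"), ord("Z"))]
# Reduced layout: the 5 basic WuBi strokes, as in FIGURE 2B.
BASIC_WUBI_STROKES = ["一", "丨", "丿", "丶", "乙"]

MIN_KEY_WIDTH_PX = 32  # hypothetical minimum width at which a key stays usable

def configure_keypad(region_width_px: int, keys_per_row: int = 13) -> list[str]:
    """Provide the full keypad if its keys stay usable; otherwise a subset of keys."""
    if region_width_px // keys_per_row >= MIN_KEY_WIDTH_PX:
        return FULL_WUBI_KEYS
    return BASIC_WUBI_STROKES

print(len(configure_keypad(480)))  # wide input region   -> 25 (full keypad)
print(configure_keypad(200))       # narrow input region -> the 5 basic strokes
```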
- the example display arrangements of FIGURES 2A-2C comprise an input mode indication region.
- an embodiment may omit the input mode indication region.
- the input mode indication region indicates the input mode associated with at least part of the input region.
- the input mode indication region may indicate the multiple input mode regions.
- the input mode indication region may also indicate delineation between the input mode regions. Without limiting the scope of the invention in any way, some examples of an input mode indication region can have a technical effect of providing a user with a simple and intuitive way of recognizing input mode regions.
- the example display arrangements of FIGURES 2A-2C comprise a text editor region.
- the text editor region may indicate characters that have been input, characters that are comprised in a document being edited, and/or the like.
- the user may modify edit position, character selection, and/or the like.
- the user may perform input to transition the input mode region indicated in the input region.
- the input may relate to a touch input, keypad input, motion input, and/or the like. Such transition may relate to a position change of the part of the virtual screen indicated by the input region.
- the input may be performed in relation to the input region, the input mode indication region, and/or the like.
- the user may perform a touch input, such as touch input 740 of FIGURE 7C, in relation to position of the input region for transition of the input mode region.
- the user may perform a gesture input, such as a scrolling related input, for example a tilting related input, a flicking related input, and/or the like, in association with the input region and/or the input mode indication region for the transition.
- the user may perform a keypad input, such as a directional button press, in association with the input mode indication region for the transition input.
- the input modes associated with the indicated input mode regions may be used in conjunction with each other.
- a keypad input mode may be used to enhance recognition results of writing input mode.
- a key press may indicate part of a character, part of a word, and/or the like.
- voice input mode may be used to enhance optical input mode.
- the results of optical recognition may be compared to results of speech recognition to improve accuracy of character determination.
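- One hedged illustration of using two input modes in conjunction: candidate characters from each recognizer could carry confidence scores, and the apparatus could select the candidate with the highest combined confidence. The scoring scheme and function name below are hypothetical, not a method defined by the disclosure.

```python
def combine_recognizers(ocr_scores: dict[str, float],
                        voice_scores: dict[str, float]) -> str:
    """Pick the candidate whose combined confidence across both modes is highest.

    Each argument maps a candidate character to a recognizer confidence in [0, 1].
    """
    candidates = set(ocr_scores) | set(voice_scores)
    return max(candidates,
               key=lambda c: ocr_scores.get(c, 0.0) + voice_scores.get(c, 0.0))

# OCR alone is ambiguous between the letter "O" and the digit "0";
# speech recognition of the same content breaks the tie.
ocr = {"O": 0.48, "0": 0.47}
voice = {"O": 0.90, "Q": 0.10}
print(combine_recognizers(ocr, voice))  # -> "O"
```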
- FIGURE 2A is a drawing illustrating a display arrangement 200 comprising a text editor region 201, an input region 202, and an input mode indication region 203.
- Text editor region 201 indicates 4 characters. The characters may relate to existing text, text associated with input, and/or the like.
- Input region 202 indicates a single input mode, which is a keypad input mode. Even though the keypad indicated in the input region is a WuBi keypad, other keypads may be utilized, such as a QWERTY keypad, numeric keypad, and/or the like.
- Input mode indication region 203 indicates keypad input.
- FIGURE 2B is a drawing illustrating a display arrangement 220 comprising a text editor region 221, an input region 222, and an input mode indication region 223.
- Text editor region 221 indicates 2 characters. The characters may relate to existing text, text associated with input, and/or the like.
- Input region 222 indicates keypad input mode region 224 and writing input mode region 225.
- the virtual keypad indicated in input mode region 224 relates to the 5 basic WuBi strokes.
- the apparatus may determine to provide the reduced WuBi keypad based on the reduced size of the part of input region 222 associated with keypad input mode region 224.
- Input mode indication region 223 indicates keypad input and writing input with a delineation corresponding to the delineation of input region 222 between keypad input mode region 224 and writing input mode region 225.
- the apparatus may utilize the keypad input mode in conjunction with the writing input mode. For example, input associated with the keypad may be utilized to improve the recognition of one or more written characters. In such an example, the apparatus may reduce the number of possible character matches associated with a written character by identifying only characters that are based on a stroke indicated by keypad input.
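- A sketch of such candidate reduction, assuming a hypothetical index of the first WuBi stroke of each candidate character; the stroke assignments shown are illustrative only:

```python
# Hypothetical index mapping candidate characters to their first WuBi stroke.
FIRST_STROKE = {"大": "一", "天": "一", "小": "丨", "人": "丿"}

def filter_by_stroke(candidates: list[str], stroke: str) -> list[str]:
    """Keep only the handwriting candidates whose first stroke matches the key press."""
    return [c for c in candidates if FIRST_STROKE.get(c) == stroke]

# Handwriting recognition returns several plausible matches for a written
# character; a press of the "丿" stroke key narrows them to one.
print(filter_by_stroke(["大", "天", "人"], "丿"))  # -> ["人"]
```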
- FIGURE 2C is a drawing illustrating a display arrangement 240 comprising a text editor region 241, an input region 242, and an input mode indication region 243.
- Text editor region 241 indicates 4 words comprising characters. The characters may relate to existing text, text associated with input, and/or the like.
- Input region 242 indicates keypad input mode region 244 and optical input mode region 245.
- the virtual keypad indicated in input mode region 244 may relate to a QWERTY keypad, WuBi keypad, numeric keypad, and/or the like.
- the information indicated in optical input mode region 245 may relate to information from a camera, scanner, a stored image, a received image, and/or the like.
- Input mode indication region 243 indicates keypad input and optical input with a delineation corresponding to the delineation of input region 242 between keypad input mode region 244 and optical input mode region 245.
- the apparatus may utilize the keypad in conjunction with the optical information to determine characters.
- the apparatus may utilize keypad related input to improve recognition accuracy of the optical input mode.
- for example, with keypad related input, the word "guaranteed" may be determined even where the quality of the optical information is poor.
- FIGURE 3 is a flow diagram showing a set of operations 300 for receiving input according to an example embodiment.
- An apparatus, for example electronic device 10 of FIGURE 9 or a portion thereof, may utilize the set of operations 300.
- the apparatus may comprise means, including, for example processor 20 of FIGURE 9, for performing the operations of FIGURE 3.
- an apparatus, for example device 10 of FIGURE 9, is transformed by having memory, for example memory 42 of FIGURE 9, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 9, cause the apparatus to perform set of operations 300.
- the apparatus determines a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode similar as described with reference to FIGURES 1A-1C.
- the apparatus causes display of a text editor region and an input region indicating at least part of the virtual screen similar as described with reference to FIGURES 2A-2C.
- the apparatus receives indication of a first input associated with the first region.
- the apparatus may receive indication of the first input by retrieving information from one or more memories, such as non-volatile memory 42 of FIGURE 9, receiving one or more indications of the first input from a part of the apparatus, such as a touch display, for example display 28 of FIGURE 9, receiving indication of the first input from a receiver, such as receiver 16 of FIGURE 9, receiving the first input from a keypad, such as keypad 30 of FIGURE 9, receiving the first input from a camera module, such as camera module 36 of FIGURE 9, and/or the like.
- the apparatus may receive the indication of the first input from a different apparatus, such as a mouse, a keyboard, an external touch display, an external camera, and/or the like.
- the apparatus determines a first input operation based, at least in part, on the first input and the first input mode.
- the input operation may relate to recognizing input, determining at least one character based, at least in part, on the input, storing the input for later use, and/or the like.
- the determination of the input operation may be similar as described with reference to FIGURES 1A-1C and FIGURES 2A-2C.
- the apparatus receives indication of a second input associated with the second region.
- the receiving of the second input may be similar as described with reference to block 303.
- the apparatus determines a second input operation based, at least in part, on the second input and the second input mode. Determining the second operation may be similar as described with reference to block 304. Determination of the second input operation may depend upon the first input operation similar as described with reference to FIGURES 2B- 2C. For example, the apparatus may determine the second input operation based at least in part on the first input operation and the second input, such as using input modes in conjunction with each other.
- the apparatus causes display of at least one character in the text editor region based, at least in part, on the first operation and the second operation.
- Display of the at least one character may be based upon the first operation and the second operation independently and/or in conjunction.
- the display of the at least one character may supplement, modify, replace, and/or the like, characters already present in the text editor region, if any.
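- The sequence of blocks 301-307 might be sketched as follows; `determine_operation` and `receive_input_flow` are hypothetical names, and the per-mode recognizers are trivial stand-ins (a real writing input mode would perform handwriting recognition), so this illustrates the flow rather than any particular recognition technique:

```python
def determine_operation(mode: str, raw_input: str) -> str:
    """Hypothetical per-mode input operation (blocks 304 and 306)."""
    recognizers = {
        "keypad": lambda s: s,                   # key presses pass through unchanged
        "writing": lambda s: s.strip().lower(),  # trivial stand-in for handwriting recognition
    }
    return recognizers[mode](raw_input)

def receive_input_flow() -> str:
    # Block 301: a virtual screen with two input mode regions.
    first_mode, second_mode = "keypad", "writing"
    # (Block 302 would cause display of the text editor region and input region.)
    # Blocks 303-304: first input and first input operation.
    first_op = determine_operation(first_mode, "B")
    # Blocks 305-306: second input and second input operation; the second
    # determination may also take the first operation into account.
    second_op = determine_operation(second_mode, " ig ")
    # Block 307: at least one character displayed based on both operations.
    return first_op + second_op

print(receive_input_flow())  # -> "Big"
```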
- FIGURE 4 is a flow diagram showing a set of operations 400 for receiving input according to an example embodiment.
- An apparatus, for example electronic device 10 of FIGURE 9 or a portion thereof, may utilize the set of operations 400.
- the apparatus may comprise means, including, for example processor 20 of FIGURE 9, for performing the operations of FIGURE 4.
- an apparatus, for example device 10 of FIGURE 9, is transformed by having memory, for example memory 42 of FIGURE 9, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 9, cause the apparatus to perform set of operations 400.
- the apparatus determines a virtual screen comprising a first region associated with a first input mode, a second region associated with a second input mode, and a third region associated with a third input mode similar as described with reference to block 301 of FIGURE 3.
- the apparatus causes display of a text editor region and an input region indicating at least part of the virtual screen similar as described with reference to block 302 of FIGURE 3.
- the apparatus receives indication of a transition input associated with at least part of the first region and at least part of the second region.
- the apparatus may receive indication of the transition input by retrieving information from one or more memories, such as non-volatile memory 42 of FIGURE 9, receiving one or more indications of the first input from a part of the apparatus, such as a touch display, for example display 28 of FIGURE 9, a receiver, such as receiver 16 of FIGURE 9, a keypad, such as keypad 30 of FIGURE 9, and/or the like.
- the apparatus may receive the indication of the transition input from a different apparatus, such as a mouse, a keyboard, an external touch display, an external camera, and/or the like.
- the transition input may be similar as described with reference to FIGURES 2A-2C.
- the apparatus causes display of at least part of the virtual screen corresponding to the transition input similar as described with reference to FIGURES 2A-2C.
- the apparatus receives indication of a first input associated with the first region similar as described with reference to block 303 of FIGURE 3.
- the apparatus determines a first input operation based, at least in part, on the first input and the first input mode similar as described with reference to block 304 of FIGURE 3.
- the apparatus receives indication of a second input associated with the second region similar as described with reference to block 305 of FIGURE 3.
- the apparatus determines a second input operation based, at least in part, on the second input and the second input mode similar as described with reference to block 306 of FIGURE 3.
- the apparatus receives indication of a third input associated with the third region. The receiving of the third input may be similar as described with reference to block 407.
- the apparatus determines a third input operation based, at least in part, on the third input and the third input mode. Determining the third operation may be similar as described with reference to block 408. Determination of the third input operation may depend upon the first input operation and/or the second operation similar as described with reference to FIGURES 2B-2C. For example, the apparatus may determine the third input operation based at least in part on the first input operation and/or the second input operation, such as using input modes in conjunction with each other.
- the apparatus causes display of at least one character in the text editor region based, at least in part, on the first operation, the second operation, and the third operation.
- Display of the at least one character may be based upon the first operation, the second operation, and the third operation independently and/or in conjunction.
- the display of the at least one character may supplement, modify, replace, and/or the like, characters already present in the text editor region, if any.
- FIGURES 5A - 5J are diagrams illustrating character recognition according to an example embodiment.
- the examples of FIGURES 5A - 5J are merely examples of character recognition, and do not limit the scope of the claims.
- character images may vary with respect to language, characters, orientation, size, alignment, and/or the like.
- the characters may relate to Arabic characters, Latin characters, Indic characters, Japanese characters, and/or the like.
- a character image relates to graphical information that represents at least one character.
- a character image may relate to an image of a written word.
- a character image may relate to a part and/or the entirety of an image.
- the character image may relate to one or more written characters, copied characters, photographed characters, scanned characters, and/or the like.
- At least one character may be determined based, at least in part on the character image.
- an apparatus may perform handwriting recognition, continuous handwriting recognition, optical character recognition (OCR), and/or the like on a character image to determine one or more characters.
- OCR optical character recognition
- the accuracy of determination of the at least one character may vary across apparatuses and does not limit the claims set forth herein.
- a first apparatus may have less accurate OCR than a second apparatus.
- FIGURE 5A is a diagram illustrating a character image according to an example embodiment.
- the character image represents three letters that form the word "Big."
- FIGURE 5B is a diagram of recognized characters represented by the character image of FIGURE 5A.
- the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9.
- the characters are "Big."
- FIGURE 5C is a diagram illustrating a character image according to an example embodiment.
- the character image represents letters and punctuation that form "The dog is big."
- FIGURE 5D is a diagram of characters represented by the character image of FIGURE 5C.
- the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9.
- the characters are "The dog is big."
- FIGURE 5E is a diagram illustrating a character image according to an example embodiment.
- the character image represents five script letters that form the word "hello."
- FIGURE 5F is a diagram of characters represented by the character image of FIGURE 5E.
- the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9.
- the characters are "hello."
- FIGURE 5G is a diagram illustrating a character image according to an example embodiment.
- the character image represents three letters that form the word " ⁇ ".
- FIGURE 5H is a diagram of characters represented by the character image of FIGURE 5G.
- the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9.
- the characters are
- FIGURES 6A - 6D are diagrams illustrating a visual representation of a virtual keypad according to an example embodiment of the invention.
- a virtual keypad is a representation of one or more virtual keys.
- a virtual key may relate to a character, such as a number, letter, symbol, and/or the like, a part of a character, a control, such as shift, alt, command, function, and/or the like, or something similar.
- the position of touch display input in relation to position of one or more virtual keys may influence input information associated with the touch display input.
- a tap input such as tap input 700 of FIGURE 7A
- a touch display input at a position associated with a virtual key for a "Z" character may provide input information associated with the "Z" character.
- the number, shape, position, and/or the like, of virtual keys within a virtual keypad may vary. For example, one virtual keypad may have 17 round adjacent virtual keys, while a different virtual keypad may have 50 rectangular non-adjacent virtual keys.
- the size of virtual keys may vary. For example, one virtual key of a virtual keypad may be larger than a different virtual key of the same virtual keypad.
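- Hit-testing against virtual keys of varying shape and size could look like the following sketch; `VirtualKey` and `key_at` are hypothetical names, and the square and circular shapes loosely echo FIGURES 6A-6C:

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualKey:
    """A virtual key: a labeled character or control with a centre, size, and shape."""
    label: str
    cx: float
    cy: float
    half_size: float   # half the side length, or the radius for circular keys
    shape: str         # "square" or "circle"

    def hit(self, x: float, y: float) -> bool:
        if self.shape == "circle":
            return math.hypot(x - self.cx, y - self.cy) <= self.half_size
        return (abs(x - self.cx) <= self.half_size
                and abs(y - self.cy) <= self.half_size)

def key_at(keypad: list[VirtualKey], x: float, y: float) -> str | None:
    """Return the label of the key under a touch position, if any.

    Keys need not be adjacent or uniformly sized, as in FIGURE 6D.
    """
    for key in keypad:
        if key.hit(x, y):
            return key.label
    return None

keypad = [VirtualKey("4", cx=20, cy=20, half_size=15, shape="square"),
          VirtualKey("8", cx=60, cy=20, half_size=12, shape="circle")]
print(key_at(keypad, 62, 22))  # -> "8"
```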
- FIGURE 6A illustrates a virtual keypad 600 according to an example embodiment of the invention.
- virtual keypad 600 comprises 48 adjacent square virtual keys.
- virtual keys 602, 604, and 606 relate to characters and/or controls.
- virtual key 602 may relate to a "4" character
- virtual key 604 may relate to an "I" character
- virtual key 606 may relate to an "Enter" control.
- FIGURE 6B illustrates a virtual keypad 620 according to an example embodiment of the invention.
- virtual keypad 620 comprises 12 adjacent square virtual keys.
- virtual keys 622, 624, and 626 relate to characters and/or controls.
- virtual key 622 may relate to a "4" character
- virtual key 624 may relate to an "8" character
- virtual key 626 may relate to a "#" character.
- FIGURE 6C illustrates a virtual keypad 640 according to an example embodiment of the invention.
- virtual keypad 640 comprises 30 adjacent circular virtual keys.
- virtual keys 642, 644, and 646 relate to characters and/or controls.
- virtual key 642 may relate to a "D" character
- virtual key 644 may relate to a "G" character
- virtual key 646 may relate to a "?" character.
- FIGURE 6D illustrates a virtual keypad 660 according to an example embodiment of the invention.
- virtual keypad 660 comprises 8 non-adjacent unevenly distributed octagonal virtual keys.
- virtual keys 662, 664, and 666 relate to characters and/or controls.
- virtual key 662 may relate to a "+" character
- virtual key 664 may relate to a "$" character
- virtual key 666 may relate to a "*" character.
- FIGURES 7A - 7E are diagrams illustrating input associated with a touch display, for example display 28 of FIGURE 9, according to an example embodiment.
- In FIGURES 7A - 7E, a circle represents an input related to contact with a touch display, two crossed lines represent an input related to releasing a contact from a touch display, and a line represents input related to movement on a touch display.
- Although the examples of FIGURES 7A - 7E indicate continuous contact with a touch display, there may be a part of the input that fails to make direct contact with the touch display. Under such circumstances, the apparatus may, nonetheless, determine that the input is a continuous stroke input. For example, the apparatus may utilize proximity information, for example information relating to nearness of an input implement to the touch display, to determine part of a touch input.
- input 700 relates to receiving contact input 702 and receiving a release input 704.
- contact input 702 and release input 704 occur at the same position.
- an apparatus utilizes the time between receiving contact input 702 and release input 704. For example, the apparatus may interpret input 700 as a tap for a short time between contact input 702 and release input 704, as a press for a longer time between contact input 702 and release input 704, and/or the like.
- a tap input may induce one operation, such as selecting an item
- a press input may induce another operation, such as performing an operation on an item.
- a tap and/or press may relate to a user selected text position.
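- A minimal sketch of distinguishing a tap from a press by the time between contact input and release input; the 0.3-second threshold is a hypothetical tuning value, not one given by the disclosure:

```python
TAP_MAX_S = 0.3  # hypothetical threshold separating a tap from a press, in seconds

def classify_touch(contact_time_s: float, release_time_s: float) -> str:
    """Interpret a same-position contact/release pair as a tap or a press."""
    return "tap" if release_time_s - contact_time_s < TAP_MAX_S else "press"

print(classify_touch(0.00, 0.12))  # -> "tap"   (e.g. selecting an item)
print(classify_touch(0.00, 0.80))  # -> "press" (e.g. performing an operation on an item)
```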
- input 720 relates to receiving contact input 722, a movement input 724, and a release input 726.
- Input 720 relates to a continuous stroke input.
- contact input 722 and release input 726 occur at different positions.
- Input 720 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like.
- an apparatus interprets input 720 based at least in part on the speed of movement 724. For example, if input 720 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like.
- an apparatus interprets input 720 based at least in part on the distance between contact input 722 and release input 726. For example, if input 720 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 722 and release input 726.
- An apparatus may interpret the input before receiving release input 726. For example, the apparatus may evaluate a change in the input, such as speed, position, and/or the like. In such an example, the apparatus may perform one or more determinations based upon the change in the touch input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
- input 740 relates to receiving contact input 742, a movement input 744, and a release input 746 as shown.
- Input 740 relates to a continuous stroke input.
- contact input 742 and release input 746 occur at different positions.
- Input 740 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like.
- an apparatus interprets input 740 based at least in part on the speed of movement 744. For example, if input 740 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like.
- an apparatus interprets input 740 based at least in part on the distance between contact input 742 and release input 746. For example, if input 740 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 742 and release input 746. In still another example embodiment, the apparatus interprets the position of the release input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
- input 760 relates to receiving contact input 762, and a movement input 764, where contact is released during movement.
- Input 760 relates to a continuous stroke input.
- Input 760 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like.
- an apparatus interprets input 760 based at least in part on the speed of movement 764. For example, if input 760 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like.
- an apparatus interprets input 760 based at least in part on the distance associated with the movement input 764. For example, if input 760 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance of the movement input 764 from the contact input 762 to the release of contact during movement.
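- One way the speed-dependent interpretation described for inputs 720, 740, and 760 could be realized is to scale the pan by the speed of the movement input; the gain constant and function name are hypothetical:

```python
import math

def pan_amount(dx: float, dy: float, duration_s: float,
               speed_gain: float = 0.5) -> tuple[float, float]:
    """Scale a drag into a pan: small for a slow movement, large for a fast one.

    dx, dy give the displacement of the movement input in pixels, and
    duration_s the time from contact input to release (or loss of contact).
    speed_gain is a hypothetical tuning constant.
    """
    speed = math.hypot(dx, dy) / max(duration_s, 1e-6)  # pixels per second
    factor = 1.0 + speed_gain * speed / 1000.0
    return dx * factor, dy * factor

print(pan_amount(100, 0, 1.0))  # slow drag  -> (105.0, 0.0): modest pan
print(pan_amount(100, 0, 0.1))  # fast flick -> (150.0, 0.0): larger pan
```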
- an apparatus may receive multiple touch inputs at coinciding times. For example, there may be a tap input at a position and a different tap input at a different location during the same time. In another example there may be a tap input at a position and a drag input at a different position.
- An apparatus may interpret the multiple touch inputs separately, together, and/or a combination thereof. For example, an apparatus may interpret the multiple touch inputs in relation to each other, such as the distance between them, the speed of movement with respect to each other, and/or the like.
- input 780 relates to receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792.
- Input 780 relates to two continuous stroke inputs. In this example, contact inputs 782 and 788, and release inputs 786 and 792, occur at different positions.
- Input 780 may be characterized as a multiple touch input. Input 780 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, to indicating one or more user selected text positions and/or the like.
- an apparatus interprets input 780 based at least in part on the speed of movements 784 and 790.
- an apparatus interprets input 780 based at least in part on the distance between contact inputs 782 and 788 and release inputs 786 and 792. For example, if input 780 relates to a scaling operation, such as resizing a box, the scaling may relate to the collective distance between contact inputs 782 and 788 and release inputs 786 and 792.
- the timing associated with the apparatus receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792 varies.
- the apparatus may receive contact input 782 before contact input 788, after contact input 788, concurrent to contact input 788, and/or the like.
- the apparatus may or may not utilize the related timing associated with the receiving of the inputs.
- the apparatus may utilize an input received first by associating the input with a preferential status, such as a primary selection point, a starting position, and/or the like.
- the apparatus may utilize non-concurrent inputs as if the apparatus received the inputs concurrently.
- the apparatus may utilize a release input received first the same way that the apparatus would utilize the same input if the apparatus had received the input second.
- for example, the apparatus may treat a first touch input comprising a contact input, a movement input, and a release input the same as a second touch input comprising a contact input, a movement input, and a release input, even though they may differ in the position of the contact input and the position of the release input.
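- As an illustrative sketch of interpreting multiple touch inputs in relation to each other, a scaling operation could relate to the change in distance between two concurrent strokes; the function below is hypothetical and assumes both strokes are available as contact/release position pairs:

```python
import math

Point = tuple[float, float]

def pinch_scale(contact_a: Point, release_a: Point,
                contact_b: Point, release_b: Point) -> float:
    """Relate a scaling operation to the change in distance between two strokes."""
    def dist(p: Point, q: Point) -> float:
        return math.hypot(p[0] - q[0], p[1] - q[1])

    before = dist(contact_a, contact_b)  # separation at the contact inputs
    after = dist(release_a, release_b)   # separation at the release inputs
    return after / max(before, 1e-6)

# Two concurrent strokes moving apart: the scaled object grows by 50%.
print(pinch_scale((100, 100), (70, 100), (200, 100), (220, 100)))  # -> 1.5
```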
- FIGURES 8A - 8D are diagrams illustrating a virtual screen according to an example embodiment.
- the examples of FIGURES 8A-8D are merely examples of possible virtual screens and regions caused to be displayed, and do not limit the scope of the claims.
- a virtual screen and/or a region caused to be displayed may vary by size, shape, orientation, and/or the like.
- FIGURE 8A is a diagram illustrating a virtual screen wider than the part of the virtual screen caused to be displayed, for example on display 28 of FIGURE 9.
- region 804 relates to a part of virtual screen 802 that is caused to be displayed.
- the virtual screen 802 may represent an image, text, a group of items, a list, a work area, map information, and/or the like.
- virtual screen 802 may be used for the image.
- region 804 may be panned left or right to change the part of the virtual screen 802 that is caused to be displayed.
- changing the part of the virtual screen 802 that is caused to be displayed may be performed when input is received.
- region 804 may be prevented from panning beyond one or more boundaries of virtual screen 802.
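- A sketch of panning clamped to the virtual screen boundary; the pixel dimensions for region 804 and virtual screen 802 are assumptions for illustration, not values taken from the figures:

```python
def clamp_pan(region_x: int, region_width: int, screen_width: int, dx: int) -> int:
    """Pan the displayed region horizontally, clamped to the virtual screen edges."""
    return max(0, min(region_x + dx, screen_width - region_width))

# Assume region 804 is 320 px wide on a 960 px wide virtual screen 802.
print(clamp_pan(0, 320, 960, -50))    # -> 0   (cannot pan past the left boundary)
print(clamp_pan(600, 320, 960, 100))  # -> 640 (clamped at the right boundary)
```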
- FIGURE 8B is a diagram illustrating a virtual screen taller than the part of the virtual screen caused to be displayed, for example on display 28 of FIGURE 9.
- region 824 relates to a part of virtual screen 822 that is caused to be displayed.
- the virtual screen 822 may represent an image, text, a group of items, a list, a work area, map information, and/or the like.
- group of items such as a group of icons
- virtual screen 822 may be used for the group of icons.
- region 824 may be panned up or down to change the part of the virtual screen 822 that is caused to be displayed.
- FIGURE 8C is a diagram illustrating a virtual screen wider and taller than the part of the virtual screen caused to be displayed, for example on display 28 of FIGURE 9.
- region 844 relates to a part of virtual screen 842 that is caused to be displayed.
- the virtual screen 842 may represent an image, text, a group of items, a list, a work area, map information, and/or the like.
- virtual screen 842 may be used for the map information.
- region 844 may be panned left, right, up, and/or down to change the part of the virtual screen 842 that is caused to be displayed.
- changing the part of the virtual screen 842 that is caused to be displayed may be performed when input is received.
- region 844 may be prevented from panning beyond one or more boundaries of virtual screen 842.
- FIGURE 8D is a diagram illustrating a virtual screen that is the same size as the part of the virtual screen caused to be displayed.
- region 864 relates to a part of virtual screen 862 that is caused to be displayed.
- the virtual screen 862 may represent an image, text, a group of items, a list, a work area, map information, and/or the like. For example, if it is determined to be caused to display an entire work area, virtual screen 862 may be used for the work area.
- FIGURE 9 is a block diagram showing an apparatus, such as an electronic device 10, according to an example embodiment.
- an electronic device as illustrated and hereinafter described is merely illustrative of an electronic device that could benefit from embodiments of the invention and, therefore, should not be taken to limit the scope of the invention.
- While one embodiment of the electronic device 10 is illustrated and will be hereinafter described for purposes of example, other types of electronic devices, such as, but not limited to, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, media players, cameras, video recorders, global positioning system (GPS) devices and other types of electronic systems, may readily employ embodiments of the invention.
- PDAs portable digital assistants
- GPS global positioning system
- the apparatus of an example embodiment need not be the entire electronic device, but may be a component or group of components of the electronic device in other example embodiments.
- devices may readily employ embodiments of the invention regardless of their intent to provide mobility.
- While embodiments of the invention are described in conjunction with mobile communications applications, it should be understood that embodiments of the invention may be utilized in conjunction with a variety of other applications, both in the mobile communications industries and outside of the mobile communications industries.
- the electronic device 10 may comprise an antenna (or multiple antennae), a wired connector, and/or the like in operable communication with a transmitter 14 and a receiver 16.
- the electronic device 10 may further comprise a processor 20 or other processing circuitry that provides signals to and receives signals from the transmitter 14 and receiver 16, respectively.
- the signals may comprise signaling information in accordance with a communications interface standard, user speech, received data, user generated data, and/or the like.
- the electronic device 10 may operate with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the electronic device 10 may operate in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like.
- the electronic device 10 may operate in accordance with wireline protocols, such as Ethernet, digital subscriber line (DSL), asynchronous transfer mode (ATM), second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), Global System for Mobile communications (GSM), and IS-95 (code division multiple access (CDMA)), with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA), or with fourth-generation (4G) wireless communication protocols, wireless networking protocols, such as 802.11, short-range wireless protocols, such as Bluetooth, and/or the like.
- wireline protocols such as Ethernet, digital subscriber line (DSL), asynchronous transfer mode (ATM), second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), Global System for Mobile communications (GSM), and IS-95 (code division multiple access (CDMA))
- the term 'circuitry' refers to all of the following: (a) hardware-only implementations (such as implementations in only analog and/or digital circuitry), (b) combinations of circuits and software and/or firmware, such as a combination of processor(s), or portions of processor(s)/software including digital signal processor(s), software, and memory(ies), that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) circuits, such as a microprocessor(s) or portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
- This definition of 'circuitry' applies to all uses of this term in this application, including in any claims.
- circuitry would also cover an implementation of merely a processor, multiple processors, or portion of a processor and its (or their) accompanying software and/or firmware.
- circuitry would also cover, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a cellular network device or other network device.
- Processor 20 may comprise means, such as circuitry, for implementing audio, video, communication, navigation, logic functions, and/or the like, as well as for implementing embodiments of the invention including, for example, one or more of the functions described in conjunction with FIGURES 1-9.
- processor 20 may comprise means, such as a digital signal processor device, a microprocessor device, various analog to digital converters, digital to analog converters, processing circuitry and other support circuits, for performing various functions including, for example, one or more of the functions described in conjunction with FIGURES 1 -9.
- the apparatus may perform control and signal processing functions of the electronic device 10 among these devices according to their respective capabilities.
- the processor 20 thus may comprise the functionality to encode and interleave messages and data prior to modulation and transmission.
- the processor 20 may additionally comprise an internal voice coder, and may comprise an internal data modem. Further, the processor 20 may comprise functionality to operate one or more software programs, which may be stored in memory and which may, among other things, cause the processor 20 to implement at least one embodiment including, for example, one or more of the functions described in conjunction with FIGURES 1 - 9. For example, the processor 20 may operate a connectivity program, such as a conventional internet browser.
- the connectivity program may allow the electronic device 10 to transmit and receive internet content, such as location-based content and/or other web page content, according to a Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP), and/or the like, for example.
- TCP Transmission Control Protocol
- IP Internet Protocol
- UDP User Datagram Protocol
- IMAP Internet Message Access Protocol
- POP Post Office Protocol
- SMTP Simple Mail Transfer Protocol
- WAP Wireless Application Protocol
- HTTP Hypertext Transfer Protocol
- the electronic device 10 may comprise a user interface for providing output and/or receiving input.
- the electronic device 10 may comprise an output device such as a ringer, a conventional earphone and/or speaker 24, a microphone 26, a display 28, and/or a user input interface, which are coupled to the processor 20.
- the user input interface, which allows the electronic device 10 to receive data, may comprise means, such as one or more devices that may allow the electronic device 10 to receive data, such as a keypad 30, a touch display, for example if display 28 comprises touch capability, and/or the like.
- the touch display may be configured to receive input from a single point of contact, multiple points of contact, and/or the like.
- the touch display and/or the processor may determine input based on position, motion, speed, contact area, and/or the like.
- the electronic device 10 may include any of a variety of touch displays including those that are configured to enable touch recognition by any of resistive, capacitive, infrared, strain gauge, surface wave, optical imaging, dispersive signal technology, acoustic pulse recognition or other techniques, and to then provide signals indicative of the location and other parameters associated with the touch. Additionally, the touch display may be configured to receive an indication of an input in the form of a touch event which may be defined as an actual physical contact between a selection object (e.g., a finger, stylus, pen, pencil, or other pointing device) and the touch display.
- a selection object e.g., a finger, stylus, pen, pencil, or other pointing device
- a touch event may be defined as bringing the selection object in proximity to the touch display, hovering over a displayed object or approaching an object within a predefined distance, even though physical contact is not made with the touch display.
- a touch input may comprise any input that is detected by a touch display including touch events that involve actual physical contact and touch events that do not involve physical contact but that are otherwise detected by the touch display, such as a result of the proximity of the selection object to the touch display.
- Display 28 may display two-dimensional information, three-dimensional information, and/or the like.
- the keypad 30 may comprise numeric (for example, 0-9) keys, symbol keys (for example, #, *), alphabetic keys, and/or the like for operating the electronic device 10.
- the keypad 30 may comprise a conventional QWERTY keypad arrangement.
- the keypad 30 may also comprise various soft keys with associated functions.
- the electronic device 10 may comprise an interface device such as a joystick or other user input interface.
- the electronic device 10 further comprises a battery 34, such as a vibrating battery pack, for powering various circuits that are required to operate the electronic device 10, as well as optionally providing mechanical vibration as a detectable output.
- the electronic device 10 comprises a media capturing element, such as a camera, video and/or audio module, in communication with the processor 20.
- the media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission.
- the camera module 36 may comprise a digital camera which may form a digital image file from a captured image.
- the camera module 36 may comprise hardware, such as a lens or other optical component(s), and/or software necessary for creating a digital image file from a captured image.
- the camera module 36 may comprise only the hardware for viewing an image, while a memory device of the electronic device 10 stores instructions for execution by the processor 20 in the form of software for creating a digital image file from a captured image.
- the camera module 36 may further comprise a processing element such as a coprocessor that assists the processor 20 in processing image data and an encoder and/or decoder for compressing and/or decompressing image data.
- the encoder and/or decoder may encode and/or decode according to a standard format, for example, a Joint Photographic Experts Group (JPEG) standard format.
- JPEG Joint Photographic Experts Group
- the electronic device 10 may comprise one or more user identity modules (UIM) 38.
- the UIM may comprise information stored in memory of electronic device 10, a part of electronic device 10, a device coupled with electronic device 10, and/or the like.
- the UIM 38 may comprise a memory device having a built-in processor.
- the UIM 38 may comprise, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), and/or the like.
- SIM subscriber identity module
- UICC universal integrated circuit card
- USIM universal subscriber identity module
- R-UIM removable user identity module
- the UIM 38 may store information elements related to a subscriber, an operator, a user account, and/or the like.
- UIM 38 may store subscriber information, message information, contact information, security information, program information, and/or the like. Usage of one or more UIM 38 may be enabled and/or disabled.
- electronic device 10 may enable usage
- electronic device 10 comprises a single UIM 38.
- at least part of subscriber information may be stored on the UIM 38.
- electronic device 10 comprises a plurality of UIM 38.
- electronic device 10 may comprise two UIM 38 blocks.
- electronic device 10 may utilize part of subscriber information of a first UIM 38 under some circumstances and part of subscriber information of a second UIM 38 under other circumstances.
- electronic device 10 may enable usage of the first UIM 38 and disable usage of the second UIM 38.
- electronic device 10 may disable usage of the first UIM 38 and enable usage of the second UIM 38.
- electronic device 10 may utilize subscriber information from the first UIM 38 and the second UIM 38.
- Electronic device 10 may comprise a memory device including, in one embodiment, volatile memory 40, such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data.
- volatile memory 40 such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data
- the electronic device 10 may also comprise other memory, for example, non-volatile memory 42, which may be embedded and/or may be removable.
- the non-volatile memory 42 may comprise an EEPROM, flash memory or the like.
- the memories may store any of a number of pieces of information, and data. The information and data may be used by the electronic device 10 to implement one or more functions of the electronic device 10, such as the functions described in conjunction with FIGURES 1 -9.
- the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, which may uniquely identify the electronic device 10.
- IMEI international mobile equipment identification
- Electronic device 10 may comprise one or more sensor 37.
- Sensor 37 may comprise a light sensor, a proximity sensor, a motion sensor, a location sensor, and/or the like.
- sensor 37 may comprise one or more light sensors at various locations on the device.
- sensor 37 may provide sensor information indicating an amount of light perceived by one or more light sensors.
- Such light sensors may comprise a photovoltaic element, a photoresistive element, a charge coupled device (CCD), and/or the like.
- sensor 37 may comprise one or more proximity sensors at various locations on the device.
- sensor 37 may provide sensor information indicating proximity of an object, a user, a part of a user, and/or the like, to the one or more proximity sensors.
- Such proximity sensors may comprise capacitive measurement, sonar measurement, radar measurement, and/or the like.
- FIGURE 9 illustrates an example of an electronic device that may utilize embodiments of the invention including those described and depicted, for example, in FIGURES 1-9
- electronic device 10 of FIGURE 9 is merely an example of a device that may utilize embodiments of the invention.
- Embodiments of the invention may be implemented in software, hardware, application logic or a combination of software, hardware, and application logic. The software, application logic and/or hardware may reside on the apparatus, a separate device, or a plurality of separate devices.
- part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plural ity of separate devices.
- the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
- a "computer-readable medium” may be any tangible media or means that can contain, or store the instmctions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIGURE 9,
- a computer-readable medium may comprise a computer-readable storage medium that may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
- blocks 403 and 404 of FIGURE 4 may be performed after block 405.
- block 305 of FIGURE 3 may be performed before block 303.
- one or more of the above-described functions may be optional or may be combined.
- block 403 of FIGURE 4 may be optional or combined with block 404.
Abstract
An apparatus, comprising a processor, memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode, causing display of a text editor region and an input region that indicates at least part of the virtual screen, receiving indication of a first input associated with the first region, determining a first input operation based, at least in part, on the first input and the first input mode, receiving indication of a second input associated with the second region, determining a second input operation based, at least in part, on the second input and the second input mode, and causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation is disclosed.
Description
METHOD AND APPARATUS FOR RECEIVING INPUT
TECHNICAL FIELD
[0001] The present application relates generally to receiving input.
BACKGROUND
[0002] There has been a recent surge in the use of electronic devices that receive input based on multiple input modes.
SUMMARY
[0003] Various aspects of examples of the invention are set out in the claims.
[0004] An apparatus, comprising a processor, memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode, causing display of a text editor region and an input region that indicates at least part of the virtual screen, receiving indication of a first input associated with the first region, determining a first input operation based, at least in part, on the first input and the first input mode, receiving indication of a second input associated with the second region, determining a second input operation based, at least in part, on the second input and the second input mode, and causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation is disclosed.
[0005] A method, comprising determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode, causing display of a text editor region and an input region that indicates at least part of the virtual screen, receiving indication of a first input associated with the first region, determining by a processor a first input operation based, at least in part, on the first input and the first input mode, receiving indication of a second input associated with the second region, determining a second input operation based, at least in part, on the second input and the second input mode, and causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation is disclosed.
[0006] A computer-readable medium encoded with instructions that, when executed by a computer, perform: determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode, causing display of a text editor region and an input region that indicates at least part of the virtual screen, receiving indication of a first input associated with the first region, determining a first input operation based, at least in part, on the first input and the first input mode, receiving indication of a second input associated with the second region, determining a second input operation based, at least in part, on the second input and the second input mode, and causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation is disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] For a more complete understanding of embodiments of the invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
[0008] FIGURES 1A - 1C are diagrams illustrating input mode regions of a virtual screen according to an example embodiment;
[0009] FIGURES 2A - 2C are diagrams illustrating a display arrangement for receiving input according to an example embodiment;
[0010] FIGURE 3 is a flow diagram showing a set of operations for generating a text image according to an example embodiment;
[0011] FIGURE 4 is a flow diagram showing a set of operations for generating a text image according to an example embodiment;
[0012] FIGURES 5A - 5J are diagrams illustrating character recognition according to an example embodiment;
[0013] FIGURES 6A - 6D are diagrams illustrating a visual representation of a virtual keypad according to an example embodiment of the invention;
[0014] FIGURES 7A - 7E are diagrams illustrating input associated with a touch display according to an example embodiment;
[0015] FIGURES 8A - 8D are diagrams illustrating a virtual screen according to an example embodiment; and
[0016] FIGURE 9 is a block diagram showing an apparatus according to an example embodiment.
DETAILED DESCRIPTION OF THE DRAWINGS
[0017] An embodiment of the invention and its potential advantages are understood by referring to FIGURES 1 through 9 of the drawings.
[0018] In an example embodiment, a user may desire to provide input to an apparatus using various input modes. An input mode relates to the manner in which a user provides input to the apparatus. For example, an input mode may relate to voice input, writing input, touch input, keypad input, motion input, gesture input, optical input, and/or the like. The user may desire to change input mode. For example, the user may desire to provide voice input, then to provide writing input. In such an example, the user may desire to switch the input modes with minimal effort. In another example, the user may desire to utilize more than one input mode together. In such an example, the user may desire to utilize keypad input and handwriting input together. In such an example, the user may desire to perform a transition to the dual mode input with minimal effort.
[0019] FIGURES 1A - 1C are diagrams illustrating input mode regions of a virtual screen, for example virtual screen 802 of FIGURE 8A, according to an example embodiment. The examples of FIGURES 1A - 1C are merely examples of input mode regions of a virtual screen, and do not limit the scope of the claims. For example, the number of input mode regions may vary, the arrangement of the input mode regions may vary, the position of input mode regions may vary, and/or the like.
[0020] An input mode may relate to writing input, optical input, keypad input, voice input, gesture input, motion input, and/or the like. Writing input mode may comprise handwriting recognition, continuous handwriting recognition, character recognition, and/or the like, similar as described with reference to FIGURES 5A-5H. Optical input mode may comprise performing an operation on an image to, at least partially, generate at least one character, for example using optical character recognition (OCR). The image may be received from a camera, a file, for example stored in non-volatile memory 42 of FIGURE 9, a receiver, such as receiver 16 of FIGURE 9, and/or the like. Keypad input may relate to use of a keypad, a virtual keypad, similar as described with reference to FIGURES 6A-6D, and/or the like. Voice input mode may relate to speaker dependent voice recognition, speaker independent voice recognition, and/or the like. Gesture input may relate to interpretation of user hand gestures, facial expressions, and/or the like. Gesture input may be from a camera, a video, a proximity sensor, and/or the like. Motion input may relate to a user's movement of the apparatus. Motion input may be from a motion sensor, a visual sensor, and/or the like.
[0021] In an example embodiment, an input mode region relates to a region associated with an input mode. The user may be able to perform input associated with the input mode region when the input mode region is caused to be displayed. For example, the user may perform keypad input when at least part of a keypad input mode region is caused to be displayed. In another example, the user may perform writing recognition when a writing recognition input mode region is caused to be displayed. In such an example, the user may perform handwriting on a touch screen in relation to the position of the input mode region, in relation to a position outside of the input mode region, and/or the like. Without limiting the claims in any way, at least one of the technical advantages associated with such an embodiment is providing a user with a simple and intuitive way to recognize available input modes.
[0022] FIGURE 1A is a diagram illustrating input mode regions of virtual screen 100, for example virtual screen 802 of FIGURE 8A, according to an example embodiment. Virtual screen 100 comprises input mode regions 101, 102, and 103. In an example embodiment, each input mode region may relate to a different input mode.
[0023] FIGURE 1B is a diagram illustrating input mode regions of virtual screen 120, for example virtual screen 822 of FIGURE 8B, according to an example embodiment. Virtual screen 120 comprises input mode regions 121, 122, and 123. In an example embodiment, each input mode region may relate to a different input mode.
[0024] FIGURE 1C is a diagram illustrating input mode regions of virtual screen 140, for example virtual screen 842 of FIGURE 8C, according to an example embodiment. Virtual screen 140 comprises input mode regions 141, 142, and 143. In an example embodiment, each input mode region may relate to a different input mode.
[0025] In an example embodiment, arrangement and/or configuration of the input mode regions and/or virtual screen may be predetermined, automatically determined, user determined, and/or the like. For example, an apparatus may receive indication of an input indicating a change in arrangement of the virtual screen. For example, a user may provide input to change
the shape, size, position, orientation, and/or the like, of an input region and/or the virtual screen. Without limiting the scope of the claims in any way, technical effects of user configuration of the virtual screen can, in some examples, include allowing the user to determine transitioning behavior between input modes and allowing the user to determine how input modes may be combined.
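To make the region structure concrete, the following is a minimal, hypothetical sketch of a virtual screen comprising input mode regions. The names InputModeRegion and VirtualScreen, the coordinate scheme, and the example dimensions are assumptions made for illustration; the disclosure does not prescribe any particular data structure.

```python
# Illustrative sketch only: names, coordinates, and sizes are invented.
from dataclasses import dataclass


@dataclass
class InputModeRegion:
    mode: str     # e.g. "keypad", "writing", "optical", "voice"
    x: int        # position of the region within the virtual screen
    y: int
    width: int
    height: int


@dataclass
class VirtualScreen:
    width: int
    height: int
    regions: list  # list of InputModeRegion

    def region_at(self, x, y):
        """Return the input mode region containing point (x, y), if any."""
        for r in self.regions:
            if r.x <= x < r.x + r.width and r.y <= y < r.y + r.height:
                return r
        return None


# Example: three side-by-side regions, mirroring the arrangement of
# regions 101, 102, and 103 of FIGURE 1A.
screen = VirtualScreen(width=900, height=300, regions=[
    InputModeRegion("keypad", 0, 0, 300, 300),
    InputModeRegion("writing", 300, 0, 300, 300),
    InputModeRegion("optical", 600, 0, 300, 300),
])
print(screen.region_at(350, 10).mode)  # -> "writing"
```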
[0026] FIGURES 2A - 2C are diagrams illustrating a display arrangement for receiving input according to an example embodiment. The examples of FIGURES 2A - 2C are merely examples of display arrangements, and do not limit the scope of the claims. For example, the display arrangement may vary in size, shape, orientation, content, color, font, language, and/or the like.
[0027] The example display arrangements of FIGURES 2A-2C comprise an input region. In an example embodiment, an input region relates to a region, such as region 804 of FIGURE 8A, that relates to a part of the virtual screen, such as virtual screen 100 of FIGURE 1A, that comprises one or more input mode regions. The input region may indicate one or more input mode regions. For example, the input region may indicate a single input mode region, two input mode regions simultaneously, three input mode regions simultaneously, and/or the like. In an example embodiment, configuration of an input mode may be based, at least in part, on a determination of the size of the part of the input region associated with the input mode region. For example, an apparatus may determine that a full keypad would be too small if provided in its entirety in the part of the input region associated with keypad input mode. In such an example, the apparatus may configure the keypad provided in the keypad input mode region so that a subset of keys is provided in the input region.
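As one hedged illustration of such size-based configuration, the sketch below selects a reduced key set when the visible part of the keypad input mode region falls under a width threshold. The key sets and the threshold value are invented for this example; an embodiment may apply any configuration rule.

```python
# Hypothetical configuration rule: key sets and threshold are invented.
FULL_WUBI_KEYS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # stand-in
REDUCED_WUBI_KEYS = ["一", "丨", "丿", "丶", "乙"]  # the 5 basic WuBi strokes


def configure_keypad(region_width_px, min_full_keypad_width_px=480):
    """Provide a subset of keys when the visible part of the keypad
    input mode region is too small for a full keypad."""
    if region_width_px < min_full_keypad_width_px:
        return REDUCED_WUBI_KEYS
    return FULL_WUBI_KEYS


print(configure_keypad(240))        # small region -> 5 basic strokes
print(len(configure_keypad(800)))   # large region -> full key set (26)
```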
[0028] The example display arrangements of FIGURES 2A-2C comprise an input mode indication region. However, it should be understood that an embodiment may omit the input mode indication region. In an example embodiment, the input mode indication region indicates the input mode associated with at least part of the input region. In circumstances where the input region indicates multiple input mode regions, the input mode indication region may indicate the multiple input mode regions. In such circumstances, the input mode indication region may also indicate delineation between the input mode regions. Without limiting the scope of the invention in any way, some examples of an input mode indication region can have a
technical effect of providing a user with a simple and intuitive way of recognizing input mode regions.
[0029] The example display arrangements of FIGURES 2A-2C comprise a text editor region. The text editor region may indicate characters that have been input, characters that are comprised in a document being edited, and/or the like. The user may modify edit position, character selection, and/or the like.
[0030] In an example embodiment, the user may perform input to transition the input mode region indicated in the input region. The input may relate to a touch input, keypad input, motion input, and/or the like. Such transition may relate to a position change of the part of the virtual screen indicated by the input region. The input may be performed in relation to the input region, the input mode indication region, and/or the like. For example, the user may perform a touch input, such as touch input 740 of FIGURE 7C, in relation to the position of the input region for transition of the input mode region. In another example, the user may perform a gesture input, such as a scrolling related input, for example a tilting related input, a flicking related input, and/or the like, in association with either the input region and/or the input mode indication region for the transition. In still another example, the user may perform a keypad input, such as a directional button press, in association with the input mode indication region for the transition input. Without limiting the scope of the claims in any way, at least one technical advantage of such an embodiment can be to provide the user with a simple and intuitive way to transition input mode.
[0031] When the input region indicates more than one input mode region, the input modes associated with the indicated input mode regions may be used in conjunction with each other. For example, a keypad input mode may be used to enhance recognition results of writing input mode. In such an example, a key press may indicate part of a character, part of a word, and/or the like. In another example, voice input mode may be used to enhance optical input mode. In such an example, the results of optical recognition may be compared to results of speech recognition to improve accuracy of character determination. Without limiting the scope of the claims in any way, possible technical advantages of such an embodiment are improving the accuracy of user input and reducing the likelihood that the user will need to re-perform the input.
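The following sketch illustrates one way such conjunction might work, in the spirit of the WuBi example of FIGURE 2B: a stroke indicated by keypad input narrows the candidate characters produced by writing recognition. The candidate table and stroke labels are invented for illustration.

```python
# Illustrative sketch of combining input modes; data below is invented.

def combine_modes(writing_candidates, first_stroke):
    """Keep only recognition candidates whose first stroke matches the
    stroke indicated by keypad input, improving recognition accuracy."""
    narrowed = [c for c, stroke in writing_candidates if stroke == first_stroke]
    return narrowed or [c for c, _ in writing_candidates]  # fall back if empty


# Writing recognition produced three look-alike candidates; keypad input
# indicated that the character begins with a horizontal stroke.
candidates = [("土", "horizontal"), ("士", "horizontal"), ("上", "vertical")]
print(combine_modes(candidates, "horizontal"))  # -> ['土', '士']
```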
[0032] FIGURE 2A is a drawing illustrating a display arrangement 200 comprising a text editor region 201, an input region 202, and an input mode indication region 203. Text editor region 201 indicates 4 characters. The characters may relate to existing text, text associated with input, and/or the like. Input region 202 indicates a single input mode, which is a keypad input mode. Even though the keypad indicated in the input region is a WuBi keypad, other keypads may be utilized, such as a QWERTY keypad, numeric keypad, and/or the like. Input mode indication region 203 indicates keypad input.
[0033] FIGURE 2B is a drawing illustrating a display arrangement 220 comprising a text editor region 221, an input region 222, and an input mode indication region 223. Text editor region 221 indicates 2 characters. The characters may relate to existing text, text associated with input, and/or the like. Input region 222 indicates keypad input mode region 224 and writing input mode region 225. The virtual keypad indicated in input mode region 224 relates to the 5 basic WuBi strokes. The apparatus may determine to provide the reduced WuBi keypad based on the reduced size of the part of input region 222 associated with keypad input mode region 224. Even though the keypad indicated in keypad input region 224 is a reduced WuBi keypad, other keypads may be utilized, such as a QWERTY keypad, numeric keypad, and/or the like. Writing input mode region 225 indicates two written characters. The characters may have been written according to received input, such as touch input 740 of FIGURE 7C. The determination of characters based on the writing input may be similar as described with reference to FIGURES 5A-5H. Input mode indication region 223 indicates keypad input and writing input with a delineation corresponding to the delineation of input region 222 between keypad input mode region 224 and writing input mode region 225. In the example of FIGURE 2B, the apparatus may utilize the keypad input mode in conjunction with the writing input mode. For example, input associated with the keypad may be utilized to improve the recognition of one or more written characters. In such an example, the apparatus may reduce the number of possible character matches associated with a written character by identifying only characters that are based on a stroke indicated by keypad input.
[0034] FIGURE 2C is a drawing illustrating a display arrangement 240 comprising a text editor region 241, an input region 242, and an input mode indication region 243. Text editor region 241 indicates 4 words comprising characters. The characters may relate to existing text, text associated with input, and/or the like. Input region 242 indicates keypad input mode region 244 and optical input mode region 245. The virtual keypad indicated in input mode region 244 may relate to a QWERTY keypad, WuBi keypad, numeric keypad, and/or the like. The information indicated in optical input mode region 245 may relate to information from a camera, a scanner, a stored image, a received image, and/or the like. Input mode indication region 243 indicates keypad input and optical input with a delineation corresponding to the delineation of input region 242 between keypad input mode region 244 and optical input mode region 245. The apparatus may utilize the keypad in conjunction with the optical information to determine characters. For example, the apparatus may utilize keypad related input to improve recognition accuracy of the optical input mode. In such an example, the word "guaranteed" may be determined, with keypad related input, even where the quality of the optical information is poor.
[0035] FIGURE 3 is a flow diagram showing a set of operations 300 for receiving input according to an example embodiment. An apparatus, for example electronic device 10 of FIGURE 9 or a portion thereof, may utilize the set of operations 300. The apparatus may comprise means, including, for example, processor 20 of FIGURE 9, for performing the operations of FIGURE 3. In an example embodiment, an apparatus, for example device 10 of FIGURE 9, is transformed by having memory, for example memory 42 of FIGURE 9, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 9, cause the apparatus to perform set of operations 300.
[0036] At block 301, the apparatus determines a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode similar as described with reference to FIGURES 1A-1C.
[0037] At block 302, the apparatus causes display of a text editor region and an input region indicating at least part of the virtual screen similar as described with reference to FIGURES 2A-2C.
[0038] At block 303, the apparatus receives indication of a first input associated with the first region. The apparatus may receive indication of the first input by retrieving information from one or more memories, such as non-volatile memory 42 of FIGURE 9, receiving one or more indications of the first input from a part of the apparatus, such as a touch display, for example display 28 of FIGURE 9, receiving indication of the first input from a receiver, such as receiver 16 of FIGURE 9, receiving the first input from a keypad, such as keypad 30 of FIGURE 9, receiving the first input from a camera module, such as camera module 36 of FIGURE 9,
and/or the like. In an example embodiment, the apparatus may receive the indication of the first input from a different apparatus, such as a mouse, a keyboard, an external touch display, an external camera, and/or the like.
[0039] At block 304, the apparatus determines a first input operation based, at least in part, on the first input and the first input mode. The input operation may relate to recognizing input, determining at least one character based, at least in part, on the input, storing the input for later use, and/or the like. The determination of the input operation may be similar as described with reference to FIGURES 1A-1C and FIGURES 2A-2C.
[0040] At block 305, the apparatus receives indication of a second input associated with the second region. The receiving of the second input may be similar as described with reference to block 303.
[0041] At block 306, the apparatus determines a second input operation based, at least in part, on the second input and the second input mode. Determining the second operation may be similar as described with reference to block 304. Determination of the second input operation may depend upon the first input operation similar as described with reference to FIGURES 2B-2C. For example, the apparatus may determine the second input operation based at least in part on the first input operation and the second input, such as using input modes in conjunction with each other.
[0042] At block 307, the apparatus causes display of at least one character in the text editor region based, at least in part, on the first operation and the second operation. Display of the at least one character may be based upon the first operation and the second operation independently and/or in conjunction. The display of the at least one character may supplement, modify, replace, and/or the like, characters already present in the text editor region, if any.
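A minimal runnable skeleton of the set of operations 300 appears below. Every function body is a placeholder standing in for apparatus behavior; none of the names are taken from the disclosure itself.

```python
# Skeleton of blocks 301-307; all bodies are invented placeholders.

def determine_virtual_screen():                      # block 301
    return {"regions": [{"mode": "keypad"}, {"mode": "writing"}]}


def display_regions(screen):                         # block 302
    print("displaying text editor region and input region")


def receive_input(region):                           # blocks 303 and 305
    return {"keypad": "stroke key press",
            "writing": "handwritten glyph"}[region["mode"]]


def determine_operation(inp, mode, context=None):    # blocks 304 and 306
    # The second operation may depend on the first, e.g. keypad input
    # narrowing handwriting candidates (modes used in conjunction).
    return {"mode": mode, "input": inp, "context": context}


def display_characters(*operations):                 # block 307
    print("text editor shows character(s) derived from", operations)


screen = determine_virtual_screen()
display_regions(screen)
first_in = receive_input(screen["regions"][0])
first_op = determine_operation(first_in, "keypad")
second_in = receive_input(screen["regions"][1])
second_op = determine_operation(second_in, "writing", context=first_op)
display_characters(first_op, second_op)
```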
[0043] FIGURE 4 is a flow diagram showing a set of operations 400 for receiving input according to an example embodiment. An apparatus, for example electronic device 10 of FIGURE 9 or a portion thereof, may utilize the set of operations 400. The apparatus may comprise means, including, for example, processor 20 of FIGURE 9, for performing the operations of FIGURE 4. In an example embodiment, an apparatus, for example device 10 of FIGURE 9, is transformed by having memory, for example memory 42 of FIGURE 9, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 9, cause the apparatus to perform set of operations 400.
[0044] At block 401, the apparatus determines a virtual screen comprising a first region associated with a first input mode, a second region associated with a second input mode, and a third region associated with a third input mode similar as described with reference to block 301 of FIGURE 3.
[0045] At block 402, the apparatus causes display of a text editor region and an input region indicating at least part of the virtual screen similar as described with reference to block 302 of FIGURE 3.
[0046] At block 403, the apparatus receives indication of a transition input associated with at least part of the first region and at least part of the second region. The apparatus may receive indication of the transition input by retrieving information from one or more memories, such as non-volatile memory 42 of FIGURE 9, receiving one or more indications of the transition input from a part of the apparatus, such as a touch display, for example display 28 of FIGURE 9, a receiver, such as receiver 16 of FIGURE 9, a keypad, such as keypad 30 of FIGURE 9, and/or the like. In an example embodiment, the apparatus may receive the indication of the transition input from a different apparatus, such as a mouse, a keyboard, an external touch display, an external camera, and/or the like. The transition input may be similar as described with reference to FIGURES 2A-2C.
[0047] At block 404, the apparatus causes display of at least part of the virtual screen corresponding to the transition input similar as described with reference to FIGURES 2A-2C.
[0048] At block 405, the apparatus receives indication of a first input associated with the first region similar as described with reference to block 303 of FIGURE 3.
[0049] At block 406, the apparatus determines a first input operation based, at least in part, on the first input and the first input mode similar as described with reference to block 304 of FIGURE 3.
[0050] At block 407, the apparatus receives indication of a second input associated with the second region similar as described with reference to block 305 of FIGURE 3.
[0051] At block 408, the apparatus determines a second input operation based, at least in part, on the second input and the second input mode similar as described with reference to block 306 of FIGURE 3.
[0052] At block 409, the apparatus receives indication of a third input associated with the third region. The receiving of the third input may be similar as described with reference to block 407.
[0053] At block 410, the apparatus determines a third input operation based, at least in part, on the third input and the third input mode. Determining the third operation may be similar as described with reference to block 408. Determination of the third input operation may depend upon the first input operation and/or the second operation similar as described with reference to FIGURES 2B-2C. For example, the apparatus may determine the third input operation based at least in part on the first input operation and/or the second input operation, such as using input modes in conjunction with each other.
[0054] At block 411, the apparatus causes display of at least one character in the text editor region based, at least in part, on the first operation, the second operation, and the third operation. Display of the at least one character may be based upon the first operation, the second operation, and the third operation independently and/or in conjunction. The display of the at least one character may supplement, modify, replace, and/or the like, characters already present in the text editor region, if any.
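As a hedged sketch of blocks 403 and 404, the function below treats a transition input as a horizontal flick that shifts the displayed input region across the virtual screen. The pixel dimensions and the flick sign convention are assumptions made for illustration.

```python
# Hypothetical transition handling; dimensions and conventions invented.

def apply_transition(window_x, flick_dx, window_w, virtual_w):
    """Shift the displayed window by the flick distance, clamped so the
    window cannot pan beyond the virtual screen boundaries."""
    new_x = window_x - flick_dx   # flicking left reveals content to the right
    return max(0, min(new_x, virtual_w - window_w))


# A 300 px window over a 900 px virtual screen, flicked 250 px to the left,
# now indicates a different part of the virtual screen (a new mode region).
print(apply_transition(window_x=0, flick_dx=-250, window_w=300, virtual_w=900))
```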
[0055] FIGURES 5A - 5J are diagrams illustrating character recognition according to an example embodiment. The examples of FIGURES 5A - 5J are merely examples of character recognition, and do not limit the scope of the claims. For example, character images may vary with respect to language, characters, orientation, size, alignment, and/or the like. The characters may relate to Arabic characters, Latin characters, Indic characters, Japanese characters, and/or the like.
[0056] In an example embodiment, a character image relates to graphical information that represents at least one character. For example, a character image may relate to an image of a written word. A character image may relate to a part and/or the entirety of an image. The character image may relate to one or more written characters, copied characters, photographed characters, scanned characters, and/or the like.
[0057] In an example embodiment, at least one character may be determined based, at least in part on the character image. For example, an apparatus may perform handwriting recognition, continuous handwriting recognition, optical character recognition (OCR), and/or the like on a character image to determine one or more characters. The accuracy of determination of
the at least one character may vary across apparatuses and does not limit the claims set forth herein. For example, a first apparatus may have less accurate OCR than a second apparatus.
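One way to model this apparatus-dependent accuracy is a recognizer interface that returns ranked candidates, as sketched below. The protocol and the toy recognizer are assumptions; the disclosure does not specify a recognition API.

```python
# Illustrative interface sketch; the protocol and values are invented.
from typing import Protocol


class CharacterRecognizer(Protocol):
    def recognize(self, character_image: bytes) -> list[tuple[str, float]]:
        """Return (candidate, confidence) pairs, best first."""
        ...


class ToyRecognizer:
    def recognize(self, character_image: bytes) -> list[tuple[str, float]]:
        # A real implementation would run handwriting recognition or OCR.
        return [("Big", 0.92), ("Bjg", 0.05)]


best, confidence = ToyRecognizer().recognize(b"...")[0]
print(best, confidence)  # "Big" 0.92 -- accuracy varies across apparatuses
```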
[0058] FIGURE 5A is a diagram illustrating character recognition according to an example embodiment. In the example of FIGURE 5A, the character image represents three letters that form the word "Big."
[0059] FIGURE 5B is a diagram of recognized characters represented by the character image of FIGURE 5A. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9. In the example of FIGURE 5B, the characters are "Big."
[0060] FIGURE 5C is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 5C, the character image represents letters and punctuation that form "The dog is big."
[0061] FIGURE 5D is a diagram of characters represented by the character image of FIGURE 5C. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9. In the example of FIGURE 5D, the characters are "The dog is big."
[0062] FIGURE 5E is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 5E, the character image represents five script letters that form the word "hello."
[0063] FIGURE 5F is a diagram of characters represented by the character image of FIGURE 5E. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9. In the example of FIGURE 5F, the characters are "hello."
[0064] FIGURE 5G is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 5G, the character image represents three Chinese characters that form a word.
[0065] FIGURE 5H is a diagram of characters represented by the character image of FIGURE 5G. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 9. In the example of FIGURE 5H, the characters are the three Chinese characters of FIGURE 5G.
[0066] FIGURES 6A - 6D are diagrams illustrating a visual representation of a virtual keypad according to an example embodiment of the invention. In an example embodiment, a virtual keypad is a representation of one or more virtual keys. A virtual key may relate to a character, such as a number, letter, symbol, and/or the like, a part of a character, a control, such as shift, alt, command, function, and/or the like, or something similar. The position of touch display input in relation to the position of one or more virtual keys may influence input information associated with the touch display input. For example, a tap input, such as tap input 700 of FIGURE 7A, at a position associated with a virtual key for a "Z" character may provide input information associated with the "Z" character. The number, shape, position, and/or the like, of virtual keys within a virtual keypad may vary. For example, one virtual keypad may have 17 round adjacent virtual keys, while a different virtual keypad may have 50 rectangular non-adjacent virtual keys. The size of virtual keys may vary. For example, one virtual key of a virtual keypad may be larger than a different virtual key of the same virtual keypad.
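A minimal hit-test sketch appears below: it determines which virtual key, if any, contains a touch position. The key geometry is invented; as FIGURES 6A - 6D show, the shape, size, and spacing of virtual keys may all vary.

```python
# Illustrative hit test; key geometry below is invented.

def key_at(keys, x, y):
    """Return the virtual key whose rectangle contains (x, y), or None."""
    for key in keys:
        if (key["x"] <= x < key["x"] + key["w"]
                and key["y"] <= y < key["y"] + key["h"]):
            return key
    return None


keys = [
    {"label": "Z", "x": 0, "y": 0, "w": 40, "h": 40},
    {"label": "4", "x": 40, "y": 0, "w": 40, "h": 40},      # cf. virtual key 602
    {"label": "Enter", "x": 80, "y": 0, "w": 80, "h": 40},  # cf. virtual key 606
]
hit = key_at(keys, 12, 20)
print(hit["label"] if hit else "no key")  # a tap here provides "Z" input
```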
[0067] FIGURE 6A illustrates a virtual keypad 600 according to an example embodiment of the invention. In the example embodiment, virtual keypad 600 comprises 48 adjacent square virtual keys. In an example embodiment, virtual keys 602, 604, and 606 relate to characters and/or controls. For example, virtual key 602 may relate to a "4" character, virtual key 604 may relate to an "I" character, and virtual key 606 may relate to an "Enter" control.
[0068] FIGURE 6B illustrates a virtual keypad 620 according to an example embodiment of the invention. In the example embodiment, virtual keypad 620 comprises 12 adjacent square virtual keys. In an example embodiment, virtual keys 622, 624, and 626 relate to characters and/or controls. For example, virtual key 622 may relate to a "4" character, virtual key 624 may relate to an "8" character, and virtual key 626 may relate to a "#" character.
[0069] FIGURE 6C illustrates a virtual keypad 640 according to an example embodiment of the invention. In the example embodiment, virtual keypad 640 comprises 30 adjacent circular virtual keys. In an example embodiment, virtual keys 642, 644, and 646 relate to characters and/or controls. For example, virtual key 642 may relate to a "D" character, virtual key 644 may relate to a "G" character, and virtual key 646 may relate to a "?" character.
[0070] FIGURE 6D illustrates a virtual keypad 660 according to an example embodiment of the invention. In the example embodiment, virtual keypad 660 comprises 8 non-adjacent unevenly distributed octagonal virtual keys. In an example embodiment, virtual keys 662, 664, and 666 relate to characters and/or controls. For example, virtual key 662 may relate to a "+" character, virtual key 664 may relate to a "$" character, and virtual key 666 may relate to a "*" character.
[0071] FIGURES 7A - 7E are diagrams illustrating input associated with a touch display, for example from display 28 of FIGURE 9, according to an example embodiment. In FIGURES 7A - 7E, a circle represents an input related to contact with a touch display, two crossed lines represent an input related to releasing a contact from a touch display, and a line represents input related to movement on a touch display. Although the examples of FIGURES 7A - 7E indicate continuous contact with a touch display, there may be a part of the input that fails to make direct contact with the touch display. Under such circumstances, the apparatus may, nonetheless, determine that the input is a continuous stroke input. For example, the apparatus may utilize proximity information, for example information relating to nearness of an input implement to the touch display, to determine part of a touch input.
[0072] In the example of FIGURE 7A, input 700 relates to receiving contact input 702 and receiving a release input 704. In this example, contact input 702 and release input 704 occur at the same position. In an example embodiment, an apparatus utilizes the time between receiving contact input 702 and release input 704. For example, the apparatus may interpret input 700 as a tap for a short time between contact input 702 and release input 704, as a press for a longer time between contact input 702 and release input 704, and/or the like. In such an example, a tap input may induce one operation, such as selecting an item, and a press input may induce another operation, such as performing an operation on an item. In another example, a tap and/or press may relate to a user selected text position.
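The sketch below illustrates distinguishing a tap from a press by the time between contact input and release input. The 0.5 second threshold is an invented example value, not a value taken from the disclosure.

```python
# Illustrative tap/press classification; the threshold is invented.

def classify(contact_time_s, release_time_s, press_threshold_s=0.5):
    """Interpret a contact/release pair at one position as tap or press."""
    held = release_time_s - contact_time_s
    return "press" if held >= press_threshold_s else "tap"


print(classify(10.00, 10.12))  # short hold -> "tap" (e.g. select an item)
print(classify(10.00, 10.80))  # long hold -> "press" (e.g. operate on an item)
```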
[0073] In the example of FIGURE 7B, input 720 relates to receiving contact input 722, a movement input 724, and a release input 726. Input 720 relates to a continuous stroke input. In this example, contact input 722 and release input 726 occur at different positions. Input 720 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like. In an example embodiment, an apparatus interprets input 720 based at least in part on the speed of movement 724. For example, if input 720 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 720 based at least in part on the distance between contact input 722 and release input 726. For example, if input 720 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 722 and release input 726. An apparatus may interpret the input before receiving release input 726. For example, the apparatus may
evaluate a change in the input, such as speed, position, and/or the like. In such an example, the apparatus may perform one or more determinations based upon the change in the touch input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
[0074] In the example of FIGURE 7C, input 740 relates to receiving contact input 742, a movement input 744, and a release input 746 as shown. Input 740 relates to a continuous stroke input. In this example, contact input 742 and release input 746 occur at different positions. Input 740 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like. In an example embodiment, an apparatus interprets input 740 based at least in part on the speed of movement 744. For example, if input 740 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 740 based at least in part on the distance between contact input 742 and release input 746. For example, if input 740 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 742 and release input 746. In still another example embodiment, the apparatus interprets the position of the release input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
[0075] In the example of FIGURE 7D, input 760 relates to receiving contact input 762, and a movement input 764, where contact is released during movement. Input 760 relates to a continuous stroke input. Input 760 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like. In an example embodiment, an apparatus interprets input 760 based at least in part on the speed of movement 764. For example, if input 760 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 760 based at least in part on the distance associated with the movement input 764. For example, if input 760 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance of the movement input 764 from the contact input 762 to the release of contact during movement.
[0076] In an example embodiment, an apparatus may receive multiple touch inputs at coinciding times. For example, there may be a tap input at a position and a different tap input at
a different location during the same time. In another example there may be a tap input at a position and a drag input at a different position. An apparatus may interpret the multiple touch inputs separately, together, and/or a combination thereof. For example, an apparatus may interpret the multiple touch inputs in relation to each other, such as the distance between them, the speed of movement with respect to each other, and/or the like.
[0077] In the example of FIGURE 7E, input 780 relates to receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792. Input 780 relates to two continuous stroke inputs. In this example, contact inputs 782 and 788, and release inputs 786 and 792, occur at different positions. Input 780 may be characterized as a multiple touch input. Input 780 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, to indicating one or more user selected text positions, and/or the like. In an example embodiment, an apparatus interprets input 780 based at least in part on the speed of movements 784 and 790. For example, if input 780 relates to zooming a virtual screen, the zooming motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 780 based at least in part on the distance between contact inputs 782 and 788 and release inputs 786 and 792. For example, if input 780 relates to a scaling operation, such as resizing a box, the scaling may relate to the collective distance between contact inputs 782 and 788 and release inputs 786 and 792.
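The following sketch illustrates a two-stroke scaling interpretation of this kind: the scale factor follows the change in distance between the two contact points. Treating scale as a simple separation ratio is an assumption made for illustration.

```python
# Illustrative pinch/spread scaling; the ratio rule is invented.
import math


def scale_factor(c1, c2, r1, r2):
    """Ratio of the release-point separation to the contact-point
    separation of two concurrent strokes (a pinch or spread)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(r1, r2) / dist(c1, c2)


# Contacts 100 px apart, releases 200 px apart -> content scales 2x.
print(scale_factor((0, 0), (100, 0), (-50, 0), (150, 0)))  # -> 2.0
```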
[0078] In an example embodiment, the timing associated with the apparatus receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792 varies. For example, the apparatus may receive contact input 782 before contact input 788, after contact input 788, concurrent to contact input 788, and/or the like. The apparatus may or may not utilize the related timing associated with the receiving of the inputs. For example, the apparatus may utilize an input received first by associating the input with a preferential status, such as a primary selection point, a starting position, and/or the like. In another example, the apparatus may utilize non-concurrent inputs as if the apparatus received the inputs concurrently. In such an example, the apparatus may utilize a release input received first the same way that the apparatus would utilize the same input if the apparatus had received the input second.
[0079] Even though an aspect related to two touch inputs may differ, such as the direction of movement, the speed of movement, the position of contact input, the position of
release input, and/or the like, the touch inputs may be similar. For example, a first touch input comprising a contact input, a movement input, and a release input, may be similar to a second touch input comprising a contact input, a movement input, and a release input, even though they may differ in the position of the contact input, and the position of the release input.
[0080] FIGURES 8A - 8D are diagrams illustrating a virtual screen according to an example embodiment. The examples of FIGURES 8A-8D are merely examples of possible virtual screens and regions caused to be displayed, and do not limit the scope of the claims. For example, a virtual screen and/or a region caused to be displayed may vary by size, shape, orientation, and/or the like.
[0081] FIGURE 8A is a diagram illustrating a virtual screen wider than the part of the virtual screen caused to be displayed, for example on display 28 of FIGURE 9. In the example of FIGURE 8A, region 804 relates to a part of virtual screen 802 that is caused to be displayed. The virtual screen 802 may represent an image, text, a group of items, a list, a work area, map information, and/or the like. For example, if an image is wider than what is determined to be caused to display, virtual screen 802 may be used for the image. In such an example, region 804 may be panned left or right to change the part of the virtual screen 802 that is caused to be displayed. In an example embodiment, changing the part of the virtual screen 802 that is caused to be displayed may be performed when input is received. In an example embodiment, region 804 may be prevented from panning beyond one or more boundaries of virtual screen 802.
[0082] FIGURE 8B is a diagram illustrating a virtual screen taller than the part of the virtual screen caused to be displayed, for example on display 28 of FIGURE 9. In the example of FIGURE 8B, region 824 relates to a part of virtual screen 822 that is caused to be displayed. The virtual screen 822 may represent an image, text, a group of items, a list, a work area, map information, and/or the like. For example, if a group of items, such as a group of icons, is taller than what is determined to be caused to display, virtual screen 822 may be used for the group of icons. In such an example, region 824 may be panned up or down to change the part of the virtual screen 822 that is caused to be displayed. In an example embodiment, changing the part of the virtual screen 822 that is caused to be displayed may be performed when input is received. In an example embodiment, region 824 may be prevented from panning beyond one or more boundaries of virtual screen 822.
[0083] FIGURE 8C is a diagram illustrating a virtual screen wider and taller than the part of the virtual screen caused to be displayed, for example on display 28 of FIGURE 9. In the example of FIGURE 8C, region 844 relates to a part of virtual screen 842 that is caused to be displayed. The virtual screen 842 may represent an image, text, a group of items, a list, a work area, map information, and/or the like. For example, if map information is wider and taller than what is determined to be caused to display, virtual screen 842 may be used for the map information. In such an example, region 844 may be panned left, right, up, and/or down to change the part of the virtual screen 842 that is caused to be displayed. In an example embodiment, changing the part of the virtual screen 842 that is caused to be displayed may be performed when input is received. In an example embodiment, region 844 may be prevented from panning beyond one or more boundaries of virtual screen 842.
[0084] FIGURE 8D is a diagram illustrating a virtual screen that is the same size as the part of the virtual screen caused to be displayed. In the example of FIGURE 8D, region 864 relates to a part of virtual screen 862 that is caused to be displayed. The virtual screen 862 may represent an image, text, a group of items, a list, a work area, map information, and/or the like. For example, if it is determined to cause display of an entire work area, virtual screen 862 may be used for the work area.
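The sketch below illustrates panning a displayed region within a virtual screen while preventing it from moving beyond any boundary, in the spirit of regions 804, 824, and 844. The coordinates are invented; only the clamping behavior is illustrated.

```python
# Illustrative clamped panning; all coordinates are invented.

def pan(region_x, region_y, dx, dy, region_w, region_h, screen_w, screen_h):
    """Move the displayed region by (dx, dy), clamped to the virtual
    screen so no boundary is crossed."""
    x = max(0, min(region_x + dx, screen_w - region_w))
    y = max(0, min(region_y + dy, screen_h - region_h))
    return x, y


# A region 844-style case: the virtual screen is both wider and taller.
print(pan(0, 0, dx=500, dy=-40, region_w=320, region_h=240,
          screen_w=640, screen_h=480))  # -> (320, 0): clamped at the edges
```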
[0085] FIGURE 9 is a block diagram showing an apparatus, such as an electronic device 10, according to an example embodiment. It should be understood, however, that an electronic device as illustrated and hereinafter described is merely illustrative of an electronic device that could benefit from embodiments of the invention and, therefore, should not be taken to limit the scope of the invention. While one embodiment of the electronic device 10 is illustrated and will be hereinafter described for purposes of example, other types of electronic devices, such as, but not limited to, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, media players, cameras, video recorders, global positioning system (GPS) devices and other types of electronic systems, may readily employ embodiments of the invention. Moreover, the apparatus of an example embodiment need not be the entire electronic device, but may be a component or group of components of the electronic device in other example embodiments.
[0086] Furthermore, devices may readily employ embodiments of the invention regardless of their intent to provide mobility. In this regard, even though embodiments of the
invention are described in conjunction with mobile communications applications, it should be understood that embodiments of the invention may be utilized in conjunction with a variety of other applications, both in the mobile communications industries and outside of the mobile communications industries.
[0087] The electronic device 10 may comprise an antenna (or multiple antennae), a wired connector, and/or the like in operable communication with a transmitter 14 and a receiver 16. The electronic device 10 may further comprise a processor 20 or other processing circuitry that provides signals to and receives signals from the transmitter 14 and receiver 16, respectively. The signals may comprise signaling information in accordance with a communications interface standard, user speech, received data, user generated data, and/or the like. The electronic device 10 may operate with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the electronic device 10 may operate in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like. For example, the electronic device 10 may operate in accordance with wireline protocols, such as Ethernet, digital subscriber line (DSL), asynchronous transfer mode (ATM), second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), Global System for Mobile communications (GSM), and IS-95 (code division multiple access (CDMA)), with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA), or with fourth-generation (4G) wireless communication protocols, wireless networking protocols, such as 802.11, short-range wireless protocols, such as Bluetooth, and/or the like.
[0088] As used in this application, the term 'circuitry' refers to all of the following: hardware-only implementations (such as implementations in only analog and/or digital circuitry) and to combinations of circuits and software and/or firmware such as to a combination of processor(s) or portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and to circuits, such as a microprocessor(s) or portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the
term "circuitry" would also cover an implementation of merely a processor, multiple processors, or portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example, a baseband integrated circuit or applications processor integrated circuit for a mobi le phone or a similar integrated circuit in a cellular network device or other network device.
[0089] Processor 20 may comprise means, such as circuitry, for implementing audio, video, communication, navigation, logic functions, and/or the like, as well as for implementing embodiments of the invention including, for example, one or more of the functions described in conjunction with FIGURES 1-9. For example, processor 20 may comprise means, such as a digital signal processor device, a microprocessor device, various analog to digital converters, digital to analog converters, processing circuitry and other support circuits, for performing various functions including, for example, one or more of the functions described in conjunction with FIGURES 1 -9. The apparatus may perform control and signal processing functions of the electronic device 10 among these devices according to their respective capabilities. The processor 20 thus may comprise the functionality to encode and interleave message and data prior to modulation and transmission. The processor 20 may additionally comprise an internal voice coder, and may comprise an internal data modem. Further, the processor 20 may comprise functionality to operate one or more software programs, which may be stored in memory and which may, among other things, cause the processor 20 to implement at least one embodiment including, for example, one or more of the functions described in conjunction with FIGURES 1 - 9. For example, the processor 20 may operate a connectivity program, such as a conventional internet browser. The connectivity program may allow the electronic device 10 to transmit and receive internet content, such as location-based content and/or other web page content, according to a Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP), and/or the like, for example.
[0090] The electronic device 10 may comprise a user interface for providing output and/or receiving input. The electronic device 10 may comprise an output device such as a ringer, a conventional earphone and/or speaker 24, a microphone 26, a display 28, and/or a user input interface, which are coupled to the processor 20. The user input interface, which allows the
electronic device 10 to receive data, may comprise means, such as one or more devices that may allow the electronic device 10 to receive data, such as a keypad 30, a touch display, for example if display 28 comprises touch capability, and/or the like. In an embodiment comprising a touch display, the touch display may be configured to receive input from a single point of contact, multiple points of contact, and/or the like. In such an embodiment, the touch display and/or the processor may determine input based on position, motion, speed, contact area, and/or the like.
[0091] The electronic device 10 may include any of a variety of touch displays including those that are configured to enable touch recognition by any of resistive, capacitive, infrared, strain gauge, surface wave, optical imaging, dispersive signal technology, acoustic pulse recognition or other techniques, and to then provide signals indicative of the location and other parameters associated with the touch. Additionally, the touch display may be configured to receive an indication of an input in the form of a touch event which may be defined as an actual physical contact between a selection object (e.g., a finger, stylus, pen, pencil, or other pointing device) and the touch display. Alternatively, a touch event may be defined as bringing the selection object in proximity to the touch display, hovering over a displayed object or approaching an object within a predefined distance, even though physical contact is not made with the touch display. As such, a touch input may comprise any input that is detected by a touch display including touch events that involve actual physical contact and touch events that do not involve physical contact but that are otherwise detected by the touch display, such as a result of the proximity of the selection object to the touch display. Display 28 may display two-dimensional information, three-dimensional information, and/or the like.
[0092] In embodiments including the keypad 30, the keypad 30 may comprise numeric (for example, 0-9) keys, symbol keys (for example, #, *), alphabetic keys, and/or the like for operating the electronic device 10. For example, the keypad 30 may comprise a conventional QWERTY keypad arrangement. The keypad 30 may also comprise various soft keys with associated functions. In addition, or alternatively, the electronic device 10 may comprise an interface device such as a joystick or other user input interface. The electronic device 10 further comprises a battery 34, such as a vibrating battery pack, for powering various circuits that are required to operate the electronic device 10, as well as optionally providing mechanical vibration as a detectable output.
[0093] In an example embodiment, the electronic device 10 comprises a media capturing element, such as a camera, video and/or audio module, in communication with the processor 20. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. For example, in an example embodiment in which the media capturing element is a camera module 36, the camera module 36 may comprise a digital camera which may form a digital image file from a captured image. As such, the camera module 36 may comprise hardware, such as a lens or other optical component(s), and/or software necessary for creating a digital image file from a captured image. Alternatively, the camera module 36 may comprise only the hardware for viewing an image, while a memory device of the electronic device 10 stores instructions for execution by the processor 20 in the form of software for creating a digital image file from a captured image. In an example embodiment, the camera module 36 may further comprise a processing element such as a coprocessor that assists the processor 20 in processing image data, and an encoder and/or decoder for compressing and/or decompressing image data. The encoder and/or decoder may encode and/or decode according to a standard format, for example, a Joint Photographic Experts Group (JPEG) standard format.
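As a rough sketch of the JPEG encoding step described above, the following forms a JPEG image file from raw captured pixels using the third-party Pillow library; the library choice, quality setting, and all names are assumptions for illustration, not the camera module's actual software:

```python
# Sketch only: encoding a captured image into a JPEG digital image file.
from io import BytesIO
from PIL import Image

def encode_jpeg(raw_rgb: bytes, width: int, height: int) -> bytes:
    """Form a JPEG-format digital image file from raw captured pixels."""
    image = Image.frombytes("RGB", (width, height), raw_rgb)
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=85)  # JPEG standard format
    return buffer.getvalue()

# A 2x2 all-gray test frame standing in for a captured image.
jpeg_bytes = encode_jpeg(b"\x80" * (2 * 2 * 3), 2, 2)
```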
[0094] The electronic device 10 may comprise one or more user identity modules (UIM) 38. The UIM may comprise information stored in memory of electronic device 10, a part of electronic device 10, a device coupled with electronic device 10, and/or the like. The UIM 38 may comprise a memory device having a built-in processor. The UIM 38 may comprise, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), and/or the like. The UIM 38 may store information elements related to a subscriber, an operator, a user account, and/or the like. For example, UIM 38 may store subscriber information, message information, contact information, security information, program information, and/or the like. Usage of one or more UIM 38 may be enabled and/or disabled. For example, electronic device 10 may enable usage of a first UIM and disable usage of a second UIM.
[0095] In an example embodiment, electronic device 10 comprises a single UIM 38. In such an embodiment, at least part of subscriber information may be stored on the UIM 38.
[0096] In another example embodiment, electronic device 10 comprises a plurality of UIM 38. For example, electronic device 10 may comprise two UIM 38 blocks. In such an example, electronic device 10 may utilize part of subscriber information of a first UIM 38 under some circumstances and part of subscriber information of a second UIM 38 under other circumstances. For example, electronic device 10 may enable usage of the first UIM 38 and disable usage of the second UIM 38. In another example, electronic device 10 may disable usage of the first UIM 38 and enable usage of the second UIM 38. In still another example, electronic device 10 may utilize subscriber information from the first UIM 38 and the second UIM 38.
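A minimal sketch of the dual-UIM enable/disable behaviour described in paragraph [0096]; the UIM class and selection helper are hypothetical illustrations, not part of the patent:

```python
# Sketch: enabling one UIM while disabling the other, per paragraph [0096].
class UIM:
    def __init__(self, subscriber_id: str):
        self.subscriber_id = subscriber_id
        self.enabled = False

def select_uim(first: UIM, second: UIM, use_first: bool) -> UIM:
    """Enable one UIM, disable the other, and return the active one."""
    first.enabled = use_first
    second.enabled = not use_first
    return first if use_first else second

uim_a, uim_b = UIM("subscriber-a"), UIM("subscriber-b")
active = select_uim(uim_a, uim_b, use_first=True)  # first enabled, second disabled
print(active.subscriber_id)  # "subscriber-a"
```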
[0097] Electronic device 10 may comprise a memory device including, in one embodiment, volatile memory 40, such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data. The electronic device 10 may also comprise other memory, for example, non-volatile memory 42, which may be embedded and/or may be removable. The non-volatile memory 42 may comprise an EEPROM, flash memory, or the like. The memories may store any of a number of pieces of information and data. The information and data may be used by the electronic device 10 to implement one or more functions of the electronic device 10, such as the functions described in conjunction with FIGURES 1-9. For example, the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, which may uniquely identify the electronic device 10.
[0098] Electronic device 10 may comprise one or more sensors 37. Sensor 37 may comprise a light sensor, a proximity sensor, a motion sensor, a location sensor, and/or the like. For example, sensor 37 may comprise one or more light sensors at various locations on the device. In such an example, sensor 37 may provide sensor information indicating an amount of light perceived by one or more light sensors. Such light sensors may comprise a photovoltaic element, a photoresistive element, a charge coupled device (CCD), and/or the like. In another example, sensor 37 may comprise one or more proximity sensors at various locations on the device. In such an example, sensor 37 may provide sensor information indicating proximity of an object, a user, a part of a user, and/or the like, to the one or more proximity sensors. Such proximity sensors may rely on capacitive measurement, sonar measurement, radar measurement, and/or the like.
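For illustration, one plausible way sensor 37 might combine several light-sensor readings into a single perceived light level; the averaging rule and the example lux values are assumptions, not taken from the patent:

```python
# Sketch: aggregating per-sensor light readings into one perceived level.
from typing import List

def perceived_light_level(readings_lux: List[float]) -> float:
    """Combine per-sensor readings into a single perceived light level."""
    if not readings_lux:
        raise ValueError("no light sensor readings available")
    return sum(readings_lux) / len(readings_lux)

level = perceived_light_level([120.0, 95.5, 110.2])  # e.g. front, back, side sensors
```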
[0099] Although FIGURE 9 illustrates an example of an electronic device that may utilize embodiments of the invention, including those described and depicted, for example, in FIGURES 1-9, electronic device 10 of FIGURE 9 is merely one example of such a device.
[00100] Embodiments of the invention may be implemented in software, hardware, application logic or a combination of software, hardware, and application logic. The software, application logic and/or hardware may reside on the apparatus, a separate device, or a plurality of separate devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part may reside on a separate device, and part may reside on a plurality of separate devices. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIGURE 9. A computer-readable medium may comprise a computer-readable storage medium that may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
[00101] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. For example, blocks 403 and 404 of FIGURE 4 may be performed after block 405. In another example, block 305 of FIGURE 3 may be performed before block 303. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. For example, block 403 of FIGURE 4 may be optional or combined with block 404.
[00102] Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
[00103] It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.
Claims
1. An apparatus, comprising:
a processor;
memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following:
determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode;
causing display of a text editor region and an input region that indicates at least part of the virtual screen;
receiving indication of a first input associated with the first region;
determining a first input operation based, at least in part, on the first input and the first input mode;
receiving indication of a second input associated with the second region;
determining a second input operation based, at least in part, on the second input and the second input mode; and
causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation.
2. The apparatus of claim 1, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform receiving indication of a transition input associated with at least part of the first region and at least part of the second region and causing display of at least part of the virtual screen corresponding to the transition input.
3. The apparatus of claim 1, wherein at least one of the first input mode and the second input mode relates to optical character recognition.
4. The apparatus of claim 3, wherein the first input mode relates to optical character recognition and the second input mode relates to writing recognition.
5. The apparatus of claim 1, wherein at least one of the first input mode and the second input mode relates to writing recognition.
6. The apparatus of claim 1, wherein at least one of the first input mode and the second input mode relates to a virtual keypad.
7. The apparatus of claim 6, wherein configuration of the virtual keypad is based, at least in part, on a determination of size of the part of the input region associated with the virtual keypad.
8. The apparatus of claim 6, wherein the first input mode relates to a virtual keypad and the second input mode relates to writing recognition.
9. The apparatus of claim 1, wherein the input region simultaneously indicates the first region and the second region.
10. The apparatus of claim 9, wherein the at least one character relates to the first operation used in conjunction with the second operation.
11. The apparatus of claim 1, wherein the virtual screen further comprises a third region associated with a third input mode.
12. The apparatus of claim 11, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform receiving indication of a third input associated with the third region, and determining a third input operation based, at least in part, on the third input and the third input mode.
13. The apparatus of claim 11, wherein the input region simultaneously indicates at least two of the first region, the second region, and the third region.
14. The apparatus of claim 13, wherein the input region simultaneously indicates the first region, the second region, and the third region.
15. The apparatus of claim 1, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform receiving indication of an input indicating a change in arrangement of the virtual screen.
16. The apparatus of claim 1, wherein at least one of the first region and the second region relates to a voice input mode.
17. The apparatus of claim 1, wherein the apparatus further comprises a touch display.
18. The apparatus of claim 1, wherein the apparatus is a mobile terminal.
19. A method, comprising:
determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode;
causing display of a text editor region and an input region that indicates at least part of the virtual screen;
receiving indication of a first input associated with the first region;
determining by a processor a first input operation based, at least in part, on the first input and the first input mode;
receiving indication of a second input associated with the second region;
determining a second input operation based, at least in part, on the second input and the second input mode; and
causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation.
20. A computer-readable medium encoded with instructions that, when executed by a computer, perform:
determining a virtual screen comprising a first region associated with a first input mode and a second region associated with a second input mode;
causing display of a text editor region and an input region that indicates at least part of the virtual screen;
receiving indication of a first input associated with the first region;
determining a first input operation based, at least in part, on the first input and the first input mode;
receiving indication of a second input associated with the second region;
determining a second input operation based, at least in part, on the second input and the second input mode; and
causing display of at least one character in the text editor region based, at least in part, on the first operation and the second operation.
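For readers who prefer code to claim language, the following is a minimal, hypothetical sketch of the flow recited in claims 1, 19, and 20: a virtual screen with two regions, each associated with an input mode, whose input operations produce characters in a text editor region. The region geometry, mode handlers, and toy stroke recognizer are all assumptions for illustration, not the patent's implementation:

```python
# Illustrative sketch of the claimed flow: a virtual screen comprising a first
# region associated with a first input mode (a virtual keypad) and a second
# region associated with a second input mode (writing recognition). An input
# is dispatched to the mode of the region it falls in, and the resulting
# input operations produce characters in the text editor region.
from typing import Callable, Dict, List, Tuple

Rect = Tuple[int, int, int, int]            # x, y, width, height
InputMode = Callable[[Dict[str, str]], str]

class VirtualScreen:
    def __init__(self) -> None:
        self.regions: List[Tuple[Rect, InputMode]] = []

    def add_region(self, bounds: Rect, mode: InputMode) -> None:
        self.regions.append((bounds, mode))

    def dispatch(self, x: int, y: int, raw: Dict[str, str]) -> str:
        """Determine an input operation from an input and its region's mode."""
        for (rx, ry, rw, rh), mode in self.regions:
            if rx <= x < rx + rw and ry <= y < ry + rh:
                return mode(raw)
        return ""                            # input fell outside every region

def keypad_mode(raw: Dict[str, str]) -> str:
    return raw["key"]                        # a key tap yields that key's character

def writing_mode(raw: Dict[str, str]) -> str:
    # Toy stand-in for a real handwriting recognizer.
    return {"circle": "o", "horizontal-line": "-"}.get(raw["stroke"], "?")

screen = VirtualScreen()
screen.add_region((0, 0, 160, 240), keypad_mode)     # first region / first mode
screen.add_region((160, 0, 160, 240), writing_mode)  # second region / second mode

text_editor: List[str] = []
text_editor.append(screen.dispatch(40, 100, {"key": "a"}))           # first input
text_editor.append(screen.dispatch(200, 100, {"stroke": "circle"}))  # second input
print("".join(text_editor))  # characters displayed in the text editor: "ao"
```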
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2009/076209 WO2011079437A1 (en) | 2009-12-29 | 2009-12-29 | Method and apparatus for receiving input |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011079437A1 (en) | 2011-07-07 |
Family
ID=44226117
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2009/076209 WO2011079437A1 (en) | Method and apparatus for receiving input | 2009-12-29 | 2009-12-29 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2011079437A1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101506867A (en) * | 2005-06-30 | 2009-08-12 | Microsoft Corporation | Keyboard with input-sensitive display device |
CN201289634Y (en) * | 2006-10-11 | 2009-08-12 | Apple Inc. | Input device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9104261B2 (en) | 2009-12-29 | 2015-08-11 | Nokia Technologies Oy | Method and apparatus for notification of input environment |
JPWO2013024530A1 (en) * | 2011-08-15 | 2015-03-05 | Fujitsu Limited | Portable electronic device and key display program |
WO2013149403A1 (en) | 2012-04-07 | 2013-10-10 | Motorola Mobility, Inc. | Text select and enter |
CN104541239A (en) * | 2012-04-07 | 2015-04-22 | Motorola Mobility LLC | Text select and enter |
EP2834725A4 (en) * | 2012-04-07 | 2015-12-09 | Motorola Mobility Llc | Text select and enter |
CN104679723A (en) * | 2013-11-29 | 2015-06-03 | Beijing Ereneben Information Technology Co., Ltd. | Text contrast display method, system and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9274646B2 (en) | Method and apparatus for selecting text information | |
US9524094B2 (en) | Method and apparatus for causing display of a cursor | |
EP2399187B1 (en) | Method and apparatus for causing display of a cursor | |
US9104261B2 (en) | Method and apparatus for notification of input environment | |
US20100199226A1 (en) | Method and Apparatus for Determining Input Information from a Continuous Stroke Input | |
US20090002324A1 (en) | Method, Apparatus and Computer Program Product for Providing a Scrolling Mechanism for Touch Screen Devices | |
US9229615B2 (en) | Method and apparatus for displaying additional information items | |
US20130205262A1 (en) | Method and apparatus for adjusting a parameter | |
US20110057885A1 (en) | Method and apparatus for selecting a menu item | |
US20100265185A1 (en) | Method and Apparatus for Performing Operations Based on Touch Inputs | |
US20110148739A1 (en) | Method and Apparatus for Determining Information for Display | |
US20100194694A1 (en) | Method and Apparatus for Continuous Stroke Input | |
US20110148934A1 (en) | Method and Apparatus for Adjusting Position of an Information Item | |
US20110154267A1 (en) | Method and Apparatus for Determining an Operation Associated with a Continuous Stroke Input | |
WO2011079437A1 (en) | Method and apparatus for receiving input | |
US20100333015A1 (en) | Method and apparatus for representing text information | |
US20130076622A1 (en) | Method and apparatus for determining input | |
EP2548107B1 (en) | Method and apparatus for determining a selection region | |
WO2011079432A1 (en) | Method and apparatus for generating a text image | |
HK1179017A (en) | Method and apparatus for determining a selection region | |
WO2012059647A1 (en) | Method and apparatus for generating a visual representation of information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09852722; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 09852722; Country of ref document: EP; Kind code of ref document: A1 |