
WO2011079432A1 - Method and apparatus for generating a text image - Google Patents


Info

Publication number
WO2011079432A1
WO2011079432A1 · PCT/CN2009/076183 · CN2009076183W
Authority
WO
WIPO (PCT)
Prior art keywords
image
character
input
text
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2009/076183
Other languages
French (fr)
Inventor
Ning Yang
Kuifei Yu
Liangfeng Xu
Biao Ren
Juntao Zhen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Inc
Original Assignee
Nokia Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Inc filed Critical Nokia Inc
Priority to PCT/CN2009/076183 priority Critical patent/WO2011079432A1/en
Publication of WO2011079432A1 publication Critical patent/WO2011079432A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/26Techniques for post-processing, e.g. correcting the recognition result
    • G06V30/262Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink
    • G06V30/333Preprocessing; Feature extraction
    • G06V30/347Sampling; Contour coding; Stroke extraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/40062Discrimination between different image types, e.g. two-tone, continuous tone
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/28Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet
    • G06V30/287Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet of Kanji, Hiragana or Katakana characters

Definitions

  • the present application relates generally to generating a text image.
  • the devices may generate images based on information captured by a camera, provided by user input, received from another apparatus, and/or the like.
  • An apparatus comprising a processor, memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: identify a first element of an image as a character image, determine a first character group comprising at least one character represented by the character image, identify a second element of the image that is a non character image, determine a second character group comprising at least one character indicative of the second element, and generate a first text image comprising the determined character and the determined character representation is disclosed.
  • a method comprising identifying a first element of an image as a character image, determining a first character group comprising at least one character represented by the character image, identifying a second element of the image that is a non character image, determining a second character group comprising at least one character indicative of the second element, and generating by a processor a first text image comprising the determined character and the determined character representation is disclosed.
  • a computer-readable medium encoded with instructions that, when executed by a computer, perform: identifying a first element of an image as a character image, determining a first character group comprising at least one character represented by the character image, identifying a second element of the image that is a non character image, determining a second character group comprising at least one character indicative of the second element, and generating a first text image comprising the determined character and the determined character representation is disclosed.
  • An apparatus comprising means for identifying a first element of an image as a character image, means for determining a first character group comprising at least one character represented by the character image, means for identifying a second element of the image that is a non character image, means for determining a second character group comprising at least one character indicative of the second element, and means for generating by a processor a first text image comprising the determined character and the determined character representation is disclosed.
  • FIGURES 1A - 1H are diagrams illustrating character images according to an example embodiment
  • FIGURES 2A - 2D are diagrams illustrating non character images according to an example embodiment
  • FIGURES 3A - 3E are diagrams illustrating an image comprising a character image and a non character image according to an example embodiment
  • FIGURES 4A - 4C are diagrams illustrating text images according to an example embodiment
  • FIGURE 5 is a flow diagram showing a set of operations for generating a text image according to an example embodiment
  • FIGURE 6 is a flow diagram showing a set of operations 600 for generating a text image according to an example embodiment
  • FIGURES 7A - 7E are diagrams illustrating input associated with a touch display according to an example embodiment.
  • FIGURE 8 is a block diagram showing an apparatus according to an example embodiment.

DETAILED DESCRIPTION OF THE DRAWINGS
  • An embodiment of the invention and its potential advantages are understood by referring to FIGURES 1 through 8 of the drawings.
  • a user of an apparatus may desire to generate an image.
  • the user may desire that the amount of computer readable information representing the image be small.
  • the user may desire to store the computer readable information on the apparatus, for example in non-volatile memory 42 of FIGURE 8, volatile memory 40 of FIGURE 8, and/or the like.
  • the user may desire to send the image to another device, for example using a message, an upload, a transfer, and/or the like, while reducing the amount of computer readable information representing the image.
  • the user may desire to display the image on a device with limited capability for displaying the image.
  • the device may have limited storage capability, limited display capability, limited image processing capability, and/or the like.
  • an apparatus may represent an image by generating a text image.
  • a text image relates to a group of characters arranged to convey graphical and/or text information utilizing characters, such as American Standard Code for Information Interchange (ASCII) characters, and/or the like.
  • the apparatus may generate the text image based, at least in part, on a graphical image.
  • the graphical image may be an image represented in a graphical format, such as bitmap, Joint Photographic Experts Group (JPEG), and/or the like.
  • the apparatus may generate the text image, based, at least in part, on input information, such as information associated with a user drawing, writing, and/or the like.
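  • While the embodiments do not prescribe a particular conversion technique, the following minimal sketch illustrates one possibility: mapping pixel luminance of a graphical image (or rendered drawing input) onto a small character palette. The bitmap_to_text_image helper, its palette, and the 2:1 character aspect ratio are assumptions, not taken from the patent.

```python
# A minimal sketch, assuming a PIL/Pillow image as input; helper name,
# palette, and aspect-ratio handling are illustrative choices only.
from PIL import Image

PALETTE = "@#*+-. "  # darker pixels map to "denser" characters

def bitmap_to_text_image(img, width=40):
    """Render a graphical image as a text image of the given width."""
    gray = img.convert("L")                      # 8-bit grayscale
    # Text characters are roughly twice as tall as wide, so halve the rows.
    height = max(1, gray.height * width // (gray.width * 2))
    gray = gray.resize((width, height))
    rows = []
    for y in range(height):
        rows.append("".join(
            PALETTE[gray.getpixel((x, y)) * (len(PALETTE) - 1) // 255]
            for x in range(width)))
    return "\n".join(rows)

# Usage: print(bitmap_to_text_image(Image.open("drawing.png")))
```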
  • FIGURES 1A - 1H are diagrams illustrating character images according to an example embodiment.
  • the examples of FIGURES 1A - 1H are merely examples of character images, and do not limit the scope of the claims.
  • character images may vary with respect to language, characters, orientation, size, alignment, and/or the like.
  • the characters may relate to Arabic characters, Latin characters, Indic characters, Japanese characters, and/or the like.
  • a character image relates to graphical information that represents at least one character.
  • a character image may relate to an image of a written character, a typed character, a printed character, and/or the like.
  • a character image may relate to a part and/or the entirety of an image.
  • the character image may relate to one or more written characters, copied characters, photographed characters, scanned characters, and/or the like.
  • At least one character may be determined based, at least in part, on the character image.
  • an apparatus may perform handwriting recognition, continuous handwriting recognition, optical character recognition (OCR), and/or the like on a character image to determine one or more characters.
  • the accuracy of determination of the at least one character may vary across apparatuses and does not limit the claims set forth herein.
  • a first apparatus may have less accurate OCR than a second apparatus.
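  • As an illustration only, such a determination could be delegated to an off-the-shelf OCR engine; the sketch below assumes the pytesseract wrapper around Tesseract, which the patent does not mandate.

```python
# Illustrative only: the patent does not specify a recognizer. This assumes
# the pytesseract library and a local Tesseract installation.
from PIL import Image
import pytesseract

def characters_from_character_image(img):
    # Run OCR over the character image and return the recognized text,
    # e.g. "Big" for an image like that of FIGURE 1A.
    return pytesseract.image_to_string(img).strip()
```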
  • FIGURE 1A is a diagram illustrating a character image according to an example embodiment.
  • the character image represents three letters that form the word "Big."
  • Figure 1B is a diagram of characters represented by the character image of FIGURE 1A.
  • the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8.
  • the characters are "Big.”
  • FIGURE 1C is a diagram illustrating a character image according to an example embodiment.
  • the character image represents letters and punctuation that form "The dog is big."
  • Figure 1D is a diagram of characters represented by the character image of FIGURE 1C.
  • the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8.
  • the characters are "The dog is big.”
  • FIGURE 1E is a diagram illustrating a character image according to an example embodiment.
  • the character image represents five script letters that form the word "hello."
  • Figure 1F is a diagram of characters represented by the character image of FIGURE 1E.
  • the characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8.
  • the characters are "hello.”
  • FIGURE 1G is a diagram illustrating a character image according to an example embodiment.
  • the character image represents characters that form a word in a non-Latin script.
  • Figure 1H is a diagram of characters represented by the character image of FIGURE 1G. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8. In the example of FIGURE 1H, the characters are those of the word represented by the character image of FIGURE 1G.
  • FIGURES 2A - 2D are diagrams illustrating non character images according to an example embodiment.
  • the examples of FIGURES 2A - 2D are merely examples of non character images, and do not limit the scope of the claims.
  • non character images may vary with respect to orientation, size, alignment, and/or the like.
  • a non character image relates to graphical information that represents at least one element, but does not represent a character.
  • a non character image may relate to an image that does not comprise a representation of a word.
  • a non character image may relate to a part and/or the entirety of an image.
  • the non character image may relate to one or more lines, arcs, curves, and/or the like.
  • An apparatus may determine a character representation indicative of a non character image.
  • the apparatus may utilize a liquid flow algorithm, a block analysis algorithm, and/or the like.
  • a liquid flow algorithm may evaluate along the path of a line associated with the non character image.
  • a block algorithm may evaluate a non character image by partitioning it into portions, determining a character that most closely approximates each portion, and combining the characters in accordance with the portions.
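  • The following sketch illustrates that block analysis idea under stated assumptions: the candidate character set, the flat binary cell bitmaps, and the pixel-match similarity measure are illustrative choices the patent leaves open.

```python
# Sketch of block analysis: partition the non character image into
# character-sized cells and, per cell, pick the candidate character whose
# glyph bitmap best matches the cell's pixels.
CANDIDATES = " .-_|/\\()"

def block_analysis(cells, glyphs):
    """cells: 2D grid (rows of cells) of flat binary cell bitmaps;
    glyphs: dict mapping each candidate character to a bitmap of the
    same cell size."""
    def similarity(cell, glyph):
        # Count matching pixels between the image cell and the glyph.
        return sum(c == g for c, g in zip(cell, glyph))
    rows = []
    for cell_row in cells:
        rows.append("".join(
            max(CANDIDATES, key=lambda ch: similarity(cell, glyphs[ch]))
            for cell in cell_row))
    return "\n".join(rows)
```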
  • FIGURE 2A is a diagram illustrating a non character image according to an example embodiment.
  • the example of FIGURE 2A may relate to input indicating handwriting, a scanned image, a received image, a photograph, and/or the like.
  • Figure 2B is a diagram of a character representation of the non character image of FIGURE 2A.
  • the character representation may be determined by an apparatus, such as electronic device 10 of FIGURE 8.
  • FIGURE 2C is a diagram illustrating a non character image according to an example embodiment.
  • the example of FIGURE 2C may relate to input indicating handwriting, a scanned image, a received image, a photograph, and/or the like.
  • FIGURES 3A - 3E are diagrams illustrating an image comprising a character image, similar as described with reference to FIGURES 1A-1H, and a non character image, similar as described with reference to FIGURES 2A-2D, according to an example embodiment.
  • the examples of FIGURES 3A - 3E are merely examples of images, and do not limit the scope of the claims.
  • the character image and/or non character image may vary with respect to orientation, size, alignment, and/or the like.
  • the image may comprise more than one character image and/or more than one non character image.
  • an apparatus may generate a text image comprising one or more characters represented by the character image and a character representation of the one or more non character images. For example, the apparatus may evaluate the character image and the non character image separately, simultaneously, in parallel, sequentially, and/or the like. In an example embodiment, the apparatus may remove a part of the image after determining the character or character representation associated with the part of the image. In another example, the apparatus may determine a character represented by a part of the image associated with a character image in conjunction with determining a character representation of a different part of the image associated with a non character image.
  • FIGURE 3A is a diagram illustrating an image 300 according to an example embodiment.
  • the image of example of FIGURE 3A comprises element 301 that relates to a character image, and element 302 that relates to a non character image.
  • FIGURE 3B is a diagram illustrating a text image 320 that represents image 300 of FIGURE 3A.
  • Text image 320 comprises character group 321, which relates to element 301 of FIGURE 3A, and character group 322, which relates to element 302 of FIGURE 3A.
  • FIGURE 3C is a diagram illustrating an image 340 according to an example embodiment.
  • the image of example of FIGURE 3C comprises element 341 that relates to a character image, and element 342 that relates to a non character image.
  • FIGURE 3D is a diagram illustrating a text image 360 that represents image 340 of FIGURE 3C.
  • Text image 360 comprises character group 361, which relates to element 341 of FIGURE 3C, and character group 362, which relates to element 342 of FIGURE 3C.
  • a user may desire to generate the text image for a specific apparatus, display, type of apparatus, type of display, environment, and/or the like.
  • a representation area constraint may relate to a dimensional constraint associated with the text image, such as a constraint regarding the number of characters, size of characters, height, width, and/or the like.
  • a representation constraint may relate to a constraint that the width of the text image must be 10 characters or less.
  • a representation constraint may relate to a constraint that the height of the text image must be 12 characters or less.
  • an apparatus may determine the text image based, at least in part, on a representation area constraint.
  • an apparatus may determine position of, insert, remove, and/or the like, at least one character associated with a text image based on a representation constraint. For example, the apparatus may position a character differently than indicated by the image. In such an example, the character may be positioned lower than indicated by the image, higher than indicated by the image, more leftward than indicated by the image, more rightward than indicated by the image, and/or the like.
  • FIGURE 3E is a diagram illustrating a modified text image 380 that represents image 340 of FIGURE 3C.
  • Modified text image 380 comprises character group 381, which relates to character image 341 of FIGURE 3C, and character group 382, which relates to non character image 342 of FIGURE 3C.
  • Modified text image 380 relates to a representation of image 340 based, at least in part, on a horizontal representation area constraint.
  • the " ⁇ " character is positioned beneath the character, even though such positioning is different than indicated by image 340.
  • Such positioning may relate to a horizontal dimension constraint on representation area, such as a horizontal constraint of 6 characters.
  • Although Figure 3E relates to positioning a character lower than indicated in the image, the character positioning may vary depending on the representation area constraint, other characters of the text image, and/or the like.
  • a first character indicated below a second character in the image may be positioned horizontally adjacent to the second character based, at least in part, on a vertical representation area constraint, a third character of the text image, and/or the like.
  • an apparatus in determining position of a character with regard to a representation area constraint, may evaluate one or more candidate positions for the character. For example, the apparatus may select a position for the character based, at least in part, on determination of the candidate position that results in the greatest similarity between the text image and the image.
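  • The sketch below illustrates such candidate evaluation under a width constraint; the grid representation and the distance-based score are assumptions, the patent only requiring selection of the candidate yielding the greatest similarity between the text image and the image.

```python
# Sketch: the text image is a mutable grid of characters, the width
# constraint caps the usable columns, and candidates are scored by Manhattan
# distance to the position indicated by the source image. Assumes at least
# one free cell remains within the constrained area.
def place_character(grid, ch, indicated_row, indicated_col, max_width):
    candidates = [(r, c) for r in range(len(grid))
                  for c in range(min(max_width, len(grid[0])))
                  if grid[r][c] == " "]           # unoccupied cells only
    # The selected candidate may land lower, higher, or to either side of
    # the indicated position, as with text image 380 of FIGURE 3E.
    row, col = min(candidates,
                   key=lambda rc: abs(rc[0] - indicated_row)
                                  + abs(rc[1] - indicated_col))
    grid[row][col] = ch
    return row, col
```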
  • FIGURES 4A - 4C are diagrams illustrating text images according to an example embodiment.
  • the examples of FIGURES 4A - 4C are merely examples of text images, and do not limit the scope of the claims.
  • an image may comprise more or less detail than illustrated in the examples of FIGURES 4A-4C.
  • an apparatus may generate more than two associated text images to represent an image.
  • a user may desire to generate a text image based, at least in part, on an image where part of the image has a level of detail difficult to represent in a text image.
  • an apparatus may generate more than one text image to represent at least some of the details of the image.
  • the apparatus may generate a first text image that may omit at least one detail of the image.
  • a detail may relate to an element of an image that spans beyond a text image representation, that is too small to represent in a text image, and/or the like.
  • the apparatus may generate a second text image that comprises less than the entirety of the image, but comprises at least one detail associated with the image that was unrepresented in the first image.
  • the determination to generate more than one text image to represent an image may be based on a representation area constraint, similar as described with reference to FIGURE 3E.
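  • The sketch below illustrates the two-text-image approach, reusing the illustrative bitmap_to_text_image helper sketched earlier; the detail_region box is an assumed stand-in for detail detection, which the patent does not specify.

```python
# Sketch of generating an overview text image plus a detail text image;
# detail_region is a (left, upper, right, lower) crop box (assumption).
def generate_text_images(img, detail_region, width):
    overview = bitmap_to_text_image(img, width=width)   # may omit fine detail
    detail = bitmap_to_text_image(img.crop(detail_region), width=width)
    return overview, detail   # candidates for association, as described below
```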
  • FIGURE 4A is a diagram illustrating an image 400 according to an example embodiment.
  • the image of example of FIGURE 4A comprises element 401 that relates to a character image, element 402 that relates to another character image, element 403 that relates to details of the image, and elements relating to a non character image.
  • FIGURE 4B is a diagram illustrating a text image 420 according to an example embodiment.
  • Text image 420 comprises character group 421 representing character image element 401 of FIGURE 4A, character group 422 representing character image element 402 of FIGURE 4A, and a character group representing elements relating to the non character image of image 400 of FIGURE 4A. It can be seen that text image 420 does not include characters that represent detail element 403 of FIGURE 4A.
  • FIGURE 4C is a diagram illustrating a text image 440 according to an example embodiment.
  • Text image 440 comprises character group 442 representing character image element 402 of FIGURE 4A, character group 443 representing detail element 403, and a character group representing elements relating to part of the non character image of image 400 of FIGURE 4A. It can be seen that text image 440 relates to a part of image 400 of FIGURE 4A that is less than the entirety of image 400.
  • an apparatus may associate a first text image, such as text image 440 of FIGURE 4C, to a second text image, such as text image 420 of FIGURE 4B.
  • the association may relate to a computer memory reference, a textual reference, an image reference, and/or the like.
  • the association may relate to a pointer in memory to the first text image.
  • the association may relate to characters of the text image, such as character group 422 of FIGURE 4B and/or character group 442 of FIGURE 4C.
  • the association may be indicated in the first text image and/or the second text image. For example, in text image 420 and text image 440, the characters indicating "Kitchen" may indicate the association between text image 420 and text image 440.
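  • One possible in-memory form of such an association is sketched below; the TextImage type and its fields are illustrative, not from the patent.

```python
# Each text image may carry a reference to a related text image, e.g. an
# overview image pointing at a detail image (memory-reference association).
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextImage:
    characters: str                       # the character grid as text
    label: str = ""                       # e.g. "Kitchen"
    detail: Optional["TextImage"] = None  # associated detail text image

overview = TextImage(characters="<text image 420>", label="Floorplan")
overview.detail = TextImage(characters="<text image 440>", label="Kitchen")
```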
  • FIGURES 4A-4C illustrate a representation of an image having a text image indicating the span of the image, and a text image indicating a detailed subset of the image.
  • an apparatus may partition the image differently.
  • the apparatus may vary the partitioning depending upon a predetermined setting, an evaluation of details of the image, an evaluation of the size of the image, a representation area constraint, and/or the like.
  • the apparatus may represent an image as a set of adjacent text images, a set of logically embedded text images, and/or the like.
  • FIGURE 5 is a flow diagram showing a set of operations 500 for generating a text image according to an example embodiment.
  • An apparatus, for example electronic device 10 of FIGURE 8 or a portion thereof, may utilize the set of operations 500.
  • the apparatus may comprise means, including, for example processor 20 of FIGURE 8, for performing the operations of FIGURE 5.
  • an apparatus, for example device 10 of FIGURE 8 is transformed by having memory, for example memory 42 of FIGURE 8, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 8, cause the apparatus to perform set of operations 500.
  • the apparatus identifies a first element of an image as a character image.
  • the image may relate to a stored image, a received image, an image associated with input, and/or the like.
  • the input may relate to one or more keypad inputs, motion inputs, touch inputs such as touch input 740 of FIGURE 7C, and/or the like.
  • the image may be received, for example by receiver 16 of FIGURE 8.
  • the image may be received in a message, such as an email, multimedia message, instant message, and/or the like.
  • the image may be received from a camera module, such as camera module 36 of FIGURE 8.
  • the identification of the first element as a character image may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
  • the apparatus determines a first character group comprising at least one character represented by the character image.
  • the determination of the first character group may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
  • the apparatus identifies a second element of the image that is a non character image.
  • the identification of the second element as a non character image may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
  • the apparatus determines a second character group comprising at least one character indicative of the second element.
  • the determination of the second character group may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
  • the apparatus generates a text image comprising the determined character and the determined character representation.
  • the generation of the text image may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
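  • The following runnable skeleton summarizes the set of operations 500; the stub bodies and the dictionary-based image stand in for the techniques described with reference to FIGURES 3A-3E and 4A-4C, and all function names are illustrative, not from the patent.

```python
# Skeleton of operations 500; each stub marks the corresponding block.
def identify_character_image(image):            # block 501
    return image.get("character_element")

def determine_first_character_group(element):   # block 502
    return element or ""                        # e.g. an OCR result like "Big"

def identify_non_character_image(image):        # block 503
    return image.get("non_character_element")

def determine_second_character_group(element):  # block 504
    return element or ""                        # e.g. "( )" approximating arcs

def generate_text_image(image):                 # block 505
    first = determine_first_character_group(identify_character_image(image))
    second = determine_second_character_group(identify_non_character_image(image))
    return first + "\n" + second

print(generate_text_image({"character_element": "Big",
                           "non_character_element": "(  )"}))
```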
  • FIGURE 6 is a flow diagram showing a set of operations 600 for generating a text image according to an example embodiment.
  • An apparatus, for example electronic device 10 of FIGURE 8 or a portion thereof, may utilize the set of operations 600.
  • the apparatus may comprise means, including, for example processor 20 of FIGURE 8, for performing the operations of FIGURE 6.
  • an apparatus, for example device 10 of FIGURE 8 is transformed by having memory, for example memory 42 of FIGURE 8, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 8, cause the apparatus to perform set of operations 600.
  • the apparatus receives indication of at least one touch input, such as touch input 740 of FIGURE 7C, and generates the image based, at least in part, on the at least one touch input.
  • the apparatus may receive indication of the first input by retrieving information from one or more memories, such as non-volatile memory 42 of FIGURE 8, receiving one or more indications of the touch input from a part of the apparatus, such as a touch display, for example display 28 of FIGURE 8, receiving indication of the touch input from a receiver, such as receiver 16 of FIGURE 8, and/or the like.
  • the apparatus may receive the indication of the touch input from a different apparatus, such as a mouse, a keyboard, an external touch display, and/or the like.
  • the apparatus identifies a first element of an image as a character image.
  • the identification, first element, and image may be similar as described with reference to block 501 of FIGURE 5.
  • the apparatus determines a first character group comprising at least one character represented by the character image.
  • the determination of the first character group may be similar as described with reference to block 502 of FIGURE 5.
  • the apparatus removes the first element from the image.
  • the removal of the first element may relate to removing the element from the image itself, a copy of the image, and/or the like.
  • the apparatus identifies a second element of the image that is a non character image.
  • the identification of the second element as a non character image may be similar as described with reference to block 503 of FIGURE 5.
  • the apparatus determines at least one representation area constraint.
  • the representation area constraint may be similar as described with reference to FIGURE 3E.
  • the representation area may relate to a display associated with the apparatus, such as a separate display, an included display, such as display 28 of FIGURE 8, a display associated with another apparatus, and/or the like.
  • the representation area may relate to a display included in a message receiving apparatus to which the apparatus will send a message comprising the text image.
  • the apparatus may receive representation area constraint information from the receiving apparatus, determine the dimension information based, at least in part, on a predetermined setting, and/or the like.
  • the apparatus determines a second character group comprising at least one character indicative of the second element.
  • the determination of the second character group may be similar as described with reference to block 504 of FIGURE 5.
  • the apparatus generates a first text image comprising the determined character and the determined character representation based, at least in part, on the representation area constraint.
  • the generation of the first text image may be similar as described with reference to block 505 of FIGURE 5.
  • the first text image may be a modified text image similar as described with reference to FIGURE 3E.
  • the apparatus generates a second text image representing a part of the image that is less than the entirety of the image.
  • the generation of the text image may be similar as described with reference to FIGURES 4A-4C.
  • the apparatus associates the first text image and the second text image.
  • the association may be similar as described with reference to FIGURES 4A-4C.
  • the apparatus sends the first text image and the second text image via a message.
  • the message may be an email message, a short message service (SMS) message, a multimedia message, an instant messaging message, and/or the like.
  • FIGURES 7A - 7E are diagrams illustrating input associated with a touch display, for example display 28 of FIGURE 8, according to an example embodiment.
  • a circle represents an input related to contact with a touch display
  • two crossed lines represent an input related to releasing a contact from a touch display
  • a line represents input related to movement on a touch display.
  • Even though the examples of FIGURES 7A - 7E indicate continuous contact with a touch display, there may be a part of the input that fails to make direct contact with the touch display. Under such circumstances, the apparatus may, nonetheless, determine that the input is a continuous stroke input.
  • the apparatus may utilize proximity information, for example information relating to nearness of an input implement to the touch display, to determine part of a touch input.
  • input 700 relates to receiving contact input 702 and receiving a release input 704.
  • contact input 702 and release input 704 occur at the same position.
  • an apparatus utilizes the time between receiving contact input 702 and release input 704.
  • the apparatus may interpret input 700 as a tap for a short time between contact input 702 and release input 704, as a press for a longer time between contact input 702 and release input 704, and/or the like.
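  • A sketch of such time-based interpretation appears below; the 0.5 second threshold is an assumption, as the patent only contrasts a short time with a longer time.

```python
# Interpreting a stationary input 700 as a tap or a press by duration.
TAP_THRESHOLD_S = 0.5   # assumed cutoff; not specified by the patent

def interpret_stationary_input(contact_time, release_time):
    duration = release_time - contact_time
    return "tap" if duration < TAP_THRESHOLD_S else "press"
```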
  • input 720 relates to receiving contact input 722, a movement input 724, and a release input 726.
  • Input 720 relates to a continuous stroke input.
  • contact input 722 and release input 726 occur at different positions.
  • Input 720 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like.
  • an apparatus interprets input 720 based at least in part on the speed of movement 724. For example, if input 720 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like.
  • an apparatus interprets input 720 based at least in part on the distance between contact input 722 and release input 726. For example, if input 720 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 722 and release input 726.
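  • The sketch below computes the speed and distance on which such interpretations may be based; the Euclidean metric and the example scalings are assumptions.

```python
# Metrics for a continuous stroke such as input 720: distance between the
# contact and release positions, and average speed of the movement.
import math

def stroke_metrics(contact_pos, release_pos, contact_time, release_time):
    distance = math.dist(contact_pos, release_pos)
    speed = distance / max(release_time - contact_time, 1e-6)
    return distance, speed

distance, speed = stroke_metrics((10, 10), (110, 10), 0.0, 0.25)
pan_amount = 0.01 * speed   # larger pan for a faster movement
box_scale = distance        # e.g. resize a box by the stroke distance
```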
  • An apparatus may interpret the input before receiving release input 726. For example, the apparatus may evaluate a change in the input, such as speed, position, and/or the like. In such an example, the apparatus may perform one or more determinations based upon the change in the touch input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
  • input 740 relates to receiving contact input 742, a movement input 744, and a release input 746 as shown.
  • Input 740 relates to a continuous stroke input.
  • contact input 742 and release input 746 occur at different positions.
  • Input 740 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like.
  • an apparatus interprets input 740 based at least in part on the speed of movement 744. For example, if input 740 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like.
  • an apparatus interprets input 740 based at least in part on the distance between contact input 742 and release input 746. For example, if input 740 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 742 and release input 746. In still another example embodiment, the apparatus interprets the position of the release input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
  • input 760 relates to receiving contact input 762, and a movement input 764, where contact is released during movement.
  • Input 760 relates to a continuous stroke input.
  • Input 760 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like.
  • an apparatus interprets input 760 based at least in part on the speed of movement 764. For example, if input 760 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like.
  • an apparatus interprets input 760 based at least in part on the distance associated with the movement input 764. For example, if input 760 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance of the movement input 764 from the contact input 762 to the release of contact during movement.
  • an apparatus may receive multiple touch inputs at coinciding times. For example, there may be a tap input at a position and a different tap input at a different location during the same time. In another example there may be a tap input at a position and a drag input at a different position.
  • An apparatus may interpret the multiple touch inputs separately, together, and/or a combination thereof. For example, an apparatus may interpret the multiple touch inputs in relation to each other, such as the distance between them, the speed of movement with respect to each other, and/or the like.
  • input 780 relates to receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792.
  • Input 780 relates to two continuous stroke inputs. In this example, contact inputs 782 and 788, and release inputs 786 and 792, occur at different positions.
  • Input 780 may be characterized as a multiple touch input. Input 780 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, to indicating one or more user selected text positions and/or the like.
  • an apparatus interprets input 780 based at least in part on the speed of movements 784 and 790.
  • an apparatus interprets input 780 based at least in part on the distance between contact inputs 782 and 788 and release inputs 786 and 792. For example, if input 780 relates to a scaling operation, such as resizing a box, the scaling may relate to the collective distance between contact inputs 782 and 788 and release inputs 786 and 792.
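  • A sketch of such a scaling interpretation appears below; mapping the scale factor to the ratio of contact-point distances is a common pinch interpretation and an assumption here, not the patent's prescribed method.

```python
# Scale factor for multiple touch input 780: the ratio of the distance
# between the release positions to the distance between the contact positions.
import math

def pinch_scale(contact_a, contact_b, release_a, release_b):
    start = math.dist(contact_a, contact_b)
    end = math.dist(release_a, release_b)
    return end / max(start, 1e-6)   # >1 grows the box, <1 shrinks it

scale = pinch_scale((0, 0), (100, 0), (-20, 0), (120, 0))  # 1.4x
```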
  • the timing associated with the apparatus receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792 varies.
  • the apparatus may receive contact input 782 before contact input 788, after contact input 788, concurrent to contact input 788, and/or the like.
  • the apparatus may or may not utilize the related timing associated with the receiving of the inputs.
  • the apparatus may utilize an input received first by associating the input with a preferential status, such as a primary selection point, a starting position, and/or the like.
  • the apparatus may utilize non-concurrent inputs as if the apparatus received the inputs concurrently.
  • the apparatus may utilize a release input received first the same way that the apparatus would utilize the same input if the apparatus had received the input second.
  • A first touch input comprising a contact input, a movement input, and a release input may be similar to a second touch input comprising a contact input, a movement input, and a release input, even though they may differ in the position of the contact input and the position of the release input.
  • FIGURE 8 is a block diagram showing an apparatus, such as an electronic device 10, according to an example embodiment.
  • an electronic device as illustrated and hereinafter described is merely illustrative of an electronic device that could benefit from embodiments of the invention and, therefore, should not be taken to limit the scope of the invention.
  • While one embodiment of the electronic device 10 is illustrated and will be hereinafter described for purposes of example, other types of electronic devices, such as, but not limited to, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, media players, cameras, video recorders, global positioning system (GPS) devices and other types of electronic systems, may readily employ embodiments of the invention.
  • the apparatus of an example embodiment need not be the entire electronic device, but may be a component or group of components of the electronic device in other example embodiments.
  • devices may readily employ embodiments of the invention regardless of their intent to provide mobility.
  • embodiments of the invention are described in conjunction with mobile communications applications, it should be understood that embodiments of the invention may be utilized in conjunction with a variety of other applications, both in the mobile communications industries and outside of the mobile communications industries.
  • the electronic device 10 may comprise an antenna (or multiple antennae), a wired connector, and/or the like in operable communication with a transmitter 14 and a receiver 16.
  • the electronic device 10 may further comprise a processor 20 or other processing circuitry that provides signals to and receives signals from the transmitter 14 and receiver 16, respectively.
  • the signals may comprise signaling information in accordance with a communications interface standard, user speech, received data, user generated data, and/or the like.
  • the electronic device 10 may operate with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the electronic device 10 may operate in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like.
  • the electronic device 10 may operate in accordance with wireline protocols, such as Ethernet, digital subscriber line (DSL), and asynchronous transfer mode (ATM), with second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), Global System for Mobile communications (GSM), and IS-95 (code division multiple access (CDMA)), with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA), with fourth-generation (4G) wireless communication protocols, with wireless networking protocols, such as 802.11, with short-range wireless protocols, such as Bluetooth, and/or the like.
  • the term "circuitry" refers to all of the following: (a) hardware-only implementations (such as implementations in only analog and/or digital circuitry), (b) combinations of circuits and software and/or firmware, such as a combination of processor(s), or portions of processor(s)/software including digital signal processor(s), software, and memory(ies), that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry would also cover an implementation of merely a processor, multiple processors, or portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a cellular network device or other network device.
  • Processor 20 may comprise means, such as circuitry, for implementing audio, video, communication, navigation, logic functions, and/or the like, as well as for implementing embodiments of the invention including, for example, one or more of the functions described in conjunction with FIGURES 1-8.
  • processor 20 may comprise means, such as a digital signal processor device, a microprocessor device, various analog to digital converters, digital to analog converters, processing circuitry and other support circuits, for performing various functions including, for example, one or more of the functions described in conjunction with FIGURES 1-8.
  • the apparatus may perform control and signal processing functions of the electronic device 10 among these devices according to their respective capabilities.
  • the processor 20 thus may comprise the functionality to encode and interleave messages and data prior to modulation and transmission.
  • the processor 20 may additionally comprise an internal voice coder, and may comprise an internal data modem. Further, the processor 20 may comprise functionality to operate one or more software programs, which may be stored in memory and which may, among other things, cause the processor 20 to implement at least one embodiment including, for example, one or more of the functions described in conjunction with FIGURES 1-8. For example, the processor 20 may operate a connectivity program, such as a conventional internet browser.
  • the connectivity program may allow the electronic device 10 to transmit and receive internet content, such as location-based content and/or other web page content, according to a Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP), and/or the like, for example.
  • the electronic device 10 may comprise a user interface for providing output and/or receiving input.
  • the electronic device 10 may comprise an output device such as a ringer, a conventional earphone and/or speaker 24, a microphone 26, a display 28, and/or a user input interface, which are coupled to the processor 20.
  • the user input interface, which allows the electronic device 10 to receive data, may comprise means, such as one or more devices that may allow the electronic device 10 to receive data, such as a keypad 30, a touch display, for example if display 28 comprises touch capability, and/or the like.
  • the touch display may be configured to receive input from a single point of contact, multiple points of contact, and/or the like.
  • the touch display and/or the processor may determine input based on position, motion, speed, contact area, and/or the like.
  • the electronic device 10 may include any of a variety of touch displays including those that are configured to enable touch recognition by any of resistive, capacitive, infrared, strain gauge, surface wave, optical imaging, dispersive signal technology, acoustic pulse recognition or other techniques, and to then provide signals indicative of the location and other parameters associated with the touch. Additionally, the touch display may be configured to receive an indication of an input in the form of a touch event which may be defined as an actual physical contact between a selection object (e.g., a finger, stylus, pen, pencil, or other pointing device) and the touch display.
  • a touch event may be defined as bringing the selection object in proximity to the touch display, hovering over a displayed object or approaching an object within a predefined distance, even though physical contact is not made with the touch display.
  • a touch input may comprise any input that is detected by a touch display including touch events that involve actual physical contact and touch events that do not involve physical contact but that are otherwise detected by the touch display, such as a result of the proximity of the selection object to the touch display.
  • Display 28 may display two-dimensional information, three-dimensional information, and/or the like.
  • the keypad 30 may comprise numeric (for example, 0-9) keys, symbol keys (for example, #, *), alphabetic keys, and/or the like for operating the electronic device 10.
  • the keypad 30 may comprise a conventional QWERTY keypad arrangement.
  • the keypad 30 may also comprise various soft keys with associated functions.
  • the electronic device 10 may comprise an interface device such as a joystick or other user input interface.
  • the electronic device 10 further comprises a battery 34, such as a vibrating battery pack, for powering various circuits that are required to operate the electronic device 10, as well as optionally providing mechanical vibration as a detectable output.
  • the electronic device 10 comprises a media capturing element, such as a camera, video and/or audio module, in communication with the processor 20.
  • the media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission.
  • the camera module 36 may comprise a digital camera which may form a digital image file from a captured image.
  • the camera module 36 may comprise hardware, such as a lens or other optical component(s), and/or software necessary for creating a digital image file from a captured image.
  • the camera module 36 may comprise only the hardware for viewing an image, while a memory device of the electronic device 10 stores instructions for execution by the processor 20 in the form of software for creating a digital image file from a captured image.
  • the camera module 36 may further comprise a processing element such as a coprocessor that assists the processor 20 in processing image data and an encoder and/or decoder for compressing and/or decompressing image data.
  • the encoder and/or decoder may encode and/or decode according to a standard format, for example, a Joint Photographic Experts Group (JPEG) standard format.
  • the electronic device 10 may comprise one or more user identity modules (UIM) 38.
  • the UIM may comprise information stored in memory of electronic device 10, a part of electronic device 10, a device coupled with electronic device 10, and/or the like.
  • the UIM 38 may comprise a memory device having a built-in processor.
  • the UIM 38 may comprise, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), and/or the like.
  • the UIM 38 may store information elements related to a subscriber, an operator, a user account, and/or the like.
  • UIM 38 may store subscriber information, message information, contact information, security information, program information, and/or the like. Usage of one or more UIM 38 may be enabled and/or disabled.
  • electronic device 10 may enable usage of one or more UIM 38.
  • electronic device 10 comprises a single UIM 38.
  • at least part of subscriber information may be stored on the UIM 38.
  • electronic device 10 comprises a plurality of UIM 38.
  • electronic device 10 may comprise two UIM 38 blocks.
  • electronic device 10 may utilize part of subscriber information of a first UIM 38 under some circumstances and part of subscriber information of a second UIM 38 under other circumstances.
  • electronic device 10 may enable usage of the first UIM 38 and disable usage of the second UIM 38.
  • electronic device 10 may disable usage of the first UIM 38 and enable usage of the second UIM 38.
  • electronic device 10 may utilize subscriber information from the first UIM 38 and the second UIM 38.
  • Electronic device 10 may comprise a memory device including, in one embodiment, volatile memory 40, such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data.
  • the electronic device 10 may also comprise other memory, for example, non-volatile memory 42, which may be embedded and/or may be removable.
  • non-volatile memory 42 may comprise an EEPROM, flash memory or the like.
  • the memories may store any of a number of pieces of information, and data. The information and data may be used by the electronic device 10 to implement one or more functions of the electronic device 10, such as the functions described in conjunction with FIGURES 1-8.
  • the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, which may uniquely identify the electronic device 10.
  • Electronic device 10 may comprise one or more sensors 37.
  • Sensor 37 may comprise a light sensor, a proximity sensor, a motion sensor, a location sensor, and/or the like.
  • sensor 37 may comprise one or more light sensors at various locations on the device.
  • sensor 37 may provide sensor information indicating an amount of light perceived by one or more light sensors.
  • Such light sensors may comprise a photovoltaic element, a photoresistive element, a charge coupled device (CCD), and/or the like.
  • sensor 37 may comprise one or more proximity sensors at various locations on the device.
  • sensor 37 may provide sensor information indicating proximity of an object, a user, a part of a user, and/or the like, to the one or more proximity sensors.
  • Such proximity sensors may comprise capacitive measurement, sonar measurement, radar measurement, and/or the like.
  • FIGURE 8 illustrates an example of an electronic device that may utilize embodiments of the invention including those described and depicted, for example, in FIGURES 1-8.
  • electronic device 10 of FIGURE 8 is merely an example of a device that may utilize embodiments of the invention.
  • Embodiments of the invention may be implemented in software, hardware, application logic or a combination of software, hardware, and application logic.
  • the software, application logic and/or hardware may reside on the apparatus, a separate device, or a plurality of separate devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of separate devices.
  • the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
  • a "computer-readable medium” may be any tangible media or means that can contain, or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIGURE 8.
  • a computer-readable medium may comprise a computer-readable storage medium that may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
  • If desired, the functions may be performed in a different order; for example, a block of FIGURE 5 may be performed before block 502.
  • block 606 of FIGURE 6 may be performed before block 602.
  • one or more of the above-described functions may be optional or may be combined.
  • block 606 of FIGURE 6 may be optional or combined with block 608.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An apparatus, comprising a processor, memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: identify a first element of an image as a character image, determine a first character group comprising at least one character represented by the character image, identify a second element of the image that is a non character image, determine a second character group comprising at least one character indicative of the second element, and generate a first text image comprising the determined character and the determined character representation is disclosed.

Description

METHOD AND APPARATUS FOR GENERATING A TEXT IMAGE
TECHNICAL FIELD
[0001] The present application relates generally to generating a text image.
BACKGROUND
[0002] There has been a recent surge in the use of electronic devices that generate images. The devices may generate images based on information captured by a camera, provided by user input, received from another apparatus, and/or the like.
SUMMARY
[0003] Various aspects of examples of the invention are set out in the claims.
[0004] An apparatus, comprising a processor, memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: identify a first element of an image as a character image, determine a first character group comprising at least one character represented by the character image, identify a second element of the image that is a non character image, determine a second character group comprising at least one character indicative of the second element, and generate a first text image comprising the determined first character group and the determined second character group, is disclosed.
[0005] A method, comprising identifying a first element of an image as a character image, determining a first character group comprising at least one character represented by the character image, identifying a second element of the image that is a non character image, determining a second character group comprising at least one character indicative of the second element, and generating by a processor a first text image comprising the determined first character group and the determined second character group, is disclosed.
[0006] A computer-readable medium encoded with instructions that, when executed by a computer, perform: identifying a first element of an image as a character image, determining a first character group comprising at least one character represented by the character image, identifying a second element of the image that is a non character image, determining a second character group comprising at least one character indicative of the second element, and generating a first text image comprising the determined first character group and the determined second character group, is disclosed.
[0007] An apparatus, comprising means for identifying a first element of an image as a character image, means for determining a first character group comprising at least one character represented by the character image, means for identifying a second element of the image that is a non character image, means for determining a second character group comprising at least one character indicative of the second element, and means for generating by a processor a first text image comprising the determined first character group and the determined second character group, is disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] For a more complete understanding of embodiments of the invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
[0009] FIGURES 1A - 1H are diagrams illustrating character images according to an example embodiment;
[0010] FIGURES 2A - 2D are diagrams illustrating non character images according to an example embodiment;
[0011] FIGURES 3A - 3E are diagrams illustrating an image comprising a character image and a non character image according to an example embodiment;
[0012] FIGURES 4A - 4C are diagrams illustrating text images according to an example embodiment;
[0013] FIGURE 5 is a flow diagram showing a set of operations for generating a text image according to an example embodiment;
[0014] FIGURE 6 is a flow diagram showing a set of operations 600 for generating a text image according to an example embodiment;
[0015] FIGURES 7A - 7E are diagrams illustrating input associated with a touch display according to an example embodiment; and
[0016] FIGURE 8 is a block diagram showing an apparatus according to an example embodiment.
DETAILED DESCRIPTION OF THE DRAWINGS
[0017] An embodiment of the invention and its potential advantages are understood by referring to FIGURES 1 through 8 of the drawings.
[0018] In an example embodiment, a user of an apparatus, for example electronic device 10 of FIGURE 8, may desire to generate an image. The user may desire that the amount of computer readable information representing the image be small. For example, the user may desire to store the computer readable information on the apparatus, for example in non-volatile memory 42 of FIGURE 8, volatile memory 40 of FIGURE 8, and/or the like. In another example, the user may desire to send the image to another device, for example using a message, an upload, a transfer, and/or the like, while reducing the amount of computer readable
information transferred to represent the image. In another example, the user may desire to display the image on a device with limited capability for displaying the image. In such an example, the device may have limited storage capability, limited display capability, limited image processing capability, and/or the like.
[0019] In an example embodiment, an apparatus may represent an image by generating a text image. A text image relates to a group of characters arranged to convey graphical and/or text information utilizing characters, such as American Standard Code for Information
Interchange (ASCII) characters, Unicode characters, universal character set (UCS) characters, and/or the like. The apparatus may generate the text image based, at least in part, on a graphical image. The graphical image may be an image represented in a graphical format, such as bitmap, Joint Photographic Experts Group (JPEG), and/or the like. The apparatus may generate the text image based, at least in part, on input information, such as information associated with a user drawing, writing, and/or the like.
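For illustration only, the following Python sketch shows one way a text image could be generated from a grayscale graphical image: the image is partitioned into character-sized cells, and the average brightness of each cell selects a character from a ramp. The Pillow library, the character ramp, the cell size, and the function name image_to_text are assumptions of this sketch, not details of the embodiments described herein.

    # Minimal sketch: map the brightness of each image cell to a character.
    from PIL import Image

    CHAR_RAMP = "@#%*+=-:. "  # dense glyphs for dark cells, sparse for light

    def image_to_text(path, cell_w=8, cell_h=16):
        img = Image.open(path).convert("L")  # grayscale
        rows = []
        for y in range(0, img.height - cell_h + 1, cell_h):
            row = []
            for x in range(0, img.width - cell_w + 1, cell_w):
                cell = img.crop((x, y, x + cell_w, y + cell_h))
                mean = sum(cell.getdata()) / (cell_w * cell_h)
                row.append(CHAR_RAMP[int(mean / 256 * len(CHAR_RAMP))])
            rows.append("".join(row))
        return "\n".join(rows)

A cell of 8 by 16 pixels roughly matches the aspect ratio of a monospaced character, so the resulting text image keeps the proportions of the graphical image.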
[0020] FIGURES 1A - 1H are diagrams illustrating character images according to an example embodiment. The examples of FIGURES 1A - 1H are merely examples of character images, and do not limit the scope of the claims. For example, character images may vary with respect to language, characters, orientation, size, alignment, and/or the like. The characters may relate to Arabic characters, Latin characters, Indic characters, Japanese characters, and/or the like.
[0021] In an example embodiment, a character image relates to graphical information that represents at least one character. For example, a character image may relate to an image of a written character, a typed character, a printed character, and/or the like. A character image may relate to a part and/or the entirety of an image. The character image may relate to one or more written characters, copied characters, photographed characters, scanned characters, and/or the like.
[0022] In an example embodiment, at least one character may be determined based, at least in part, on the character image. For example, an apparatus may perform handwriting recognition, continuous handwriting recognition, optical character recognition (OCR), and/or the like on a character image to determine one or more characters. The accuracy of determination of the at least one character may vary across apparatuses and does not limit the claims set forth herein. For example, a first apparatus may have less accurate OCR than a second apparatus.
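For illustration only, a character image could be converted to characters with an off-the-shelf OCR engine; the following sketch assumes the pytesseract wrapper around the Tesseract engine, which is one possible choice among many and is not specified by the embodiments herein.

    # Minimal sketch: determine the characters represented by a character image.
    from PIL import Image
    import pytesseract

    def characters_from_character_image(path):
        # For an image of the handwritten word "Big", this would ideally return "Big".
        return pytesseract.image_to_string(Image.open(path)).strip()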
[0023] FIGURE 1A is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 1A, the character image represents three letters that form the word "Big."
[0024] FIGURE 1B is a diagram of characters represented by the character image of FIGURE 1A. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8. In the example of FIGURE 1B, the characters are "Big."
[0025] FIGURE 1C is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 1C, the character image represents letters and punctuation that form "The dog is big."
[0026] FIGURE 1D is a diagram of characters represented by the character image of FIGURE 1C. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8. In the example of FIGURE 1D, the characters are "The dog is big."
[0027] FIGURE 1E is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 1E, the character image represents five script letters that form the word "hello."
[0028] FIGURE 1F is a diagram of characters represented by the character image of FIGURE 1E. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8. In the example of FIGURE 1F, the characters are "hello."
[0029] FIGURE 1G is a diagram illustrating a character image according to an example embodiment. In the example of FIGURE 1G, the character image represents characters that form a word.
[0030] FIGURE 1H is a diagram of characters represented by the character image of FIGURE 1G. The characters may be determined by an apparatus, such as electronic device 10 of FIGURE 8. In the example of FIGURE 1H, the characters are those that form the word shown in FIGURE 1G.
[0031] FIGURES 2A - 2D are diagrams illustrating non character images according to an example embodiment. The examples of FIGURES 2A - 2D are merely examples of non character images, and do not limit the scope of the claims. For example, non character images may vary with respect to orientation, size, alignment, and/or the like.
[0032] In an example embodiment, a non character image relates to graphical information that represents at least one element, but does not represent a character. For example, a non character image may relate to an image that does not comprise a representation of a word. A non character image may relate to a part and/or the entirety of an image. The non character image may relate to one or more lines, arcs, curves, and/or the like.
[0033] An apparatus may determine a character representation indicative of a non character image. The apparatus may utilize a liquid flow algorithm, a block analysis algorithm, and/or the like. For example, a liquid flow algorithm may evaluate along the path of a line associated with the non character image. In another example, a block algorithm may evaluate a non character image by partitioning it into portions, determining a character that most closely approximates each portion, and combining the characters in accordance with the portions.
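For illustration only, the following Python sketch outlines a block analysis of the kind described above: candidate characters are pre-rendered as small bitmaps, the non character image is partitioned into cells, and each cell is replaced by the candidate whose bitmap differs least from it. The candidate set, cell size, and helper names are assumptions of this sketch.

    # Minimal sketch of a block analysis algorithm.
    from PIL import Image, ImageDraw, ImageFont

    CANDIDATES = r" .,'-_/\|()"

    def render_glyph(ch, w, h, font):
        # Render one candidate character as a flat list of pixel values.
        img = Image.new("L", (w, h), 255)
        ImageDraw.Draw(img).text((0, 0), ch, fill=0, font=font)
        return list(img.getdata())

    def best_char(cell_pixels, glyphs):
        # Choose the candidate whose bitmap most closely approximates the cell.
        return min(glyphs, key=lambda ch: sum(
            abs(a - b) for a, b in zip(cell_pixels, glyphs[ch])))

    def blocks_to_text(img, cell_w=8, cell_h=16):
        font = ImageFont.load_default()
        glyphs = {ch: render_glyph(ch, cell_w, cell_h, font) for ch in CANDIDATES}
        rows = []
        for y in range(0, img.height - cell_h + 1, cell_h):
            row = ""
            for x in range(0, img.width - cell_w + 1, cell_w):
                cell = list(img.crop((x, y, x + cell_w, y + cell_h)).getdata())
                row += best_char(cell, glyphs)
            rows.append(row)
        return "\n".join(rows)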
[0034] FIGURE 2A is a diagram illustrating a non character image according to an example embodiment. The example of FIGURE 2A may relate to input indicating handwriting, a scanned image, a received image, a photograph, and/or the like.
[0035] FIGURE 2B is a diagram of a character representation of the non character image of FIGURE 2A. The character representation may be determined by an apparatus, such as electronic device 10 of FIGURE 8.
[0036] FIGURE 2C is a diagram illustrating a non character image according to an example embodiment. The example of FIGURE 2C may relate to input indicating handwriting, a scanned image, a received image, a photograph, and/or the like.
[0037] FIGURE 2D is a diagram of a character representation of the non character image of FIGURE 2C. The character representation may be determined by an apparatus, such as electronic device 10 of FIGURE 8.
[0038] FIGURES 3A - 3E are diagrams illustrating an image comprising a character image, similar as described with reference to FIGURES 1A-1H, and a non character image, similar as described with reference to FIGURES 2A-2D, according to an example embodiment. The examples of FIGURES 3A - 3E are merely examples of images, and do not limit the scope of the claims. For example, the character image and/or non character image may vary with respect to orientation, size, alignment, and/or the like. In another example, the image may comprise more than one character image and/or more than one non character image.
[0039] In an example embodiment, an apparatus may generate a text image comprising one or more characters represented by the character image and a character representation of the one or more non character images. For example, the apparatus may evaluate the character image and the non character image separately, simultaneously, in parallel, sequentially, and/or the like. In an example embodiment, the apparatus may remove a part of the image after determining the character or character representation associated with the part of the image. In another example, the apparatus may determine a character represented by a part of the image associated with a character image in conjunction with determining a character representation of a different part of the image associated with a non character image.
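For illustration only, the following sketch combines the two determinations in sequence: character elements are recognized and removed first, and the remaining non character elements are then converted. The helpers find_character_regions, recognize, erase_region, blocks_to_text, and overlay are hypothetical names standing in for the operations described above, not functions defined by the embodiments herein.

    # Minimal sketch: generate a text image from a mixed image.
    def generate_text_image(img):
        canvas = img.copy()              # work on a copy; the image itself is kept
        placed = []                      # (x, y, characters) for recognized text
        for region in find_character_regions(canvas):
            placed.append((region.x, region.y, recognize(canvas, region)))
            erase_region(canvas, region) # remove the character image element
        text = blocks_to_text(canvas)    # character representation of the rest
        return overlay(text, placed)     # merge recognized characters into the grid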
[0040] FIGURE 3A is a diagram illustrating an image 300 according to an example embodiment. The image of the example of FIGURE 3A comprises element 301 that relates to a character image, and element 302 that relates to a non character image.
[0041] FIGURE 3B is a diagram illustrating a text image 320 that represents image 300 of FIGURE 3A. Text image 320 comprises character group 321, which relates to element 301 of FIGURE 3A, and character group 322, which relates to element 302 of FIGURE 3A.
[0042] FIGURE 3C is a diagram illustrating an image 340 according to an example embodiment. The image of the example of FIGURE 3C comprises element 341 that relates to a character image, and element 342 that relates to a non character image.
[0043] FIGURE 3D is a diagram illustrating a text image 360 that represents image 340 of FIGURE 3C. Text image 360 comprises character group 361, which relates to element 341 of FIGURE 3C, and character group 362, which relates to element 342 of FIGURE 3C.
[0044] In an example embodiment, a user may desire to generate the text image for a specific apparatus, display, type of apparatus, type of display, environment, and/or the like. There may be a representation area constraint associated with the presentation of the text image. A representation area constraint may relate to a dimensional constraint associated with the text image, such as a constraint regarding number of characters, size of characters, height, width, and/or the like. For example, a representation area constraint may relate to a constraint that the width of the text image must be 10 characters or less. In another example, a representation area constraint may relate to a constraint that the height of the text image must be 12 characters or less. In an example embodiment, an apparatus may determine the text image based, at least in part, on a representation area constraint.
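For illustration only, a representation area constraint of the kind described above could be checked as follows; the 10-character width and 12-character height mirror the examples in this paragraph and are otherwise arbitrary.

    # Minimal sketch: test a text image against a representation area constraint.
    def fits_constraint(text_image, max_cols=10, max_rows=12):
        lines = text_image.splitlines()
        return len(lines) <= max_rows and all(len(line) <= max_cols for line in lines)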
[0045] In an example embodiment, an apparatus may determine position of, insert, remove, and/or the like, at least one character associated with a text image based on a representation area constraint. For example, the apparatus may position a character differently than indicated by the image. In such an example, the character may be positioned lower than indicated by the image, higher than indicated by the image, more leftward than indicated by the image, more rightward than indicated by the image, and/or the like.
[0046] FIGURE 3E is a diagram illustrating a modified text image 380 that represents image 340 of FIGURE 3C. Modified text image 380 comprises character group 381, which relates to character image 341 of FIGURE 3C, and character group 382, which relates to non character image 342 of FIGURE 3C. Modified text image 380 relates to a representation of image 340 based, at least in part, on a horizontal representation area constraint. In the example of modified text image 380, the "^" character is positioned beneath an adjacent character, even though such positioning is different than indicated by image 340. Such positioning may relate to a horizontal dimension constraint on representation area, such as a horizontal constraint of 6 characters.
[0047] Even though the example of FIGURE 3E relates to positioning a character lower than indicated in the image, it should be understood that the character positioning may vary depending on the representation area constraint, other characters of the text image, and/or the like. For example, a first character indicated below a second character in the image may be positioned horizontally adjacent to the second character based, at least in part, on a vertical representation area constraint, a third character of the text image, and/or the like.
[0048] In an example embodiment, in determining position of a character with regard to a representation area constraint, an apparatus may evaluate one or more candidate positions for the character. For example, the apparatus may select a position for the character based, at least in part, on determination of the candidate position that results in the greatest similarity between the text image and the image.
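For illustration only, candidate-position evaluation could look like the following sketch, where render and similarity are hypothetical helpers that rasterize a character grid and score its resemblance to the source image.

    # Minimal sketch: place a character at the candidate position that yields
    # the greatest similarity between the text image and the image.
    def place_character(ch, candidates, grid, image):
        def score(pos):
            trial = dict(grid)           # grid maps (row, col) -> character
            trial[pos] = ch
            return similarity(render(trial), image)
        grid[max(candidates, key=score)] = ch
        return grid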
[0049] FIGURES 4A - 4C are diagrams illustrating text images according to an example embodiment. The examples of FIGURES 4A - 4C are merely examples of text images, and do not limit the scope of the claims. For example, an image may comprise more or less detail than illustrated in the examples of FIGURES 4A-4C. In another example, an apparatus may generate more than two associated text images to represent an image.
[0050] In an example embodiment, a user may desire to generate a text image based, at least in part, on an image where part of the image has a level of detail difficult to represent in a text image. Under such circumstances, an apparatus may generate more than one text image to represent at least some of the details of the image. The apparatus may generate a first text image that may omit at least one detail of the image. A detail may relate to an element of an image that spans beyond a text image representation, that is too small to represent in a text image, and/or the like. The apparatus may generate a second text image that comprises less than the entirety of the image, but comprises at least one detail associated with the image that was unrepresented in the first text image.
[0051] In an example embodiment, the determination to generate more than one text image to represent an image may be based on a representation area constraint, similar as described with reference to FIGURE 3E.
[0052] FIGURE 4A is a diagram illustrating an image 400 according to an example embodiment. The image of the example of FIGURE 4A comprises element 401 that relates to a character image, element 402 that relates to another character image, element 403 that relates to details of the image, and elements relating to a non character image.
[0053] FIGURE 4B is a diagram illustrating a text image 420 according to an example embodiment. Text image 420 comprises character group 421 representing character image element 401 of FIGURE 4A, character group 422 representing character image element 402 of FIGURE 4A, and a character group representing elements relating to the non character image of image 400 of FIGURE 4A. It can be seen that text image 420 does not include characters that represent detail element 403 of FIGURE 4A.
[0054] FIGURE 4C is a diagram illustrating a text image 440 according to an example embodiment. Text image 440 comprises character group 442 representing character image element 402 of FIGURE 4A, character group 443 representing detail element 403, and a character group representing elements relating to part of the non character image of image 400 of FIGURE 4A. It can be seen that text image 440 relates to a part of image 400 of FIGURE 4A that is less than the entirety of image 400.
[0055] In the example of FIGURES 4A-4C, an apparatus may associate a first text image, such as text image 440 of FIGURE 4C, to a second text image, such as text image 420 of FIGURE 4B. The association may relate to a computer memory reference, a textual reference, an image reference, and/or the like. For example, the association may relate to a pointer in memory to the first text image. In another example, the association may relate to characters of the text image, such as character group 422 of FIGURE 4B and/or character group 442 of FIGURE 4C. The association may be indicated in the first text image and/or the second text image. For example, in text image 420 and text image 440, the characters indicating "Kitchen" may indicate the association between text image 420 and text image 440.
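For illustration only, the association between an overview text image and a detail text image could be held as a simple in-memory reference keyed by the shared character group, as in the following sketch; the TextImage class and the example values are assumptions of this sketch.

    # Minimal sketch: associate a detail text image with an overview text image.
    from dataclasses import dataclass, field

    @dataclass
    class TextImage:
        characters: str
        details: dict = field(default_factory=dict)  # label -> detail TextImage

    overview = TextImage(characters="...")           # e.g. text image 420
    kitchen = TextImage(characters="...")            # e.g. text image 440
    overview.details["Kitchen"] = kitchen            # the reference is the association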
[0056] Although the examples of FIGURES 4A-4C illustrate a representation of an image having a text image indicating the span of the image, and a text image indicating a detailed subset of the image, an apparatus may partition the image differently. The apparatus may vary the partitioning depending upon a predetermined setting, an evaluation of details of the image, an evaluation of the size of the image, a representation area constraint, and/or the like. For example, the apparatus may represent an image as a set of adjacent text images, a set of logically embedded text images, and/or the like.
[0057] FIGURE 5 is a flow diagram showing a set of operations 500 for generating a text image according to an example embodiment. An apparatus, for example electronic device 10 of FIGURE 8 or a portion thereof, may utilize the set of operations 500. The apparatus may comprise means, including, for example processor 20 of FIGURE 8, for performing the operations of FIGURE 5. In an example embodiment, an apparatus, for example device 10 of FIGURE 8, is transformed by having memory, for example memory 42 of FIGURE 8, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 8, cause the apparatus to perform set of operations 500.
[0058] At block 501, the apparatus identifies a first element of an image as a character image. The image may relate to a stored image, a received image, an image associated with input, and/or the like. The input may relate to one or more keypad inputs, motion inputs, touch inputs such as touch input 740 of FIGURE 7C, and/or the like. The image may be received, for example by receiver 16 of FIGURE 8. The image may be received in a message, such as an email, multimedia message, instant message, and/or the like. The image may be received from a camera module, such as camera module 36 of FIGURE 8. The identification of the first element as a character image may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
[0059] At block 502, the apparatus determines a first character group comprising at least one character represented by the character image. The determination of the first character group may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
[0060] At block 503, the apparatus identifies a second element of the image that is a non character image. The identification of the second element as a non character image may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
[0061] At block 504, the apparatus determines a second character group comprising at least one character indicative of the second element. The determination of the second character group may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
[0062] At block 505, the apparatus generates a text image comprising the determined character and the determined character representation. The generation of the text image may be similar as described with reference to FIGURES 3A-3E and FIGURES 4A-4C.
[0063] FIGURE 6 is a flow diagram showing a set of operations 600 for generating a text image according to an example embodiment. An apparatus, for example electronic device 10 of FIGURE 8 or a portion thereof, may utilize the set of operations 600. The apparatus may comprise means, including, for example processor 20 of FIGURE 8, for performing the operations of FIGURE 6. In an example embodiment, an apparatus, for example device 10 of FIGURE 8, is transformed by having memory, for example memory 42 of FIGURE 8, comprising computer code configured to, working with a processor, for example processor 20 of FIGURE 8, cause the apparatus to perform set of operations 600.
[0064] At block 601, the apparatus receives indication of at least one touch input, such as touch input 740 of FIGURE 7C, and generates the image based, at least in part, on the at least one touch input. The apparatus may receive indication of the touch input by retrieving information from one or more memories, such as non-volatile memory 42 of FIGURE 8, receiving one or more indications of the touch input from a part of the apparatus, such as a touch display, for example display 28 of FIGURE 8, receiving indication of the touch input from a receiver, such as receiver 16 of FIGURE 8, and/or the like. In an example embodiment, the apparatus may receive the indication of the touch input from a different apparatus, such as a mouse, a keyboard, an external touch display, and/or the like.
[0065] At block 602, the apparatus identifies a first element of an image as a character image. The identification, first element, and image may be similar as described with reference to block 501 of FIGURE 5.
[0066] At block 603, the apparatus determines a first character group comprising at least one character represented by the character image. The determination of the first character group may be similar as described with reference to block 502 of FIGURE 5.
[0067] At block 604, the apparatus removes the first element from the image. The removal of the first element may relate to removing the element from the image itself, a copy of the image, and/or the like.
[0068] At block 605, the apparatus identifies a second element of the image that is a non character image. The identification of the second element as a non character image may be similar as described with reference to block 503 of FIGURE 5.
[0069] At block 606, the apparatus determines at least one representation area constraint. The representation area constraint may be similar as described with reference to FIGURE 3E. The representation area may relate to a display associated with the apparatus, such as a separate display, an included display, such as display 28 of FIGURE 8, a display associated with another apparatus, and/or the like. For example, the representation area may relate to a display included in a message receiving apparatus to which the apparatus will send a message comprising the text image. In such an example, the apparatus may receive representation area constraint information from the receiving apparatus, determine the dimension information based, at least in part, on a predetermined setting, and/or the like.
[0070] At block 607, the apparatus determines a second character group comprising at least one character indicative of the second element. The determination of the second character group may be similar as described with reference to block 504 of FIGURE 5.
[0071] At block 608, the apparatus generates a first text image comprising the determined character and the determined character representation based, at least in part, on the representation area constraint. The generation of the first text image may be similar as described with reference to block 505 of FIGURE 5. The first text image may be a modified text image similar as described with reference to FIGURE 3E.
[0072] At block 609, the apparatus generates a second text image representing a part of the image that is less than the entirety of the image. The generation of the text image may be similar as described with reference to FIGURES 4A-4C.
[0073] At block 610, the apparatus associates the first text image and the second text image. The association may be similar as described with reference to FIGURES 4A-4C.
[0074] At block 611, the apparatus sends the first text image and the second text image via a message. The message may be an email message, a short message service (SMS) message, a multimedia message, an instant messaging message, and/or the like.
[0075] FIGURES 7A - 7E are diagrams illustrating input associated with a touch display, for example display 28 of FIGURE 8, according to an example embodiment. In FIGURES 7A - 7E, a circle represents an input related to contact with a touch display, two crossed lines represent an input related to releasing a contact from a touch display, and a line represents input related to movement on a touch display. Although the examples of FIGURES 7A - 7E indicate continuous contact with a touch display, there may be a part of the input that fails to make direct contact with the touch display. Under such circumstances, the apparatus may, nonetheless, determine that the input is a continuous stroke input. For example, the apparatus may utilize proximity information, for example information relating to nearness of an input implement to the touch display, to determine part of a touch input.
[0076] In the example of FIGURE 7A, input 700 relates to receiving contact input 702 and receiving a release input 704. In this example, contact input 702 and release input 704 occur at the same position. In an example embodiment, an apparatus utilizes the time between receiving contact input 702 and release input 704. For example, the apparatus may interpret input 700 as a tap for a short time between contact input 702 and release input 704, as a press for a longer time between contact input 702 and release input 704, and/or the like.
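For illustration only, the tap-versus-press distinction described above could be implemented as a simple threshold on the elapsed time; the 0.5 second threshold is an arbitrary assumption of this sketch.

    # Minimal sketch: classify input 700 from the time between contact and release.
    def classify_touch(contact_time_s, release_time_s, threshold_s=0.5):
        return "tap" if release_time_s - contact_time_s < threshold_s else "press"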
[0077] In the example of FIGURE 7B, input 720 relates to receiving contact input 722, a movement input 724, and a release input 726. Input 720 relates to a continuous stroke input. In this example, contact input 722 and release input 726 occur at different positions. Input 720 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like. In an example embodiment, an apparatus interprets input 720 based at least in part on the speed of movement 724. For example, if input 720 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 720 based at least in part on the distance between contact input 722 and release input 726. For example, if input 720 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 722 and release input 726. An apparatus may interpret the input before receiving release input 726. For example, the apparatus may evaluate a change in the input, such as speed, position, and/or the like. In such an example, the apparatus may perform one or more determinations based upon the change in the touch input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
[0078] In the example of FIGURE 7C, input 740 relates to receiving contact input 742, a movement input 744, and a release input 746 as shown. Input 740 relates to a continuous stroke input. In this example, contact input 742 and release input 746 occur at different positions. Input 740 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like. In an example embodiment, an apparatus interprets input 740 based at least in part on the speed of movement 744. For example, if input 740 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 740 based at least in part on the distance between contact input 742 and release input 746. For example, if input 740 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance between contact input 742 and release input 746. In still another example embodiment, the apparatus interprets the position of the release input. In such an example, the apparatus may modify a text selection point based at least in part on the change in the touch input.
[0079] In the example of FIGURE 7D, input 760 relates to receiving contact input 762, and a movement input 764, where contact is released during movement. Input 760 relates to a continuous stroke input. Input 760 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, and/or the like. In an example embodiment, an apparatus interprets input 760 based at least in part on the speed of movement 764. For example, if input 760 relates to panning a virtual screen, the panning motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 760 based at least in part on the distance associated with the movement input 764. For example, if input 760 relates to a scaling operation, such as resizing a box, the scaling may relate to the distance of the movement input 764 from the contact input 762 to the release of contact during movement.
[0080] In an example embodiment, an apparatus may receive multiple touch inputs at coinciding times. For example, there may be a tap input at a position and a different tap input at a different location during the same time. In another example there may be a tap input at a position and a drag input at a different position. An apparatus may interpret the multiple touch inputs separately, together, and/or a combination thereof. For example, an apparatus may interpret the multiple touch inputs in relation to each other, such as the distance between them, the speed of movement with respect to each other, and/or the like.
[0081] In the example of FIGURE 7E, input 780 relates to receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792. Input 780 relates to two continuous stroke inputs. In this example, contact inputs 782 and 788, and release inputs 786 and 792, occur at different positions. Input 780 may be characterized as a multiple touch input. Input 780 may relate to dragging an object from one position to another, to moving a scroll bar, to panning a virtual screen, to drawing a shape, to indicating one or more user selected text positions, and/or the like. In an example embodiment, an apparatus interprets input 780 based at least in part on the speed of movements 784 and 790. For example, if input 780 relates to zooming a virtual screen, the zooming motion may be small for a slow movement, large for a fast movement, and/or the like. In another example embodiment, an apparatus interprets input 780 based at least in part on the distance between contact inputs 782 and 788 and release inputs 786 and 792. For example, if input 780 relates to a scaling operation, such as resizing a box, the scaling may relate to the collective distance between contact inputs 782 and 788 and release inputs 786 and 792.
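For illustration only, a scaling factor for such a multiple touch input could be derived from the change in distance between the two contact points, as in the following sketch.

    # Minimal sketch: scale factor from a two-finger input such as input 780.
    import math

    def scale_factor(contact_a, contact_b, release_a, release_b):
        # Each argument is an (x, y) position on the touch display.
        return math.dist(release_a, release_b) / math.dist(contact_a, contact_b)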
[0082] In an example embodiment, the timing associated with the apparatus receiving contact inputs 782 and 788, movement inputs 784 and 790, and release inputs 786 and 792 varies. For example, the apparatus may receive contact input 782 before contact input 788, after contact input 788, concurrent to contact input 788, and/or the like. The apparatus may or may not utilize the related timing associated with the receiving of the inputs. For example, the apparatus may utilize an input received first by associating the input with a preferential status, such as a primary selection point, a starting position, and/or the like. In another example, the apparatus may utilize non-concurrent inputs as if the apparatus received the inputs concurrently. In such an example, the apparatus may utilize a release input received first the same way that the apparatus would utilize the same input if the apparatus had received the input second.
[0083] Even though an aspect related to two touch inputs may differ, such as the direction of movement, the speed of movement, the position of contact input, the position of release input, and/or the like, the touch inputs may be similar. For example, a first touch input comprising a contact input, a movement input, and a release input, may be similar to a second touch input comprising a contact input, a movement input, and a release input, even though they may differ in the position of the contact input, and the position of the release input.
[0084] FIGURE 8 is a block diagram showing an apparatus, such as an electronic device 10, according to an example embodiment. It should be understood, however, that an electronic device as illustrated and hereinafter described is merely illustrative of an electronic device that could benefit from embodiments of the invention and, therefore, should not be taken to limit the scope of the invention. While one embodiment of the electronic device 10 is illustrated and will be hereinafter described for purposes of example, other types of electronic devices, such as, but not limited to, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, media players, cameras, video recorders, global positioning system (GPS) devices and other types of electronic systems, may readily employ embodiments of the invention. Moreover, the apparatus of an example embodiment need not be the entire electronic device, but may be a component or group of components of the electronic device in other example embodiments.
[0085] Furthermore, devices may readily employ embodiments of the invention regardless of their intent to provide mobility. In this regard, even though embodiments of the invention are described in conjunction with mobile communications applications, it should be understood that embodiments of the invention may be utilized in conjunction with a variety of other applications, both in the mobile communications industries and outside of the mobile communications industries.
[0086] The electronic device 10 may comprise an antenna (or multiple antennae), a wired connector, and/or the like in operable communication with a transmitter 14 and a receiver 16. The electronic device 10 may further comprise a processor 20 or other processing circuitry that provides signals to and receives signals from the transmitter 14 and receiver 16, respectively. The signals may comprise signaling information in accordance with a communications interface standard, user speech, received data, user generated data, and/or the like. The electronic device 10 may operate with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the electronic device 10 may operate in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like. For example, the electronic device 10 may operate in accordance with wireline protocols, such as Ethernet, digital subscriber line (DSL), asynchronous transfer mode (ATM), second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), Global System for Mobile communications (GSM), and IS-95 (code division multiple access (CDMA)), with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA), or with fourth-generation (4G) wireless communication protocols, wireless networking protocols, such as 802.11, short-range wireless protocols, such as Bluetooth, and/or the like.
[0087] As used in this application, the term 'circuitry' refers to all of the following: hardware-only implementations (such as implementations in only analog and/or digital circuitry) and to combinations of circuits and software and/or firmware such as to a combination of processor(s) or portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and to circuits, such as a microprocessor(s) or portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor, multiple processors, or portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a cellular network device or other network device.
[0088] Processor 20 may comprise means, such as circuitry, for implementing audio, video, communication, navigation, logic functions, and/or the like, as well as for implementing embodiments of the invention including, for example, one or more of the functions described in conjunction with FIGURES 1-8. For example, processor 20 may comprise means, such as a digital signal processor device, a microprocessor device, various analog to digital converters, digital to analog converters, processing circuitry and other support circuits, for performing various functions including, for example, one or more of the functions described in conjunction with FIGURES 1-8. The apparatus may perform control and signal processing functions of the electronic device 10 among these devices according to their respective capabilities. The processor 20 thus may comprise the functionality to encode and interleave message and data prior to modulation and transmission. The processor 20 may additionally comprise an internal voice coder, and may comprise an internal data modem. Further, the processor 20 may comprise functionality to operate one or more software programs, which may be stored in memory and which may, among other things, cause the processor 20 to implement at least one embodiment including, for example, one or more of the functions described in conjunction with FIGURES 1-8. For example, the processor 20 may operate a connectivity program, such as a conventional internet browser. The connectivity program may allow the electronic device 10 to transmit and receive internet content, such as location-based content and/or other web page content, according to a Transmission Control Protocol (TCP), Internet Protocol (IP), User Datagram Protocol (UDP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP), and/or the like, for example.
[0089] The electronic device 10 may comprise a user interface for providing output and/or receiving input. The electronic device 10 may comprise an output device such as a ringer, a conventional earphone and/or speaker 24, a microphone 26, a display 28, and/or a user input interface, which are coupled to the processor 20. The user input interface, which allows the electronic device 10 to receive data, may comprise means, such as one or more devices that may allow the electronic device 10 to receive data, such as a keypad 30, a touch display, for example if display 28 comprises touch capability, and/or the like. In an embodiment comprising a touch display, the touch display may be configured to receive input from a single point of contact, multiple points of contact, and/or the like. In such an embodiment, the touch display and/or the processor may determine input based on position, motion, speed, contact area, and/or the like.
[0090] The electronic device 10 may include any of a variety of touch displays including those that are configured to enable touch recognition by any of resistive, capacitive, infrared, strain gauge, surface wave, optical imaging, dispersive signal technology, acoustic pulse recognition or other techniques, and to then provide signals indicative of the location and other parameters associated with the touch. Additionally, the touch display may be configured to receive an indication of an input in the form of a touch event which may be defined as an actual physical contact between a selection object (e.g., a finger, stylus, pen, pencil, or other pointing device) and the touch display. Alternatively, a touch event may be defined as bringing the selection object in proximity to the touch display, hovering over a displayed object or approaching an object within a predefined distance, even though physical contact is not made with the touch display. As such, a touch input may comprise any input that is detected by a touch display including touch events that involve actual physical contact and touch events that do not involve physical contact but that are otherwise detected by the touch display, such as a result of the proximity of the selection object to the touch display. Display 28 may display two-dimensional information, three-dimensional information, and/or the like.
[0091] In embodiments including the keypad 30, the keypad 30 may comprise numeric (for example, 0-9) keys, symbol keys (for example, #, *), alphabetic keys, and/or the like for operating the electronic device 10. For example, the keypad 30 may comprise a conventional QWERTY keypad arrangement. The keypad 30 may also comprise various soft keys with associated functions. In addition, or alternatively, the electronic device 10 may comprise an interface device such as a joystick or other user input interface. The electronic device 10 further comprises a battery 34, such as a vibrating battery pack, for powering various circuits that are required to operate the electronic device 10, as well as optionally providing mechanical vibration as a detectable output.
[0092] In an example embodiment, the electronic device 10 comprises a media capturing element, such as a camera, video and/or audio module, in communication with the processor 20. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. For example, in an example embodiment in which the media capturing element is a camera module 36, the camera module 36 may comprise a digital camera which may form a digital image file from a captured image. As such, the camera module 36 may comprise hardware, such as a lens or other optical component(s), and/or software necessary for creating a digital image file from a captured image. Alternatively, the camera module 36 may comprise only the hardware for viewing an image, while a memory device of the electronic device 10 stores instructions for execution by the processor 20 in the form of software for creating a digital image file from a captured image. In an example embodiment, the camera module 36 may further comprise a processing element such as a coprocessor that assists the processor 20 in processing image data and an encoder and/or decoder for compressing and/or decompressing image data. The encoder and/or decoder may encode and/or decode according to a standard format, for example, a Joint Photographic Experts Group (JPEG) standard format.
[0093] The electronic device 10 may comprise one or more user identity modules (UIM) 38. The UIM may comprise information stored in memory of electronic device 10, a part of electronic device 10, a device coupled with electronic device 10, and/or the like. The UIM 38 may comprise a memory device having a built-in processor. The UIM 38 may comprise, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), and/or the like. The UIM 38 may store information elements related to a subscriber, an operator, a user account, and/or the like. For example, UIM 38 may store subscriber information, message information, contact information, security information, program information, and/or the like. Usage of one or more UIM 38 may be enabled and/or disabled. For example, electronic device 10 may enable usage of a first UIM and disable usage of a second UIM.
[0094] In an example embodiment, electronic device 10 comprises a single UIM 38. In such an embodiment, at least part of subscriber information may be stored on the UIM 38.
[0095] In another example embodiment, electronic device 10 comprises a plurality of UIM 38. For example, electronic device 10 may comprise two UIM 38 blocks. In such an example, electronic device 10 may utilize part of subscriber information of a first UIM 38 under some circumstances and part of subscriber information of a second UIM 38 under other circumstances. For example, electronic device 10 may enable usage of the first UIM 38 and disable usage of the second UIM 38. In another example, electronic device 10 may disable usage of the first UIM 38 and enable usage of the second UIM 38. In still another example, electronic device 10 may utilize subscriber information from the first UIM 38 and the second UIM 38.
[0096] Electronic device 10 may comprise a memory device including, in one embodiment, volatile memory 40, such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data. The electronic device 10 may also comprise other memory, for example, non-volatile memory 42, which may be embedded and/or may be removable. The non-volatile memory 42 may comprise an EEPROM, flash memory or the like. The memories may store any of a number of pieces of information, and data. The information and data may be used by the electronic device 10 to implement one or more functions of the electronic device 10, such as the functions described in conjunction with FIGURES 1-8. For example, the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, which may uniquely identify the electronic device 10.
[0097] Electronic device 10 may comprise one or more sensor 37. Sensor 37 may comprise a light sensor, a proximity sensor, a motion sensor, a location sensor, and/or the like. For example, sensor 37 may comprise one or more light sensors at various locations on the device. In such an example, sensor 37 may provide sensor information indicating an amount of light perceived by one or more light sensors. Such light sensors may comprise a photovoltaic element, a photoresistive element, a charge coupled device (CCD), and/or the like. In another example, sensor 37 may comprise one or more proximity sensors at various locations on the device. In such an example, sensor 37 may provide sensor information indicating proximity of an object, a user, a part of a user, and/or the like, to the one or more proximity sensors. Such proximity sensors may comprise capacitive measurement, sonar measurement, radar
measurement, and/or the like.
[0098] Although FIGURE 8 illustrates an example of an electronic device that may utilize embodiments of the invention including those described and depicted, for example, in FIGURES 1-8, electronic device 10 of FIGURE 8 is merely an example of a device that may utilize embodiments of the invention.
[0099] Embodiments of the invention may be implemented in software, hardware, application logic or a combination of software, hardware, and application logic. The software, application logic and/or hardware may reside on the apparatus, a separate device, or a plurality of separate devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of separate devices. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any tangible media or means that can contain, or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIGURE 8. A computer-readable medium may comprise a computer-readable storage medium that may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
[00100] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. For example, blocks 503 and 504 of
FIGURE 5 may be performed before block 502. In another example, block 606 of FIGURE 6 may be performed before block 602. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. For example, block 606 of FIGURE 6 may be optional or combined with block 608.
[00101] Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
[00102] It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.

Claims

WHAT IS CLAIMED IS:
1. An apparatus, comprising:
a processor;
memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following:
identify a first element of an image as a character image;
determine a first character group comprising at least one character represented by the character image;
identify a second element of the image that is a non character image;
determine a second character group comprising at least one character indicative of the second element; and
generate a first text image comprising the determined first character group and the determined second character group.
2. The apparatus of claim 1, wherein the identification of the first element and the determination of the at least one character are performed by optical character recognition.
3. The apparatus of claim 1, wherein the identification of the second element comprises removing the first element from the image.
4. The apparatus of claim 1, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform sending the first text image via a message.
5. The apparatus of claim 4, wherein the message relates to a short message service message.
6. The apparatus of claim 4, wherein the message relates to an instant messaging message.
7. The apparatus of claim 1, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform determining at least one representation area constraint.
8. The apparatus of claim 7, wherein the determination of the second character group is based, at least in part, on the representation area constraint.
9. The apparatus of claim 7, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform generating a modified text image based on the generated first text image and the representation area constraint.
10. The apparatus of claim 9, wherein generating the modified text image comprises changing position associated with at least one character of the first character group.
11. The apparatus of claim 1, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform generating a second text image representing a part of the image that is less than the entirety of the image.
12. The apparatus of claim 11, wherein the second text image indicates at least one detail associated with the image that is unrepresented in the first text image.
13. The apparatus of claim 11, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform associating the second text image and the first text image.
14. The apparatus of claim 13, wherein the first text image indicates the association.
15. The apparatus of claim 1, wherein the memory and the computer program code are further configured to, working with the processor, cause the apparatus to perform receiving indication of at least one touch input and generating the image based, at least in part, on the at least one touch input.
16. The apparatus of claim 15, wherein the apparatus further comprises a touch display.
17. The apparatus of claim 1, wherein the apparatus is a mobile phone.
18. A method, comprising:
identifying a first element of an image as a character image;
determining at least one character represented by the character image;
identifying a second element of the image that is a non-character image;
determining at least one character representation indicative of the second element; and
generating, by a processor, a first text image comprising the character and the character representation.
19. A computer-readable medium encoded with instructions that, when executed by a computer, perform:
identifying a first element of an image as a character image;
determining at least one character represented by the character image;
identifying a second element of the image that is a non-character image;
determining at least one character representation indicative of the second element; and
generating a first text image comprising the character and the character representation.
20. An apparatus, comprising:
means for identifying a first element of an image as a character image;
means for determining at least one character represented by the character image;
means for identifying a second element of the image that is a non-character image;
means for determining at least one character representation indicative of the second element; and
means for generating a first text image comprising the character and the character representation.
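For orientation, the method of claim 18 can be pictured as a small processing pipeline: segment an input image into character and non-character elements, recognize the characters in the former, map the latter to approximating characters, and assemble the result into a text image. The following Python sketch is purely illustrative; every name in it (Element, recognize_characters, represent_non_character, generate_text_image, the shape labels, and the max_width parameter) is an assumption introduced here for exposition, not part of the claimed implementation. A real embodiment would operate on pixel data and an OCR engine rather than on string labels.

from dataclasses import dataclass
from typing import List


@dataclass
class Element:
    content: str        # a label standing in for the element's pixel data
    is_character: bool  # True if the element depicts text


def recognize_characters(element: Element) -> str:
    # Stand-in for optical character recognition (cf. claim 2); a real
    # implementation would run an OCR engine over the element's pixels.
    return element.content


def represent_non_character(element: Element) -> str:
    # Hypothetical mapping from detected graphical shapes to characters
    # that approximate them (the "character representation" of claim 18).
    shapes = {"horizontal_line": "-----", "arrow_right": "-->", "smiley": ":-)"}
    return shapes.get(element.content, "?")


def generate_text_image(elements: List[Element], max_width: int = 20) -> str:
    # Assemble the first text image, one element per line, truncating each
    # line as a crude stand-in for the representation area constraint
    # (cf. claims 7 to 9).
    lines = []
    for element in elements:
        if element.is_character:
            text = recognize_characters(element)
        else:
            text = represent_non_character(element)
        lines.append(text[:max_width])
    return "\n".join(lines)


if __name__ == "__main__":
    sketch = [
        Element("meet at the cafe, 5 pm", is_character=True),
        Element("arrow_right", is_character=False),
        Element("smiley", is_character=False),
    ]
    print(generate_text_image(sketch))

Running the sketch prints the recognized text on one line, followed by the character art for each graphical element. The assembled text image is the kind of payload that claims 4 to 6 contemplate sending via a short message service or instant messaging message, which is also why a per-line width limit is a natural representation area constraint.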

Priority Applications (1)

Application Number: PCT/CN2009/076183; Priority Date: 2009-12-29; Filing Date: 2009-12-29; Title: Method and apparatus for generating a text image; Publication: WO2011079432A1 (en)

Publications (1)

Publication Number: WO2011079432A1 (en); Publication Date: 2011-07-07

Family

ID: 44226112

Family Applications (1)

Application Number: PCT/CN2009/076183; Title: Method and apparatus for generating a text image; Priority Date: 2009-12-29; Filing Date: 2009-12-29; Status: Ceased; Publication: WO2011079432A1 (en)

Country Status (1)

Country: WO; Publication: WO2011079432A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030113016A1 * 1996-01-09 2003-06-19 Fujitsu Limited Pattern recognizing apparatus
CN1484165A * 2002-07-26 2004-03-24 Fujitsu Limited Input device, input method, input program and recording medium of document information
CN1501273A * 2002-11-12 2004-06-02 Lenovo (Beijing) Ltd. Method of converting handwritten note into literal text and traveling equipment therefor
US20060204095A1 * 2005-03-08 2006-09-14 Hirobumi Nishida Document layout analysis with control of non-character area
CN1848136A * 2005-04-13 2006-10-18 Motorola Inc. Method and system for decoding barcode images
JP2007026470A * 1996-09-27 2007-02-01 Fujitsu Ltd Pattern recognition device

Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 09852717; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

WWE WIPO information: entry into national phase (Ref document number: 6315/CHENP/2012; Country of ref document: IN)

122 EP: PCT application non-entry in European phase (Ref document number: 09852717; Country of ref document: EP; Kind code of ref document: A1)