
WO2019059503A1 - Image capture for character recognition - Google Patents

Image capture for character recognition

Info

Publication number
WO2019059503A1
WO2019059503A1 (application PCT/KR2018/007501)
Authority
WO
WIPO (PCT)
Prior art keywords
character
image
unit
camera unit
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2018/007501
Other languages
English (en)
Korean (ko)
Inventor
이준택
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of WO2019059503A1
Current legal status: Ceased

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/20 Linear translation of whole images or parts thereof, e.g. panning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • This specification relates to image acquisition for character recognition.
  • In a conventional electronic dictionary, a character is entered through a keypad or through a handwriting input unit and converted into character information to perform a dictionary search.
  • An electronic dictionary implemented as a smartphone application captures a character image with the camera built into the smartphone, converts the image into text through an optical character recognition process, and performs a dictionary search.
  • Such application-based electronic dictionaries suffer from character recognition errors caused by the poor quality of character images captured with the smartphone camera.
  • The present invention therefore provides an apparatus and method for increasing the character recognition rate of a camera-based electronic dictionary.
  • The character image input apparatus includes a body portion extending in one direction; a communication portion, provided in the body portion, for communicating with another device; a tip portion disposed at one end of the body portion; a camera unit, disposed in a part of the body portion, for acquiring an image of the surface of an object placed on its optical axis; a tilt measuring unit for obtaining the tilt of the optical axis of the camera unit; and a controller that, when the tip portion contacts the target object, acquires the image of the object surface and the tilt between that surface and the optical axis, and transmits the image and the tilt to the other device so that the other device can recognize the character information through optical character recognition based on them.
  • The camera unit may be disposed on a part of the body portion spaced from the object surface in consideration of the focal distance.
  • A finger guide portion for receiving the fingers may be formed on the body portion so that, when the user grips the device, the user's fingers stay out of the camera unit's field of view.
  • The imaging element of the camera unit may be one capable of capturing an infrared image of the object surface.
  • This specification also discloses an electronic dictionary system composed of a character image input device and a dictionary search device.
  • The character image input apparatus includes a camera unit for acquiring an image of the surface of an object placed on its optical axis, a tilt measuring unit for obtaining the tilt of the camera unit's optical axis with respect to the axis perpendicular to the object surface, and a first communication unit for transmitting the image data and the tilt value to the dictionary search apparatus. The dictionary search apparatus includes a second communication unit for receiving the image data and the tilt value from the character image input apparatus, an image preprocessing unit for correcting the image based on the received tilt value, an optical character recognition unit for extracting character information from the corrected image, a search unit for searching the extracted character information, and a display unit for displaying the search result.
  • The electronic dictionary system may further include a management apparatus that collects, from the dictionary search apparatus, the image data of images with a low character recognition rate, analyzes the collected image data, and sets parameters related to character recognition in the optical character recognition unit; the dictionary search apparatus then further includes a parameter management unit for updating the parameters of the optical character recognition unit with those parameters.
  • The present specification also provides a character recognition electronic dictionary that includes a camera unit for acquiring an image of a character, a tilt measuring unit for measuring the tilt of the camera unit's optical axis with respect to the character, an image preprocessing unit for correcting the character image based on the measured tilt, an optical character recognition unit for extracting character information from the corrected character image, and a search unit for searching the meaning of the extracted character information and outputting the result.
  • The camera unit is disposed in a part of the body portion extending in one direction, spaced from the one end of the body portion so that the body portion positions the character at the focal distance of the camera unit.
  • A finger guide portion for receiving the fingers may be formed on the body portion so that, when the user grips the device, the user's fingers stay out of the camera unit's field of view.
  • A search method of the character recognition electronic dictionary includes acquiring an image of a character upon user input, measuring the tilt of the camera's optical axis with respect to the character, correcting the character image based on the measured tilt, extracting character information from the corrected character image, retrieving the meaning of the extracted character information, and displaying the search result.
  • Because the character photographing apparatus and the dictionary search apparatus are separate devices, a character image can be photographed quickly and retrieved accurately.
  • The character recognition rate can be improved by correcting the distortion of the character image caused by the inclination of the camera.
  • A clear character image can be obtained quickly through a character image input apparatus whose camera has a fixed focal length.
  • The character recognition rate can be improved by updating the character-recognition parameters based on analysis of misrecognized character image data from the optical character recognition process.
  • The character recognition rate can be increased by collecting misrecognized character images and learning from them through a neural network.
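The overall flow summarized above (capture, tilt measurement, correction, recognition, dictionary search, display) can be sketched as a minimal skeleton. Every function name, value, and dictionary entry below is an illustrative stand-in, not part of the disclosed apparatus.

```python
# Hypothetical end-to-end flow of the electronic dictionary system:
# capture -> measure tilt -> correct -> OCR -> dictionary search.

def capture_image():
    # Stand-in for the camera unit: a tiny "image" as a 2-D list of pixels.
    return [[0, 1], [1, 0]]

def measure_tilt_deg():
    # Stand-in for the tilt measuring unit (degrees off the surface normal).
    return 15.0

def correct_tilt(image, tilt_deg):
    # A real system would warp the image (e.g. with a homography); here we
    # only record that the correction step consumed the tilt value.
    assert 0.0 <= tilt_deg < 90.0
    return image  # corrected-image placeholder

def run_ocr(image):
    # Stand-in for the optical character recognition unit.
    return "contents"

def dictionary_lookup(word):
    # Stand-in for the search unit: a toy in-memory dictionary.
    toy_dict = {"contents": "the things that are held or included in something"}
    return toy_dict.get(word, "<not found>")

def search():
    image = capture_image()
    tilt = measure_tilt_deg()
    corrected = correct_tilt(image, tilt)
    text = run_ocr(corrected)
    return text, dictionary_lookup(text)

text, meaning = search()
print(text, "->", meaning)
```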
  • FIG. 1 is a schematic view of a character recognition electronic dictionary system including a character image input device and a dictionary search device according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example of the components of a character image input apparatus according to an embodiment of the present invention.
  • FIG. 3 is a block diagram showing the devices constituting the character recognition electronic dictionary system shown in FIG. 1 and the respective components of these devices.
  • FIG. 4 is a diagram illustrating an example of measuring a tilt of a camera unit included in a character image input apparatus, and examples of character images according to a tilt of the character image input apparatus.
  • FIG. 5 is a view showing a placement position of a camera unit in consideration of a focal length in a character image input device according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating an example of a finger guide unit that keeps the fingers from blocking the camera unit's angle of view, and an example of gripping the character image input device according to the embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating an operation of recognizing a character and performing a dictionary search in a character recognition electronic dictionary system according to an embodiment of the present invention.
  • Terms such as first and second may be used to describe various elements, but the elements are not limited by these terms; the terms serve only to distinguish one element from another.
  • For example, without departing from the scope of rights according to the concept of the present invention, a first element may be referred to as a second element, and similarly a second element may be referred to as a first element.
  • The terms "module" and "part" for the components used in this specification are given or used interchangeably only for ease of description and do not by themselves have distinct meanings or roles; they may refer to a functional or structural combination of hardware for performing the method according to an embodiment of the invention, or to software capable of driving such hardware.
  • FIG. 1 schematically shows a character recognition electronic dictionary system including a character image input device and a dictionary search device according to an embodiment of the present invention.
  • The character recognition electronic dictionary system 1000 can be configured to include the character image input device 100, the dictionary search device 200, and the management device 300.
  • The character image input apparatus 100 is provided separately from the dictionary search apparatus 200.
  • The character image input apparatus 100 has a camera unit in a pen-shaped body portion (housing); by photographing the surface on which a word or sentence the user wants to look up is printed or displayed, it can acquire a character image and transmit it to the dictionary search apparatus 200, which performs the dictionary search.
  • The dictionary search apparatus 200 receives from the character image input apparatus 100 the image data for the character the user wants to look up, corrects the character image so that the character information can be extracted easily, and extracts text data through an Optical Character Recognition (OCR) process. The dictionary search apparatus 200 then performs a dictionary search, finds the meaning of the extracted character information, and displays the result to the user.
  • The management apparatus 300 collects image data of character images with a low recognition rate, such as images that were not recognized at all or were recognized incorrectly, analyzes the collected data, and updates the device settings related to character information extraction so as to improve the recognition rate.
  • FIG. 2 illustrates, by way of example, constituent elements of a character image input apparatus according to an embodiment of the present invention.
  • The character image input apparatus 100 includes a body portion 105 formed to extend in one direction; a communication unit (not shown) provided in the body portion; a tip portion 110 disposed at one end of the body portion; a camera unit 120 disposed in a part of the body portion 105 to acquire an image of the object surface placed on its optical axis; and a tilt measuring unit 130, provided in the body portion 105, for obtaining the tilt of the optical axis of the camera unit 120.
  • It further includes a control unit 150 for controlling the apparatus so that the image and the tilt are transmitted to the other device, which recognizes the character information through optical character recognition based on the acquired image and tilt value.
  • The body portion 105 is a pen-shaped housing extending in one direction; a user can capture a character image while holding the body portion 105 by hand.
  • A finger guide portion for receiving the fingers is formed on the outer circumferential surface of the body portion 105 so that the user's fingers stay outside the field of view (FOV) of the camera unit when the user grips the body portion 105. The user can therefore obtain a complete image in which no finger covers the object surface.
  • The tip portion 110 is disposed at one end of the body portion 105 and is connected to a button switch.
  • When the tip portion 110 is brought into contact with the surface of the object, the button switch connected to the tip portion 110 is pressed.
  • The camera unit 120 may be disposed on a part of the body portion 105 and acquires images of the object surface placed on its optical axis.
  • Because the camera unit 120 is disposed in a part of the body portion spaced from the object surface in consideration of the focal distance of the camera unit 120, there is no need to adjust focus when photographing the surface, so a sharp image can be captured immediately.
  • The image pickup element of the camera unit 120 may be one capable of capturing an infrared image of the object surface as well as an image in the visible-light region.
  • The tilt measuring unit 130 may measure the inclination of the camera unit 120 by acquiring the tilt of its optical axis with respect to the axis perpendicular to the object surface.
  • The control unit 150 controls the apparatus so that the acquired image and the acquired tilt value are transmitted to the other device, which recognizes the character information through optical character recognition based on them.
  • The other apparatus corrects the tilt of the image based on the received image and tilt value, and extracts search data from the corrected image through an optical character recognition (OCR) process.
  • The other apparatus can perform a dictionary search, an image search, and the like based on the extracted search data, and then display the result.
  • The other apparatus may be a device that, like the dictionary search apparatus shown in FIG. 1, extracts search data from input image data through an optical character recognition process and performs a search.
  • The user can simply place the tip portion 110 of the pen-shaped body portion 105 on the surface on which the character to be searched is displayed, and the apparatus obtains both the character image and the tilt value.
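If the tilt measuring unit is realized with an accelerometer (one of the motion sensors this specification mentions), the tilt of the optical axis can be derived from the gravity vector. This is a sketch under the assumptions that the object surface is horizontal and the sensor's z axis is aligned with the optical axis; the function name is illustrative.

```python
import math

def optical_axis_tilt_deg(ax, ay, az):
    """Tilt of the optical axis from the surface normal, in degrees.

    (ax, ay, az) is the accelerometer reading with the z axis aligned
    to the optical axis; the device is assumed at rest, so the reading
    is gravity alone.
    """
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    # Angle between the optical axis and the gravity direction.
    return math.degrees(math.acos(az / norm))

print(optical_axis_tilt_deg(0.0, 0.0, 9.81))  # held perpendicular: 0 degrees
print(optical_axis_tilt_deg(9.81, 0.0, 0.0))  # lying flat: 90 degrees
```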
  • FIG. 3 is a block diagram showing the devices constituting the character recognition electronic dictionary system shown in FIG. 1 and the respective components of these devices.
  • The character image input apparatus 100 includes a user input unit 111 for receiving a user command and instructing the camera unit to acquire a character image; a camera unit 120 for acquiring the character image; a tilt measuring unit 130 for measuring the tilt of the optical axis of the camera unit 120; a communication unit 140 for transmitting the acquired character image data and the measured tilt value to the dictionary search apparatus; and a control unit 150 for causing the camera unit 120 to acquire a character image and the tilt measuring unit 130 to measure the tilt of the camera unit's optical axis.
  • The character image input apparatus 100 may have a pen-shaped body extending in one direction, on which the user input unit 111, the camera unit 120, the tilt measuring unit 130, and the communication unit 140 may be disposed.
  • The user input unit 111 may receive a user command for character image capture and dictionary search through input methods such as touch input, button input, motion input, and sound input.
  • When the user input unit 111 receives the user's command through a touch input or a button input, it may be provided in the form of a touch pad or a push button on the surface of the pen-shaped body.
  • The user issues the command by touching the touch pad, entering a specific gesture on the touch pad, or pressing the push button.
  • When the command is received, the camera unit 120 of the character image input apparatus 100 photographs the target character to obtain a character image, the tilt of the camera unit's optical axis with respect to the character is measured, and the acquired character image data and the measured tilt value are transmitted to the dictionary search apparatus 200 in order.
  • A pen-tip type button may be connected to one end of the body of the character image input apparatus 100; when the user brings the character image input apparatus 100 to the surface of a printed medium or a display on which characters are shown, the pen-tip button is pressed.
  • The user input unit 111 may recognize a specific motion of the user through a motion sensor (for example, an acceleration sensor or a gyro sensor) provided in the character image input apparatus 100, and accept it as a command.
  • To do so, the user input unit 111 measures, through the built-in motion sensor, the motion pattern of the character image input apparatus 100 produced by the user's action.
  • The user input unit 111 compares a motion pattern stored in advance with the measured pattern, for example, determining whether it is similar to the motion pattern produced when the character image input apparatus 100 is brought to the printed surface of the target character.
  • If so, the user input unit 111 causes the camera unit 120 of the character image input apparatus 100 to photograph the target character and acquire a character image, measures the tilt of the camera unit's optical axis with respect to the character, and transmits the acquired character image data and the measured tilt value to the dictionary search apparatus 200.
  • The user input unit 111 may also recognize a specific voice command of the user through a sound sensor (e.g., a microphone) provided in the character image input apparatus 100.
  • On such a command, the camera unit 120 captures the target character to acquire a character image, the tilt of the camera unit's optical axis with respect to the character is measured, and then the acquired character image data and the measured tilt value are transmitted to the dictionary search apparatus 200, where character image correction and dictionary search are performed in sequence.
  • The user command for character image capture and dictionary search has been described as entered through input methods such as touch, button, motion, and sound input.
  • However, the methods and means for receiving the command are not limited to these, and any other method or means capable of receiving the user's input is not excluded.
  • A character image for extracting characters can be obtained by using a visible-light image sensor, such as the CMOS or CCD elements used in ordinary digital cameras, as the imaging element of the camera unit 120.
  • However, when light is scattered by the surface, the character recognition rate for the captured character image is low.
  • Using an imaging device that can eliminate the influence of such diffuse reflection can therefore increase the character recognition rate.
  • A character image in the infrared region can be captured by using an infrared image sensor as the imaging element of the camera unit 120.
  • For example, the camera unit 120 may be a NoIR camera with the IR filter removed.
  • The camera unit 120 can then acquire a character image covering not only visible light but also the infrared region, so a sharp image can be obtained regardless of the nature of the surface of the medium on which the character to be photographed is displayed.
  • When an infrared sensor is used as the imaging element of the camera unit 120, or a NoIR camera module is used as its camera module, degradation of the character image due to diffuse reflection can be prevented and the recognition rate of character information in the optical character recognition process can be increased.
  • In addition, a structured-light projector may be coupled to the camera unit 120 when capturing a character image, further increasing the character recognition rate.
  • The tilt measuring unit 130 measures the inclination of the optical axis of the camera unit 120 with respect to the axis perpendicular to the surface on which the character is displayed.
  • When the camera unit 120 photographs the subject (the character) without being perpendicular to it, a distorted character image is obtained.
  • As a result, the character recognition rate drops during the optical character recognition process.
  • However, the image preprocessing unit 220 can correct the distortion of the character image using the measured angle value (tilt), improving the character recognition rate in the optical character recognition process.
  • The communication unit 140 of the character image input apparatus 100 transmits the character image data acquired by the camera unit 120 and the tilt value measured by the tilt measuring unit 130 to the dictionary search apparatus 200.
  • The control unit 150 of the character image input apparatus 100 instructs the camera unit 120 to acquire a character image, instructs the tilt measuring unit 130 to measure the tilt, and has the acquired character image data and the measured tilt value transmitted to the dictionary search apparatus 200.
  • The dictionary search apparatus 200 includes a communication unit 210 for receiving character image data and tilt measurement data from the character image input apparatus 100; an image preprocessing unit 220 for correcting the character image based on the received tilt value; an optical character recognition unit 230 for extracting text data from the corrected character image; a search unit 240 for performing a dictionary search to find the meaning of the character information; and a display unit 250 for displaying the search result.
  • The communication unit 210 receives the character image data and the tilt measurement data transmitted by the communication unit 140 of the character image input device 100.
  • After receiving the character image data and the tilt measurement data from the communication unit 210, the image preprocessing unit 220 can correct the distortion of the character image caused by the tilt, based on the measured value.
  • The image preprocessing unit 220 also binarizes the captured (acquired) character image so that the optical character recognition process can proceed well.
  • The binarization may be executed either before or after the character image distortion correction.
  • The image preprocessing unit 220 transforms the acquired character image using an image correction method such as a homography or the Hough transform, according to the measured tilt of the optical axis of the camera unit 120, and can provide the optical character recognition unit 230 with a character image as if the camera unit 120 had photographed the character perpendicularly.
  • The image preprocessing unit 220 can further increase the character recognition rate of the optical character recognition process by performing image corrections such as binarization, noise removal, border removal, and skew correction (deskewing).
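The binarization step can be illustrated with Otsu's method, which picks the threshold that maximizes between-class variance of the gray-level histogram. The patent does not prescribe a specific thresholding algorithm, so this pure-NumPy sketch is one plausible choice, not the disclosed implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark ink (~30) on bright paper (~220): the threshold lands between them.
page = np.full((32, 32), 220, dtype=np.uint8)
page[8:24, 8:24] = 30  # a "character" stroke
t = otsu_threshold(page)
binary = (page > t).astype(np.uint8)  # 1 = background, 0 = ink
print(t)
```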
  • The optical character recognition unit 230 receives the corrected character image data from the image preprocessing unit 220 and then performs character recognition on the corrected character image through an optical character recognition process.
  • The optical character recognition unit 230 may analyze the characters corresponding to the character image through a classifier such as a fully connected layer or an SVM (Support Vector Machine) and extract the text.
  • When extracting character information from the character image, the optical character recognition unit 230 performs extraction according to parameter settings that reflect language-specific characteristics, reducing the chance of failed or incorrect recognition and increasing the recognition rate.
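The idea of language-dependent parameter settings can be sketched as a per-language lookup of recognition options. Real OCR engines (Tesseract, for example) expose comparable knobs such as language models and character whitelists, but the parameter names and values below are made up for illustration and are not the patent's actual settings.

```python
# Hypothetical per-language OCR parameter sets.
OCR_PARAMS = {
    "en": {"charset": "A-Za-z'", "segmentation": "word", "dict_check": True},
    "ko": {"charset": "hangul", "segmentation": "syllable-block", "dict_check": True},
    "ja": {"charset": "kanji+kana", "segmentation": "char", "dict_check": False},
}

# Conservative fallback for languages without a tuned parameter set.
DEFAULT_PARAMS = {"charset": "any", "segmentation": "char", "dict_check": False}

def params_for(language):
    """Pick the recognition parameters for a language, with a fallback."""
    return OCR_PARAMS.get(language, DEFAULT_PARAMS)

print(params_for("ko")["segmentation"])
print(params_for("xx") is DEFAULT_PARAMS)
```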
  • After receiving the character information recognized by the optical character recognition unit 230, the search unit 240 performs a dictionary search to find the meaning of the characters.
  • The search unit 240 may access the network to retrieve the meaning of the characters, or may consult dictionary data provided in the dictionary search apparatus 200.
  • The display unit 250 displays the result found by the search unit 240, showing the user the meaning of the word or the translation of the characters.
  • The character recognition electronic dictionary system 1000 can recognize a character, search for its meaning, and display it to the user even when composed only of the character image input device 100 and the dictionary search device 200.
  • However, the character recognition electronic dictionary system 1000 may further include the management apparatus 300 to increase the character recognition rate and display more accurate search results to the user.
  • The management apparatus 300 includes a misrecognized-image management DB 310, which collects from the dictionary search apparatus 200 the data of character images for which character information extraction failed, such as images not recognized at all or recognized incorrectly, and a neural network management unit 320, which analyzes and learns from the collected image data to adjust the parameters related to character information extraction in the optical character recognition unit 230 so as to improve the recognition rate; the adjusted parameters may be transmitted to the dictionary search apparatus 200 to increase the recognition rate of the character information in the optical character recognition unit 230.
  • The misrecognized-image management DB 310 collects from the dictionary search apparatus 200 the image data of character images with a low recognition rate, such as those in which the character information was not recognized at all, classifies the collected image data by type, and transmits the classified image data to the neural network management unit 320.
  • The neural network management unit 320 learns or re-learns, through a neural network, from the image data transmitted by the misrecognized-image management DB 310 to derive the parameters; that is, the parameters can be adjusted so as to improve the recognition rate when the optical character recognition unit 230 extracts character information.
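The patent does not detail the neural-network learning, so the parameter-adjustment idea can be made concrete with a simpler stand-in: re-tune one recognition parameter (here, a binarization threshold) by searching for the value that best handles previously misrecognized samples. All names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(ink, paper):
    # Synthetic 16x16 "character image": a square ink patch on paper.
    img = np.full((16, 16), paper, dtype=float) + rng.normal(0, 2, (16, 16))
    img[4:12, 4:12] = ink + rng.normal(0, 2, (8, 8))
    return img

def recognized(img, threshold):
    # Recognition "succeeds" when the ink region falls fully below the
    # threshold and the paper border falls fully above it.
    return (img[4:12, 4:12] < threshold).all() and (img[:2, :] > threshold).all()

# Collected failure cases: faint ink (~90) on grey paper (~115), where
# the old threshold of 128 classifies the paper as ink.
failures = [make_sample(ink=90, paper=115) for _ in range(20)]

old_threshold = 128
best_threshold = old_threshold
best_score = sum(recognized(s, old_threshold) for s in failures)
for t in range(60, 200):  # grid search over candidate thresholds
    score = sum(recognized(s, t) for s in failures)
    if score > best_score:
        best_score, best_threshold = score, t

print(old_threshold, "->", best_threshold, "recognized", best_score, "of", len(failures))
```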
  • The dictionary search apparatus 200 may further include a parameter management unit 260 to receive the parameters adjusted by the management apparatus 300 and to update the character-recognition parameters of the optical character recognition unit 230.
  • The parameter management unit 260 stores, according to the characteristics of each language, the various parameters used for extracting text data from the character image in the optical character recognition process, and the optical character recognition unit 230 can extract character information from the captured character image with reference to these parameters.
  • The parameters may be determined according to various characteristics of each language, such as the language type and the character notation method, and may take into account other language-specific properties besides those listed.
  • The parameters may also be determined according to various characteristics of the characters themselves, such as the overall color of the character image, the lighting conditions, the quality of the displayed characters, the font used, the character type, and the color of the characters, as well as other properties inherent to the characters.
  • The manner in which the character image is corrected using the measured tilt angle of the camera unit 120 in the character image input apparatus 100 will now be described in detail with reference to FIGS. 3 and 4.
  • FIG. 4A is a diagram illustrating an example of a tilted character image input device.
  • FIG. 4B is a diagram illustrating an example of a character image acquired with the optical axis of the camera unit perpendicular to the characters.
  • FIG. 4C is a diagram illustrating an example of a character image acquired with the optical axis of the camera unit in the character image input device inclined with respect to the characters.
  • The tilt measuring unit 130 included in the character image input apparatus 100 can measure the tilt A between the surface of the paper on which the characters are printed and the optical axis of the camera unit 120.
  • When the camera unit 120 photographs the subject (for example, the paper surface on which characters are printed) without being perpendicular to it, a distorted character image is obtained and the character recognition rate is lowered.
  • FIG. 4B is an image in which the camera unit 120 photographed the characters at an angle perpendicular to the paper surface on which they are printed, so there is no distortion in the photographed characters.
  • FIG. 4C is an image in which the camera unit 120 photographed the characters without being perpendicular to the paper surface, as with the tilted character image input apparatus 100 shown in FIG. 4A. When the character image input apparatus 100 photographs characters while tilted in this way, distortion occurs in the character image and the character recognition rate is lowered.
  • For example, the word "contents" may be misrecognized as "corternts".
  • When the image preprocessing unit 220 corrects the distortion using the measured angle value, the distorted character image of FIG. 4C is corrected into a character image as if the camera unit 120 had not been tilted, as shown in FIG. 4B. Accordingly, the tilt correction of the character image can raise the character recognition rate in the optical character recognition process.
  • The correction of character distortion in images from the character image input apparatus 100 may be performed in the image preprocessing unit 220 using an image correction method such as a homography or a Hough transform, so that the image is corrected into a character image as acquired with the camera unit 120 not tilted.
  • The tilt measuring unit 130 may include a gyro sensor and an acceleration sensor, in which case it can measure the tilt of the camera unit 120 in more than one direction.
  • Accordingly, the image preprocessing unit 220 can correct the photographed character image even if the character image input apparatus 100 is tilted in various directions depending on how the user holds it.
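The homography-based correction mentioned above can be sketched as follows. This is a minimal illustration assuming a pure camera rotation about one axis and hypothetical intrinsic parameters (fx, fy, cx, cy); it is not the patented implementation.

```python
import numpy as np

def tilt_homography(tilt_deg, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Homography mapping pixels of an image taken with the optical axis
    tilted by tilt_deg (about the horizontal axis) to a virtual
    fronto-parallel view, assuming a pure camera rotation."""
    a = np.radians(tilt_deg)
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])   # assumed pinhole intrinsics
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(a), -np.sin(a)],
                  [0.0, np.sin(a),  np.cos(a)]])  # rotation by the measured tilt
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Apply homography H to the pixel (x, y)."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

In practice the corrected image would be produced by resampling every pixel through such a homography with an image-warping routine; the Hough transform mentioned above could instead be used to estimate the text line angle directly from the image.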
  • FIG. 5 is a diagram illustrating a position of a camera unit in consideration of a focal length in a character image input device according to an exemplary embodiment of the present invention.
  • FIG. 5A shows an example in which the camera unit is disposed inside the body of the character image input device, and FIG. 5B shows an example in which the camera unit is disposed on the upper outer peripheral surface of the character image input device.
  • the camera unit 120 of the character image input device 100 may be a camera having a fixed focus.
  • As shown in FIG. 5A, when a fixed-focus camera module is used as the camera unit 120, it may be disposed inside the pen-shaped body portion 105 of the character image input device 100 at a position determined by its focal length F.
  • Alternatively, as shown in FIG. 5B, the fixed-focus camera module may be disposed in a protruding housing at the upper end of the pen-shaped body portion 105 of the character image input device 100, again in consideration of its focal length F, so that characters can be photographed.
  • Since the fixed-focus camera module 120 is disposed on the body portion 105 at a predetermined distance from the pen tip formed at one end of the body portion 105, namely at its focal distance, there is no need to refocus before each capture, so a clear character image can be obtained quickly.
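The coverage implied by a fixed focal distance follows from simple pinhole geometry, sketched below. The field-of-view angle used in the example is an assumed value for illustration, not a figure from this disclosure.

```python
import math

def capture_width_mm(focal_distance_mm, fov_deg):
    """Width of the strip of paper covered by a fixed-focus camera held
    at exactly its focal distance from the page (pinhole geometry)."""
    return 2.0 * focal_distance_mm * math.tan(math.radians(fov_deg) / 2.0)
```

For instance, with a 50 mm focal distance and a hypothetical 60° horizontal field of view, the device would cover roughly 58 mm of the page per capture.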
  • FIG. 6 is a diagram illustrating an example of a finger guide unit that positions the fingers so as not to block the angle of view of the camera unit, together with examples of holding the character image input device according to an embodiment of the present invention.
  • To prevent the user's finger from entering the field of view (FOV) of the camera unit 120 when the user grasps the pen-shaped body portion 105, a finger guide B for receiving the finger may be formed on a part of the body portion 105.
  • FIG. 6A shows a state in which the camera unit 120 is disposed inside the lower end of the body portion 105, and the finger guide portion B is formed on the outer peripheral surface of the lower part of the body portion 105.
  • In this case, as shown in FIG. 6B, the middle finger stays in the finger guide portion B, so that the field of view (FOV) of the camera unit 120 is not blocked.
  • FIG. 6C shows a state in which the camera unit 120 is disposed outside the upper end of the body portion 105, and the finger guide portion B is formed on the outer peripheral surface of the body portion 105 below the camera unit 120.
  • In this case, as shown in FIG. 6D, the thumb and index finger stay in the finger guide portion B, so that the fingers do not obscure the angle of view (FOV) of the camera unit 120.
  • The finger guide portion B guides the user's finger so that it does not obscure the field of view of the camera unit 120, and it may have any indented or protruding shape.
  • FIG. 7 is a flowchart illustrating an operation of recognizing a character and performing a dictionary search in a character recognition electronic dictionary system according to an embodiment of the present invention.
  • the character image input apparatus 100 receives a user's command through the user input unit 111 and instructs the camera unit 120 to acquire a character image (S701).
  • The camera unit 120 photographs the characters to acquire an image, and the tilt measuring unit 130 measures the tilt of the optical axis of the camera unit 120 with respect to the characters (S703).
  • the character image input apparatus 100 transmits the obtained character image data and the measured tilt value to the dictionary search apparatus 200 through the communication unit 140 (S705).
  • the communication unit 210 of the dictionary searching apparatus 200 receives the data on the obtained character image and the measured tilt value.
  • The image preprocessing unit 220 in the dictionary search apparatus 200 corrects the tilt distortion of the character image based on the photographed character image and the received tilt value (S707).
  • the optical character recognition unit 230 in the dictionary searching apparatus 200 extracts character information from the corrected character image (S709).
  • If the character image is erroneously recognized or not recognized at all in the character information extraction process of step S709, the dictionary search apparatus 200 transmits the character image in which the recognition problem occurred to the management apparatus 300 (S713).
  • the management apparatus 300 analyzes and learns the received character image data to adjust the parameters related to the extraction of information for recognizing characters in the character image, thereby improving the recognition rate of the character information (S715).
  • The dictionary search apparatus 200 receives the adjusted parameters from the management apparatus 300 (S717) and updates the parameters of the optical character recognition unit 230 with them. Accordingly, the dictionary search apparatus 200 can extract character information from character images with an improved optical character recognition rate.
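The flow of steps S701 to S717 can be summarized schematically as below. Every function body here is a placeholder stub written for illustration; none of them is the patented algorithm.

```python
def acquire():
    # S701–S703: photograph the characters and measure the optical-axis tilt.
    return {"image": "raw", "tilt_deg": 12.0}

def correct_tilt(frame):
    # S707: undo the perspective distortion using the measured tilt.
    return {"image": "corrected", "tilt_deg": 0.0}

def recognize(frame):
    # S709: optical character recognition on the corrected image (stub).
    return "contents" if frame["tilt_deg"] == 0.0 else None

def lookup(dictionary_fn, send_for_retraining):
    frame = acquire()                       # S701–S705
    text = recognize(correct_tilt(frame))   # S707–S709
    if text is None:                        # recognition failed
        send_for_retraining(frame)          # S713: forward to management device
        return None
    return dictionary_fn(text)              # dictionary search on success
```

The retraining branch corresponds to steps S713 to S717, after which the recognizer's parameters would be updated and subsequent calls would see the improved recognition rate.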
  • The character recognition electronic dictionary search method described above can be embodied as computer-readable code on a computer-readable recording medium.
  • The computer-readable recording medium includes all kinds of recording devices in which data readable by a computer system is stored. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage, and the like.
  • the computer-readable recording medium may be distributed over network-connected computer systems so that computer readable codes can be stored and executed in a distributed manner.
  • functional programs, codes, and code segments for implementing the present invention can be easily deduced by programmers skilled in the art to which the present description belongs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)

Abstract

The present invention relates to a text image input device comprising: a body extending in one direction; a communication unit disposed inside the body so as to communicate with another device; a tip unit disposed at one end of the body; a camera unit disposed in a portion of the body so as to capture an image of the surface of a subject located on its optical axis; a tilt measuring unit for acquiring the tilt of the optical axis of the camera unit; and a control unit disposed inside the body, which controls such that, when the tip unit is brought into contact with the subject, an image of the subject's surface and the tilt between the subject's surface and the optical axis are acquired, and the image and the tilt are transmitted to the other device so as to enable the other device to recognize character information by optical character recognition on the basis of the image and the tilt.
PCT/KR2018/007501 2017-09-25 2018-07-03 Capture d'image pour reconnaissance de caractères Ceased WO2019059503A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170123287A KR102022772B1 (ko) 2017-09-25 2017-09-25 문자 인식용 영상 획득
KR10-2017-0123287 2017-09-25

Publications (1)

Publication Number Publication Date
WO2019059503A1 true WO2019059503A1 (fr) 2019-03-28

Family

ID=65810798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/007501 Ceased WO2019059503A1 (fr) 2017-09-25 2018-07-03 Capture d'image pour reconnaissance de caractères

Country Status (2)

Country Link
KR (1) KR102022772B1 (fr)
WO (1) WO2019059503A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024177387A1 (fr) * 2023-02-23 2024-08-29 삼성전자주식회사 Dispositif électronique et procédé de reconnaissance de texte

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001155140A (ja) * 1999-11-30 2001-06-08 Hitachi Ltd 映像情報入力装置および情報処理システム
KR20010085741A (ko) * 1998-08-31 2001-09-07 가나이 쓰도무 카메라부착 펜형 입력장치
KR100741368B1 (ko) * 2005-03-21 2007-07-20 유니챌(주) 문자자동인식장치 및 방법
JP2007323667A (ja) * 2007-07-30 2007-12-13 Fujitsu Ltd 撮影装置
KR20110029189A (ko) * 2009-09-15 2011-03-23 연제성 카메라 촬영 영상으로 복수 개의 단어를 번역하는 단어장 역할을 하는 전자사전 기능을 가진 휴대형기기

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006072542A (ja) * 2004-08-31 2006-03-16 Sato Corp バーコード読み取り装置及びバーコード読み取り方法
KR101310515B1 (ko) * 2006-05-17 2013-09-25 정병주 프로젝션을 이용한 문자입력방법과 장치
JP5848230B2 (ja) * 2012-11-12 2016-01-27 グリッドマーク株式会社 手書き入出力システム、手書き入力シート、情報入力システム、情報入力補助シート


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392847A (zh) * 2021-06-17 2021-09-14 拉萨搻若文化艺术产业开发有限公司 一种藏汉英三语ocr手持扫描翻译装置及翻译方法
CN113392847B (zh) * 2021-06-17 2023-12-05 拉萨搻若文化艺术产业开发有限公司 一种藏汉英三语ocr手持扫描翻译装置及翻译方法
CN114120342A (zh) * 2021-11-26 2022-03-01 北京金山数字娱乐科技有限公司 简历文档识别方法、装置、计算设备及存储介质

Also Published As

Publication number Publication date
KR102022772B1 (ko) 2019-09-18
KR20190034812A (ko) 2019-04-03

Similar Documents

Publication Publication Date Title
WO2019107981A1 (fr) Dispositif électronique reconnaissant du texte dans une image
WO2015102361A1 (fr) Appareil et procédé d'acquisition d'image pour une reconnaissance de l'iris à l'aide d'une distance de trait facial
KR102236616B1 (ko) 정보 처리 장치, 그의 제어 방법, 및 기억 매체
WO2019059503A1 (fr) Capture d'image pour reconnaissance de caractères
WO2015122577A1 (fr) Dispositif électronique comprenant une zone de détection minimale d'empreinte digitale et son procédé de traitement d'informations
WO2016153209A2 (fr) Procédé de reconnaissance d'empreintes digitales sans contact par smartphone
WO2017099427A1 (fr) Procédé d'authentification biométrique convergente reposant sur une articulation du doigt et une veine du doigt, et appareil associé
WO2016129917A1 (fr) Terminal d'utilisateur et son procédé de production
WO2016163755A1 (fr) Procédé et appareil de reconnaissance faciale basée sur une mesure de la qualité
WO2021025290A1 (fr) Procédé et dispositif électronique permettant de convertir une entrée manuscrite en texte
WO2018062580A1 (fr) Procédé de traduction de caractères et appareil associé
WO2018008881A1 (fr) Dispositif terminal et serveur de service, procédé et programme de fourniture d'un service d'analyse de diagnostic exécutés par ledit dispositif, et support d'enregistrement lisible par ordinateur sur lequel est enregistré ledit programme
WO2018164364A1 (fr) Procédé de reconnaissance de parties de corps multiples sans contact et dispositif de reconnaissance de parties de corps multiples, à l'aide de multiples données biométriques
WO2019139404A1 (fr) Dispositif électronique et procédé de traitement d'image correspondante
WO2015137666A1 (fr) Appareil de reconnaissance d'objet et son procédé de commande
WO2016122068A1 (fr) Procédé pour reconnaître un pneu et dispositif associé
WO2020256517A2 (fr) Procédé et système de traitement de mappage de phase automatique basés sur des informations d'image omnidirectionnelle
WO2017023139A1 (fr) Appareil de recherche automatique de contenu alternatif pour personne déficiente visuelle
WO2014088125A1 (fr) Dispositif de photographie d'images et procédé associé
WO2016080716A1 (fr) Système de caméra de reconnaissance d'iris, terminal le comprenant et procédé de reconnaissance d'iris du système
JP5254897B2 (ja) 手画像認識装置
WO2014017733A1 (fr) Appareil de photographie numérique et son procédé de commande
WO2021066275A1 (fr) Dispositif électronique et procédé de commande de celui-ci
WO2024210322A1 (fr) Dispositif électronique, serveur et système pour fournir des biosignaux hautement précis sur la base d'informations acquises sans contact, et procédé de fonctionnement associé
WO2017034321A1 (fr) Technique de prise en charge de photographie dans un dispositif possédant un appareil photo et dispositif à cet effet

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18858814

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18858814

Country of ref document: EP

Kind code of ref document: A1