
WO2018194227A1 - Three-dimensional touch recognition device using deep learning and three-dimensional touch recognition method using same - Google Patents


Info

Publication number
WO2018194227A1
Authority
WO
WIPO (PCT)
Prior art keywords
marker
deep learning
brightness
dimensional
touch recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/011272
Other languages
French (fr)
Korean (ko)
Inventor
이수웅
권순오
안희경
이강원
이종일
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Korea Institute of Industrial Technology KITECH
Original Assignee
Korea Institute of Industrial Technology KITECH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Institute of Industrial Technology (KITECH)
Publication of WO2018194227A1
Anticipated expiration
Legal status: Ceased (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/04166 Details of scanning methods, e.g. sampling time, grouping of sub areas or time sharing with display driving
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B5/00 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
    • G08B5/22 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission

Definitions

  • The present invention relates to a three-dimensional touch recognition device using deep learning and a three-dimensional touch recognition method using the same, and more particularly, to a three-dimensional touch recognition device that allows user input to be performed on a flexible, soft material and that can recognize, determine, and process a variety of user inputs using deep learning, and a three-dimensional touch recognition method using the same.
  • Commonly used input devices include the mouse, keyboard, touch pad, and trackball. These require the user to grip or touch the casing and main parts of the device with appropriate force and to move and click them, demanding somewhat sophisticated operation from the user.
  • Korean Patent No. 10-1719278 (title of invention: Deep learning framework and image recognition method for visual-content-based image recognition) discloses an integrated GUI framework that modularizes deep learning technology and includes: a content-based deep learning analysis tool that extracts In/Out parameter properties for each module, analyzes training data sets, and automates deep learning scenarios; a parameter property interworking module that links In/Out parameter properties between modules through the analysis tool; a dynamic call interface interworking module that enables interworking between modules; a standard API interface integration module between the modules; a one-pass integration module that integrates the task analysis, results, and confirmation of the modules; and an analysis result repository that stores the analysis results from the deep learning analysis tool.
  • An object of the present invention, devised to solve the above problems, is to enable user inputs such as pressing and moving without requiring complicated operation.
  • An object of the present invention is to analyze a three-dimensional pattern input to a device using a deep learning algorithm.
  • an input unit having a sheet on whose outer surface a user input acts and which restores its initial shape when the user input is released;
  • a plurality of markers arranged along an inner side surface of the sheet;
  • an imaging unit that collects marker images, i.e., images of the markers that change in response to the user input;
  • an illumination unit that irradiates light toward the inner surface of the sheet;
  • an analysis unit that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through the iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.
  • The deep learning algorithm may be any one of a deep neural network, a convolutional neural network, or a recurrent neural network.
  • The analysis unit may determine the three-dimensional pattern from a position change or brightness change of the size point marker, which is the marker recognized by the imaging unit as having the largest size among the plurality of markers.
  • The analysis unit may determine the three-dimensional pattern from the position change or brightness change of the size point marker together with a change in the size or shape of size auxiliary markers positioned within a predetermined range around the size point marker.
  • The analysis unit may determine the three-dimensional pattern from a position change or brightness change of the brightness point marker, which is the marker recognized by the imaging unit as having the highest brightness among the plurality of markers.
  • The analysis unit may determine the three-dimensional pattern from the position change or brightness change of the brightness point marker together with a change in the size or shape of brightness auxiliary markers positioned within a predetermined range around the brightness point marker.
  • the marker may be formed in a circular shape.
  • The device may further include a start notification unit that provides a visual notification when a start input, the first of the user inputs, acts on the outer surface of the sheet.
  • The device may further include a control unit that receives the deep learning result value from the analysis unit and transmits a control signal corresponding to the deep learning result value to external equipment.
  • The configuration of the present invention for achieving the above object includes the steps of: (i) a user input acting on the sheet; (ii) the imaging unit photographing the inner surface of the sheet per unit time and transferring the captured marker images to the analysis unit; (iii) the analysis unit analyzing the marker images and generating a three-dimensional pattern from the position changes or brightness changes of the markers; and (iv) inputting data of the three-dimensional pattern into a deep learning algorithm and outputting a deep learning result value.
  • In step (iii), the analysis unit may determine the three-dimensional pattern from a position change of the size point marker, which is the marker recognized by the imaging unit as having the largest size among the plurality of markers.
  • In step (iii), the analysis unit may determine the three-dimensional pattern from a position change of the brightness point marker, which is the marker recognized by the imaging unit as having the highest brightness among the plurality of markers.
  • In step (iii), the horizontal component of the marker's three-dimensional displacement can be measured from the position change of the marker.
  • In step (iii), the vertical component of the marker's three-dimensional displacement can be measured from the brightness change of the marker.
  • An effect of the present invention is that, because the three-dimensional pattern input to the device through user input is analyzed and determined using a deep learning algorithm, accuracy can be improved for a variety of user patterns.
  • FIG. 1 is a perspective view of a three-dimensional touch recognition device according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a 3D touch recognition device according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a first user input acting on the sheet according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a pressing operation acting on the sheet according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a movement operation in one direction on the sheet according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a movement operation in another direction on the sheet according to an embodiment of the present invention.
  • FIG. 7 is an image of a three-dimensional pattern according to an exemplary embodiment.
  • an input unit having a sheet on whose outer surface a user input acts and which restores its initial shape when the user input is released;
  • a plurality of markers arranged along an inner side surface of the sheet;
  • an imaging unit that collects marker images, i.e., images of the markers that change in response to the user input;
  • an illumination unit that irradiates light toward the inner surface of the sheet;
  • an analysis unit that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through the iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.
  • FIG. 1 is a perspective view of a three-dimensional touch recognition device according to an embodiment of the present invention, and FIG. 2 is a block diagram of a three-dimensional touch recognition device according to an embodiment of the present invention.
  • As shown in FIGS. 1 and 2, the three-dimensional touch recognition device using deep learning of the present invention may include: an input unit 100 having a sheet 110 on whose outer surface a user input acts and which restores its initial shape when the user input is released; a plurality of markers 120 arranged along the inner surface of the sheet 110; an imaging unit 200 that collects marker images, i.e., images of the markers 120 that change in response to the user input; an illumination unit 300 that irradiates light onto the inner surface of the sheet 110; and an analysis unit 400 that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through the iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.
  • the marker 120 may be formed in a circular shape.
  • the marker 120 is described as being circular, but is not necessarily limited thereto, and may be formed in various shapes such as an ellipse, a square, and a polygon.
  • The marker 120 may be an outline figure formed by a line or a figure filled with color.
  • The installation pattern of the plurality of markers 120 may have a predetermined number of rows and columns, with equal spacing between rows and between columns, or various other patterns such as concentric arrangements may be considered.
  • the plurality of markers 120 may form a square array.
  • In this case, a change in the marker image may be generated by a change in the spacing between the columns or rows of the array formed by the markers 120 or by a change in the shape of the array.
  • A change in the marker image may of course also be generated by a change in the size or shape of a marker 120 itself.
  • A user input may be performed by a body part of the user stopping or moving while in contact with the outer surface of the sheet 110.
  • A body part of the user may mean any part capable of pressing, releasing, moving, and so on while in contact with one surface of the sheet 110, such as a hand or finger, a foot or toe, an elbow, or a knee.
  • the user who generates the user input may include not only a person but also a machine, a robot, and other devices.
  • In the case of pressing, the pressing pressure and the duration for which the press is held are not limited.
  • In the case of releasing, the time taken to release the press is not limited to a specific range. Furthermore, when a press occurs for a predetermined time and is then immediately released, this may be referred to as a click.
  • Movement means that the user moves from one point on the surface of the sheet 110 to another while maintaining the press.
  • The movement path is not limited to any specific shape and may include straight lines and curves, including circles, ellipses, arcs, splines, and the like.
  • the imaging unit 200 may include an element such as a CCD, but is not limited thereto.
  • the imaging unit 200 may include a wide-angle lens to enable photographing of the entire photographing surface.
  • the sheet 110 may be made of an elastic material.
  • The remaining portion of the input unit 100 other than the sheet 110 may be formed of an elastic material like the sheet 110 or, unlike the sheet 110, of a material without elasticity.
  • The input unit 100 may have a shape in which the internal space is open to the outside or, as shown in FIG. 2, a shape in which the internal space is closed from the outside.
  • the lighting unit 300 may include an LED lamp.
  • When no user input occurs for more than a predetermined time, a configuration in which the lighting is automatically turned off may also be adopted.
  • FIG. 3 is a schematic diagram of a first user input acting on the sheet 110 according to an embodiment of the present invention, FIG. 4 is a schematic diagram of a pressing operation acting on the sheet 110 according to an embodiment of the present invention, and FIG. 5 is a schematic diagram of a movement operation in one direction on the sheet 110 according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a movement operation in another direction on the sheet 110 according to an embodiment of the present invention, and FIG. 7 is an image of a three-dimensional pattern according to an embodiment of the present invention.
  • FIGS. 3(a), 4(a), 5(a), and 6(a) may be cross-sectional views of the input unit on which the respective user inputs act.
  • FIGS. 3(b), 4(b), 5(b), and 6(b) may be plan views of the inner surface of the sheet on which the respective user inputs act.
  • reference numeral 121 may indicate a size point marker 121 or a brightness point marker 121.
  • reference numeral 122 may indicate a size assist marker 122 or a brightness assist marker 122.
  • The analysis unit 400 may determine the three-dimensional pattern from a position change or brightness change of the size point marker 121, which is the marker recognized by the imaging unit 200 as having the largest size among the plurality of markers 120. Alternatively, the analysis unit 400 may determine the three-dimensional pattern from a position change or brightness change of the brightness point marker 121, which is the marker recognized by the imaging unit 200 as having the highest brightness among the plurality of markers 120.
  • The size point marker 121 and the brightness point marker 121 denote the largest marker 120 and the brightest marker 120, respectively, in the marker image recognized by the imaging unit 200. In particular, in the case of the brightness point marker 121, since the imaging unit 200 and the lighting unit 300 are located in the same direction, the brightness of a marker 120 may increase as its position approaches the imaging unit 200.
  • The three-dimensional pattern may be a change in position composed of a two-dimensional horizontal displacement and a vertical displacement perpendicular to that horizontal displacement. The two-dimensional horizontal displacement may be expressed as a displacement in the x-y plane, and the vertical displacement as a displacement along the z axis perpendicular to the x-y plane.
  • Referring to FIGS. 4 to 6, when a pressing operation is performed on the outer surface of the sheet 110 with the hand and a movement operation follows, the position of the size point marker 121, recognized by the imaging unit 200 as having the largest size, may change as the hand presses and moves.
  • The movement of the size point marker 121 may be recognized as a change in which of the plurality of markers 120 is recognized by the imaging unit 200 as having the largest size. That is, the markers 120 themselves may not move; rather, the position change of the size point marker 121 is recognized and interpreted as movement of the size point marker 121.
  • The analysis unit 400 may determine the three-dimensional pattern from the position change or brightness change of the size point marker 121 together with the change in the size or shape of the size auxiliary markers 122 positioned within a predetermined range around the size point marker 121.
  • The predetermined range may be the matrix range of markers forming an N×M array (N and M being arbitrary integers) around the size point marker 121, or the range of a circle containing L markers (L being an arbitrary integer) around the size point marker 121. However, the present invention is not limited thereto, and the predetermined range may also be the entire inner surface of the sheet 110. (The predetermined range is indicated by the dotted lines in FIGS. 5 and 6.)
  • When the position of the size point marker 121 changes, the size or shape of the size auxiliary markers 122 positioned within the predetermined range changes, and data about the size auxiliary markers 122 may be stored in the analysis unit 400 in advance or stored in the analysis unit 400 through deep learning.
  • The analysis unit 400 may determine the three-dimensional pattern by analyzing not only the image of the position change or brightness change of the size point marker 121 but also the images of the changes in size or shape of each of the individual size auxiliary markers 122.
  • Specifically, when the size point marker 121 undergoes a horizontal displacement, the size or shape of the size auxiliary markers 122 within a predetermined range, for example a 3×3 array, changes, and the three-dimensional pattern may be determined by comprehensively evaluating the movement of the size point marker 121 and the size or shape changes of the size auxiliary markers 122.
  • As described above, the two-dimensional horizontal displacement of the hand can be measured from the position change of the size point marker 121, and the vertical displacement of the hand from the brightness change of the size point marker 121.
  • Accordingly, three-dimensional patterns can be determined for user input made by hand.
  • Referring to FIGS. 4 to 6, when a pressing operation is performed on the outer surface of the sheet 110 with the hand and a movement operation follows, the position of the brightness point marker 121, recognized by the imaging unit 200 as having the highest brightness, may change as the hand presses and moves.
  • The movement of the brightness point marker 121 may be recognized as a change in which of the plurality of markers 120 is recognized by the imaging unit 200 as having the highest brightness. That is, the markers 120 themselves do not move; rather, the position change of the brightness point marker 121 is recognized and interpreted as movement of the brightness point marker 121.
  • The analysis unit 400 may determine the three-dimensional pattern from the position change or brightness change of the brightness point marker 121 together with the change in the size or shape of the brightness auxiliary markers 122 positioned within a predetermined range around the brightness point marker 121.
  • The predetermined range may be the matrix range of markers forming an N×M array (N and M being arbitrary integers) around the brightness point marker 121, or the range of a circle containing L markers (L being an arbitrary integer) around the brightness point marker 121. However, the present invention is not limited thereto, and the predetermined range may also be the entire inner surface of the sheet 110. (The predetermined range is indicated by the dotted lines in FIGS. 5 and 6.)
  • When the position of the brightness point marker 121 changes, the size or shape of the brightness auxiliary markers 122 positioned within the predetermined range changes, and data about the brightness auxiliary markers 122 may be stored in the analysis unit 400 in advance or stored in the analysis unit 400 through deep learning.
  • The analysis unit 400 may determine the three-dimensional pattern by analyzing not only the image of the position change or brightness change of the brightness point marker 121 but also the images of the changes in size or shape of each of the individual brightness auxiliary markers 122.
  • Specifically, when the brightness point marker 121 undergoes a horizontal displacement, the size or shape of the brightness auxiliary markers 122 within a predetermined range, for example a 3×3 array, changes, and the three-dimensional pattern may be determined by comprehensively evaluating the movement of the brightness point marker 121 and the size or shape changes of the brightness auxiliary markers 122.
  • As described above, the two-dimensional horizontal displacement of the hand can be measured from the position change of the brightness point marker 121, and the vertical displacement of the hand from the brightness change of the brightness point marker 121.
  • Accordingly, three-dimensional patterns can be determined for user input made by hand.
  • When any one marker 120 at the portion of the inner surface of the sheet 110 where the pressing operation is performed moves closer to the imaging unit 200, its brightness may increase.
  • The marker 120 whose brightness has increased may be recognized as the brightness point marker 121, and the first point P1 may be set as the start coordinate.
  • Under a stronger pressing force, the brightness point marker 121 may be recognized as becoming still brighter, and the vertical displacement due to the user input may be measured from the brightness change of the brightness point marker 121. That is, it can be measured as a change of the three-dimensional coordinates from the first point P1 to the second point P2.
  • When the pressing is released, a vertical displacement may be formed in the direction opposite to that in FIG. 4.
  • As the hand moves, the marker 120 with the greatest brightness changes, and the brightness point marker 121 may thus be recognized as moving from the second point P2 to the third point P3.
  • As the movement continues, the marker 120 with the greatest brightness changes again, and the brightness point marker 121 may be recognized as moving from the third point P3 to the fourth point P4.
  • A three-dimensional pattern as shown in FIG. 7 may be formed by the combination of the horizontal displacement and the vertical displacement as described above.
  • the three-dimensional touch recognition apparatus using the deep learning of the present invention can detect not only a three-dimensional pattern but also a force in a vertical direction or a horizontal direction.
  • the vertical force may be measured by multiplying the vertical displacement by the elastic modulus of the sheet 110.
  • the sheet 110 may be manufactured so that the elastic modulus of the sheet 110 is the same at each point of the sheet 110.
  • The horizontal force can be measured as a force corresponding to the frictional force calculated as the product of the vertical force N applied to the sheet 110 by the pressing operation of the user's hand and the friction coefficient (μ) of the sheet 110.
  • the friction coefficient of the sheet 110 may be determined by referring to the friction coefficient reference data table stored in the analysis unit 400.
  • The friction coefficient of the sheet 110 here may differ from the surface friction coefficient that is an intrinsic property of the surface of the sheet 110.
  • The friction coefficient of the sheet 110 used in the horizontal force detection method of the present invention is determined in consideration of the state in which the shape of the sheet 110 is deformed by the pressing operation, and can be determined from data obtained through mechanical experiments on the sheet.
  • The horizontal force may then be calculated by multiplying the friction coefficient of the sheet 110 determined through the above process by the vertical force; a sketch of this force estimation follows at the end of this list.
  • The deep learning algorithm may be one of a deep neural network, a convolutional neural network, or a recurrent neural network.
  • the deep learning algorithm used in the 3D touch recognition device using the deep learning of the present invention may be a known technique.
  • the neural network described above is used as the deep learning algorithm, but the present invention is not limited thereto.
  • each three-dimensional pattern may be represented by a change in three-dimensional coordinates, and as shown in FIG. 7, each three-dimensional pattern may be represented and stored as a three-dimensional image, respectively.
  • The analysis unit 400 performs learning on each three-dimensional pattern using a deep learning algorithm and can derive a deep learning result value by analyzing and determining the three-dimensional pattern of a user input based on the learned data.
  • Even if the three-dimensional pattern input by the user is not exactly identical to a stored pattern, the analysis unit 400 can recognize the user input as the intended three-dimensional pattern. Accordingly, even when the three-dimensional pattern input by the user differs in its three-dimensional coordinates, if the pattern generated by the user input is determined to be the same as a previously stored three-dimensional pattern, the analysis unit 400 may determine it to be the same pattern and output the corresponding deep learning result value.
  • The three-dimensional touch recognition device using deep learning of the present invention may further include a start notification unit that provides a visual notification when a start input, the first of the user inputs, acts on the outer surface of the sheet 110.
  • When formation of the three-dimensional pattern starts, the user may need a way to check whether the first user input has been recognized.
  • Accordingly, the start notification unit may emit light so that the user can confirm that formation of the three-dimensional pattern has started. However, the start notification unit is not limited thereto and may let the user confirm the start of three-dimensional pattern formation using sound or vibration.
  • The three-dimensional touch recognition device using deep learning of the present invention may further include a controller 500 that receives the deep learning result value from the analysis unit 400 and transmits a control signal corresponding to the deep learning result value to external equipment.
  • Each three-dimensional pattern can be matched to a specific instruction. Specifically, if the three-dimensional pattern shown in FIG. 7 is matched to starting the operation of the external equipment, then when this pattern is input, the analysis unit 400 analyzes and determines it and outputs the deep learning result value to the controller 500, and the controller 500 may transmit a control signal matching the deep learning result value to the external equipment to start its operation.
  • a computer having a three-dimensional touch recognition device using the deep learning of the present invention can be manufactured.
  • The deep learning result value for the three-dimensional pattern processed by the analysis unit 400 may be transmitted to the computer through the controller 500, and a command corresponding to the three-dimensional pattern may be transmitted so that the computer executes the command.
  • the robot having the 3D touch recognition device using the deep learning of the present invention can be manufactured.
  • The end effector of such a robot can be moved to perform work corresponding to the three-dimensional pattern, and the three-dimensional touch recognition device using deep learning of the present invention can thus function as a steering device for controlling the robot.
  • user input may act on the sheet 110.
  • the imaging unit 200 may transfer the photographed marker image to the analysis unit 400 by photographing the inner surface of the sheet 110 per unit time.
  • the unit time may be in milliseconds (ms), and the photographing may be performed in units smaller than milliseconds (ms) in order to improve accuracy.
  • The analysis unit 400 may analyze the marker images and generate a three-dimensional pattern from the position changes or brightness changes of the markers 120.
  • The analysis unit 400 may determine the three-dimensional pattern from a position change of the size point marker 121, which is the marker recognized by the imaging unit 200 as having the largest size among the plurality of markers 120.
  • Alternatively, the analysis unit 400 may determine the three-dimensional pattern from a position change of the brightness point marker 121, which is the marker recognized by the imaging unit 200 as having the highest brightness among the plurality of markers 120.
  • The horizontal component of the three-dimensional displacement of the marker 120 can be measured from the position change of the marker 120.
  • The vertical component of the three-dimensional displacement of the marker 120 can be measured from the brightness change of the marker 120.
  • the deep learning result may be output by inputting data of the 3D pattern to the deep learning algorithm.
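
The force relations above can be put into a short sketch. This is a minimal illustration only: the function names, units, and the friction lookup table are assumptions made for the example, and the sketch simply takes the simplified relations stated above (vertical force as vertical displacement times the sheet's elastic modulus; horizontal force as the friction coefficient times the vertical force) at face value.

```python
# Minimal sketch of the force estimation described above.
# All names, units, and the friction table are illustrative assumptions;
# the source only specifies the two relations used below.

def vertical_force(vertical_displacement_mm: float, elastic_modulus: float) -> float:
    """Vertical force estimated as vertical displacement x elastic modulus
    of the sheet (the sheet is assumed to have a uniform modulus)."""
    return vertical_displacement_mm * elastic_modulus

def horizontal_force(v_force: float, v_displacement_mm: float,
                     friction_table: dict[float, float]) -> float:
    """Horizontal force estimated as the friction force mu * N, where mu is
    read from a reference table indexed by deformation depth (a stand-in
    for the experimentally determined coefficient)."""
    # Pick the table entry with the closest deformation depth.
    depth = min(friction_table, key=lambda d: abs(d - v_displacement_mm))
    mu = friction_table[depth]
    return mu * v_force

# Example with made-up numbers: a 3 mm press on a sheet with modulus
# 0.8 N/mm, and a coarse experimentally derived friction table.
table = {1.0: 0.40, 3.0: 0.55, 5.0: 0.70}
n = vertical_force(3.0, 0.8)          # 2.4 N
f = horizontal_force(n, 3.0, table)   # 0.55 * 2.4 = 1.32 N
print(f"vertical {n:.2f} N, horizontal {f:.2f} N")
```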

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

One embodiment of the present invention provides: a three-dimensional touch recognition device for enabling a user input to be performed on a soft elastic material, and various user inputs to be processed by being recognized and determined using deep learning; and a three-dimensional touch recognition method using same. The three-dimensional touch recognition device using deep learning, according to one embodiment of the present invention, comprises: an input unit provided with a sheet having a user input applied to the outer surface thereof, and having a function of being restored to the initial shape thereof when the user input is released; a plurality of markers arranged along the inner surface of the sheet; a capturing unit for collecting marker images which are images of markers changing according to the user input; a lighting unit for irradiating light towards the inner surface of the sheet; and an analysis unit for generating a three-dimensional pattern by analyzing the marker images, and outputting a deep learning result value by means of a repetitive operation of inputting data associated with the three-dimensional pattern in a deep learning algorithm.

Description

3D touch recognition device using deep learning and 3D touch recognition method using same

The present invention relates to a three-dimensional touch recognition device using deep learning and a three-dimensional touch recognition method using the same, and more particularly, to a three-dimensional touch recognition device that allows user input to be performed on a flexible, soft material and that can recognize, determine, and process a variety of user inputs using deep learning, and a three-dimensional touch recognition method using the same.

Commonly used input devices today include the mouse, keyboard, touch pad, and trackball. These require the user to grip or touch the casing and main parts of the device with appropriate force and to move and click them, demanding somewhat sophisticated operation from the user.

However, for disabled persons, children, or the elderly, it may be difficult or impossible to form the precise motions required to use an input device having the above configuration.

Input devices that such users can easily use are being developed, but they generally rely on advanced technologies such as voice recognition or adopt special structures, so most have complicated structures and high production costs.

Recently, machine learning including deep learning has contributed greatly to the vitalization and improved accuracy of the pattern recognition field, driven by the development of IoT technology and advances in hardware such as GPUs that can support big data processing. Deep learning and related techniques can thus be used to improve the diversity and accuracy of recognition.

Korean Patent No. 10-1719278 (title of invention: Deep learning framework and image recognition method for visual-content-based image recognition) discloses an integrated GUI framework that modularizes deep learning technology and includes: a content-based deep learning analysis tool that extracts In/Out parameter properties for each module, analyzes training data sets, and automates deep learning scenarios; a parameter property interworking module that links In/Out parameter properties between modules through the analysis tool; a dynamic call interface interworking module that enables interworking between modules; a standard API interface integration module between the modules; a one-pass integration module that integrates the task analysis, results, and confirmation of the modules; and an analysis result repository that stores the analysis results from the deep learning analysis tool.

(Prior patent document) Republic of Korea Patent Registration No. 10-1719278

An object of the present invention, devised to solve the above problems, is to enable user inputs such as pressing and moving without requiring complicated operation.

Another object of the present invention is to analyze a three-dimensional pattern input to the device using a deep learning algorithm.

The technical problems to be solved by the present invention are not limited to those mentioned above, and other technical problems not mentioned will be clearly understood by those of ordinary skill in the art from the description below.

The configuration of the present invention for achieving the above object includes: an input unit having a sheet on whose outer surface a user input acts and which restores its initial shape when the user input is released; a plurality of markers arranged along an inner side surface of the sheet; an imaging unit that collects marker images, i.e., images of the markers that change in response to the user input; an illumination unit that irradiates light toward the inner surface of the sheet; and an analysis unit that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through the iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.

In an embodiment of the present invention, the deep learning algorithm may be any one of a deep neural network, a convolutional neural network, or a recurrent neural network.
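
As a concrete illustration of the convolutional option, a small network over the rendered three-dimensional pattern image (as in FIG. 7) might look like the following PyTorch sketch. The topology, input size, and number of gesture classes are assumptions made for illustration; the present invention does not specify a particular network architecture.

```python
# Minimal sketch of a CNN that classifies a rendered 3D-pattern image.
# Topology, input size (1x64x64), and the number of gesture classes (10)
# are illustrative assumptions; a CNN is only one of the named options.
import torch
import torch.nn as nn

class PatternCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One training-style step on a dummy batch of pattern images.
model = PatternCNN()
images = torch.randn(8, 1, 64, 64)     # batch of rendered 3D patterns
labels = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                         # backpropagation (cf. G06N3/084)
print(loss.item())
```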

In an embodiment of the present invention, the analysis unit may determine the three-dimensional pattern from a position change or brightness change of the size point marker, which is the marker recognized by the imaging unit as having the largest size among the plurality of markers.

In an embodiment of the present invention, the analysis unit may determine the three-dimensional pattern from the position change or brightness change of the size point marker together with a change in the size or shape of size auxiliary markers positioned within a predetermined range around the size point marker.

In an embodiment of the present invention, the analysis unit may determine the three-dimensional pattern from a position change or brightness change of the brightness point marker, which is the marker recognized by the imaging unit as having the highest brightness among the plurality of markers.

In an embodiment of the present invention, the analysis unit may determine the three-dimensional pattern from the position change or brightness change of the brightness point marker together with a change in the size or shape of brightness auxiliary markers positioned within a predetermined range around the brightness point marker.

In an embodiment of the present invention, the marker may be formed in a circular shape.

In an embodiment of the present invention, the device may further include a start notification unit that provides a visual notification when a start input, the first of the user inputs, acts on the outer surface of the sheet.

In an embodiment of the present invention, the device may further include a control unit that receives the deep learning result value from the analysis unit and transmits a control signal corresponding to the deep learning result value to external equipment.

The configuration of the present invention for achieving the above object includes the steps of: (i) a user input acting on the sheet; (ii) the imaging unit photographing the inner surface of the sheet per unit time and transferring the captured marker images to the analysis unit; (iii) the analysis unit analyzing the marker images and generating a three-dimensional pattern from the position changes or brightness changes of the markers; and (iv) inputting data of the three-dimensional pattern into a deep learning algorithm and outputting a deep learning result value.
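
The following sketch shows how steps (i) through (iv) might fit together in code. The capture, marker-location, and recognition functions are hypothetical stubs standing in for the imaging unit and analysis unit; only the overall flow follows the method described above.

```python
# Sketch of the recognition loop in steps (i)-(iv). Frame capture and the
# trained model are stubbed out; the loop structure mirrors the method:
# capture per unit time -> track the point marker -> build the 3D pattern
# -> classify it with the deep learning model.
import numpy as np

def capture_marker_image() -> np.ndarray:
    """Stub for the imaging unit: one grayscale frame of the sheet's
    inner surface (random noise here as a placeholder)."""
    return np.random.rand(64, 64)

def locate_point_marker(frame: np.ndarray) -> tuple[int, int, float]:
    """Stub for the analysis unit: treat the brightest pixel as the
    brightness point marker and return (row, col, brightness)."""
    r, c = np.unravel_index(np.argmax(frame), frame.shape)
    return int(r), int(c), float(frame[r, c])

def recognize(pattern: list[tuple[int, int, float]]) -> int:
    """Stub for the deep learning model: map a 3D pattern to a class id."""
    return hash(len(pattern)) % 10

pattern = []
for _ in range(30):                    # (ii) one frame per unit time
    frame = capture_marker_image()     # (i)/(ii) the user input is imaged
    r, c, brightness = locate_point_marker(frame)
    pattern.append((r, c, brightness)) # (iii) x, y from position; z from brightness
result = recognize(pattern)            # (iv) deep learning result value
print(f"deep learning result value: {result}")
```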

In an embodiment of the present invention, in step (iii), the analysis unit may determine the three-dimensional pattern from a position change of the size point marker, which is the marker recognized by the imaging unit as having the largest size among the plurality of markers.

In an embodiment of the present invention, in step (iii), the analysis unit may determine the three-dimensional pattern from a position change of the brightness point marker, which is the marker recognized by the imaging unit as having the highest brightness among the plurality of markers.

In an embodiment of the present invention, in step (iii), the horizontal component of the marker's three-dimensional displacement can be measured from the position change of the marker.

In an embodiment of the present invention, in step (iii), the vertical component of the marker's three-dimensional displacement can be measured from the brightness change of the marker.
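
The two measurements in step (iii) might be computed as in the sketch below: horizontal displacement from the point marker's position change and vertical displacement from its brightness change. The linear brightness-to-depth calibration is an assumption made for the example; the description only states that brightness increases as a marker approaches the imaging unit.

```python
# Sketch of step (iii): horizontal displacement from the marker's position
# change, vertical displacement from its brightness change. The linear
# brightness-to-depth factor (depth_per_brightness) is an assumption.
import math

def horizontal_displacement(p0: tuple[float, float],
                            p1: tuple[float, float]) -> float:
    """Euclidean displacement of the point marker in the x-y plane."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1])

def vertical_displacement(b0: float, b1: float,
                          depth_per_brightness: float = 0.05) -> float:
    """Depth change along z, assuming brightness grows linearly as the
    marker approaches the imaging unit."""
    return (b1 - b0) * depth_per_brightness

# Marker tracked across two frames: moved 3 px right, 4 px up, got brighter.
dxy = horizontal_displacement((10.0, 10.0), (13.0, 14.0))   # 5.0 px
dz = vertical_displacement(120.0, 180.0)                     # 3.0 units
print(f"horizontal: {dxy:.1f} px, vertical: {dz:.1f}")
```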

An effect of the present invention according to the above configuration is that user input is performed on a soft, elastic material, and a variety of user inputs can be recognized, determined, and processed through it, so that users for whom precise movement is difficult, such as disabled persons, patients, and infants, can use the device conveniently.

A further effect of the present invention is that, because the three-dimensional pattern input to the device through user input is analyzed and determined using a deep learning algorithm, accuracy can be improved for a variety of user patterns.

The effects of the present invention are not limited to those described above and should be understood to include all effects that can be deduced from the configuration of the invention described in the detailed description or the claims.

FIG. 1 is a perspective view of a three-dimensional touch recognition device according to an embodiment of the present invention.

FIG. 2 is a block diagram of a three-dimensional touch recognition device according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of a first user input acting on the sheet according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of a pressing operation acting on the sheet according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of a movement operation in one direction on the sheet according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of a movement operation in another direction on the sheet according to an embodiment of the present invention.

FIG. 7 is an image of a three-dimensional pattern according to an embodiment of the present invention.

The most preferred embodiment of the present invention includes: an input unit having a sheet on whose outer surface a user input acts and which restores its initial shape when the user input is released; a plurality of markers arranged along an inner side surface of the sheet; an imaging unit that collects marker images, i.e., images of the markers that change in response to the user input; an illumination unit that irradiates light toward the inner surface of the sheet; and an analysis unit that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through the iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.

Hereinafter, the present invention will be described with reference to the accompanying drawings. However, the present invention may be implemented in many different forms and is therefore not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted in order to clearly describe the present invention, and like reference numerals designate like parts throughout the specification.

Throughout the specification, when a part is said to be "connected (joined, in contact, coupled)" to another part, this includes not only the case where it is "directly connected" but also the case where it is "indirectly connected" with another member in between. In addition, when a part is said to "include" a certain component, this means that it may further include other components rather than excluding them, unless specifically stated otherwise.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. As used herein, terms such as "comprise" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood not to preclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a perspective view of a three-dimensional touch recognition device according to an embodiment of the present invention, and FIG. 2 is a block diagram of a three-dimensional touch recognition device according to an embodiment of the present invention.

As shown in FIGS. 1 and 2, the three-dimensional touch recognition device using deep learning of the present invention may include: an input unit 100 having a sheet 110 on whose outer surface a user input acts and which restores its initial shape when the user input is released; a plurality of markers 120 arranged along the inner surface of the sheet 110; an imaging unit 200 that collects marker images, i.e., images of the markers 120 that change in response to the user input; an illumination unit 300 that irradiates light onto the inner surface of the sheet 110; and an analysis unit 400 that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through the iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.

The marker 120 may be formed in a circular shape.

In the embodiment of the present invention, the marker 120 is described as circular, but it is not necessarily limited thereto and may be formed in various shapes such as an ellipse, a square, or a polygon. The marker 120 may also be an outline figure formed by a line or a figure filled with color.

The installation pattern of the plurality of markers 120 may have a predetermined number of rows and columns, with equal spacing between rows and between columns, or various other patterns such as concentric arrangements may be considered.

Preferably, the plurality of markers 120 may form a square array.

In this case, a change in the marker image may be generated by a change in the spacing between the columns or rows of the array formed by the markers 120 or by a change in the shape of the array. A change in the marker image may of course also be generated by a change in the size or shape of a marker 120 itself.
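
One plausible way to extract the marker array and its changes from a frame is simple thresholding and contour detection, sketched below with OpenCV. The thresholds and the synthetic test image are assumptions made for the example; any detector that reports each marker's centroid, area, and brightness would serve the same purpose.

```python
# Sketch of marker extraction from one frame of the sheet's inner surface.
# The thresholds and the synthetic test image are illustrative assumptions.
import cv2
import numpy as np

# Synthetic frame: a 3x3 grid of bright circular markers on a dark sheet.
frame = np.zeros((120, 120), dtype=np.uint8)
for cy in (20, 60, 100):
    for cx in (20, 60, 100):
        cv2.circle(frame, (cx, cy), 8, 200, thickness=-1)

# Threshold, then take each connected contour as one marker.
_, binary = cv2.threshold(frame, 64, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

markers = []
for cnt in contours:
    m = cv2.moments(cnt)
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid
    area = cv2.contourArea(cnt)                          # size cue
    markers.append((cx, cy, area))

# The largest blob would be the size point marker; comparing centroids and
# areas across frames yields the spacing and shape changes described above.
size_point = max(markers, key=lambda m: m[2])
print(len(markers), "markers; size point marker at", size_point[:2])
```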

The user input may be performed by a body part of the user remaining still or moving while in contact with the outer surface of the sheet 110.

The user's body part may be any part capable of pressing, releasing, or moving while in contact with one surface of the sheet 110, such as a hand or finger, a foot or toe, an elbow, or a knee.

In addition, the user, i.e., the subject generating the user input, is not limited to a person and may include a machine, a robot, or other devices.

In the case of pressing, neither the pressing force nor the duration of the press is limited. In the case of releasing, the time required for the release is not limited to any particular range. Furthermore, when a press occurs for a predetermined time and is immediately followed by a release, this may be referred to as a click. Movement means that the user moves from one point on the surface of the sheet 110 to another while maintaining the press; the movement path is not limited to any particular form and may include straight lines and curves, where the curves may include circles, ellipses, arcs, splines, and the like, but are not limited thereto.
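For illustration only, the distinction among press, click, and move could be drawn from timestamped contact samples as in the following minimal sketch; the ContactSample record, the threshold constant, and the classification rule are assumptions of this sketch, not part of the disclosure.

    from dataclasses import dataclass

    CLICK_MAX_DURATION_S = 0.3   # assumed threshold, not from the disclosure

    @dataclass
    class ContactSample:
        t: float        # timestamp in seconds
        x: float        # horizontal position on the sheet
        y: float
        pressed: bool

    def classify(samples):
        """Label a completed contact as 'click', 'press', or 'move'
        from a time-ordered list of ContactSample records."""
        pressed = [s for s in samples if s.pressed]
        if not pressed:
            return None
        duration = pressed[-1].t - pressed[0].t
        moved = (pressed[0].x, pressed[0].y) != (pressed[-1].x, pressed[-1].y)
        if moved:
            return "move"
        return "click" if duration <= CLICK_MAX_DURATION_S else "press"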

The imaging unit 200 may include an element such as a CCD, but is not limited to this example.

The imaging unit 200 may also include a wide-angle lens so that the entire imaging surface can be photographed.

The sheet 110 may be made of an elastic material.

The remaining portion of the input unit 100 other than the sheet 110 may be formed of an elastic material like the sheet 110 or, unlike the sheet 110, may be formed of an inelastic material.

In addition, the input unit 100 may have a shape in which the internal space is open to the outside or, as shown in FIG. 2, a shape in which the internal space is closed from the outside.

The illumination unit 300 may include an LED lamp.

A configuration may also be adopted in which the lamp is automatically turned off when no user input occurs for more than a predetermined time.

FIG. 3 is a schematic diagram of an initial user input acting on the sheet 110 according to an embodiment of the present invention, FIG. 4 is a schematic diagram of a pressing motion acting on the sheet 110 according to an embodiment of the present invention, and FIG. 5 is a schematic diagram of a moving motion acting in one direction on the sheet 110 according to an embodiment of the present invention. FIG. 6 is a schematic diagram of a moving motion acting in another direction on the sheet 110 according to an embodiment of the present invention, and FIG. 7 is an image in which a three-dimensional pattern according to an embodiment of the present invention is visualized.

Here, FIG. 3(a), FIG. 4(a), FIG. 5(a), and FIG. 6(a) may be cross-sectional views of the input unit on which the respective user inputs act, and FIG. 3(b), FIG. 4(b), FIG. 5(b), and FIG. 6(b) may be plan views of the inner surface of the sheet on which the respective user inputs act.

In FIGS. 3 to 6, reference numeral 121 may denote a size point marker 121 or a brightness point marker 121, and reference numeral 122 may denote a size auxiliary marker 122 or a brightness auxiliary marker 122.

The analysis unit 400 may determine the three-dimensional pattern from a change in position or a change in brightness of the size point marker 121, i.e., the marker recognized by the imaging unit 200 as the largest among the plurality of markers 120. Alternatively, the analysis unit 400 may determine the three-dimensional pattern from a change in position or a change in brightness of the brightness point marker 121, i.e., the marker recognized by the imaging unit 200 as the brightest among the plurality of markers 120.

The size point marker 121 and the brightness point marker 121 denote, respectively, the largest marker 120 and the brightest marker 120 in the marker image captured by the imaging unit 200. In particular, the brightness point marker 121 exploits the phenomenon that, because the imaging unit 200 and the illumination unit 300 are located in the same direction, the brightness of a marker 120 increases as its position approaches the imaging unit 200.

The three-dimensional pattern may be a change in position consisting of a two-dimensional horizontal displacement and a vertical displacement perpendicular to that horizontal displacement. In the embodiments of the present invention, as shown in FIGS. 3 to 6, the two-dimensional horizontal displacement is expressed as a displacement in the x-y plane, and the vertical displacement is expressed as a displacement along the z axis perpendicular to the x-y plane.
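As a minimal sketch of how the point marker could be located in each captured frame, the blob with the highest mean brightness may be selected with OpenCV's standard blob detector; the parameter values below are assumptions, and a size-based variant would simply rank blobs by kp.size instead.

    import cv2

    def find_point_marker(frame_gray):
        """Locate the blob that the imaging unit would treat as the
        point marker: here, the brightest detected marker blob."""
        params = cv2.SimpleBlobDetector_Params()
        params.filterByColor = True
        params.blobColor = 255        # bright markers on a dark background
        params.filterByArea = True
        params.minArea = 10           # assumed; depends on camera resolution
        detector = cv2.SimpleBlobDetector_create(params)
        keypoints = detector.detect(frame_gray)
        if not keypoints:
            return None

        def mean_brightness(kp):
            x, y = int(kp.pt[0]), int(kp.pt[1])
            r = max(1, int(kp.size / 2))
            patch = frame_gray[max(0, y - r):y + r, max(0, x - r):x + r]
            return float(patch.mean()) if patch.size else 0.0

        best = max(keypoints, key=mean_brightness)
        return best.pt[0], best.pt[1], best.size, mean_brightness(best)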

As one embodiment, the determination of the three-dimensional pattern by the change in position or change in brightness of the marker 120 recognized as the largest will now be described.

As shown in FIG. 3, when a user input first acts on the sheet 110, the size point marker 121 may be recognized. As shown in FIGS. 4 to 6, when the user presses the outer surface of the sheet 110 with a hand and then performs a moving motion, the position of the size point marker 121, i.e., the marker recognized as the largest by the imaging unit 200, may change due to the press and the movement. Here, the movement of the size point marker 121 may be recognized as the position of the marker 120 recognized as the largest by the imaging unit 200 among the plurality of markers 120 changes. That is, it is not the marker 120 itself that moves; rather, the change in the location of the size point marker 121 is recognized and interpreted as the movement of the size point marker 121.

The analysis unit 400 may determine the three-dimensional pattern from the change in position or change in brightness of the size point marker 121 and from the change in size or shape of the size auxiliary markers 122 located within a predetermined range around the size point marker 121.

The predetermined range may be a matrix range forming an N×M array (N and M being arbitrary integers) around the size point marker 121, or a circular range containing L markers (L being an arbitrary integer) around the size point marker 121, but is not limited thereto; nor is it excluded that the predetermined range covers the entire inner surface of the sheet 110. (The predetermined range is indicated by the dotted lines in FIGS. 5 and 6.)
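A sketch of extracting such a predetermined range, assuming the per-marker measurements are stored in a two-dimensional array indexed by grid position; the 3×3 default mirrors the array shown in FIGS. 5 and 6.

    import numpy as np

    def neighborhood(marker_grid, point_idx, n=3, m=3):
        """Slice an n x m window of auxiliary-marker measurements
        centered on the point marker, clipped at the sheet edges."""
        r, c = point_idx
        r0, r1 = max(0, r - n // 2), min(marker_grid.shape[0], r + n // 2 + 1)
        c0, c1 = max(0, c - m // 2), min(marker_grid.shape[1], c + m // 2 + 1)
        return marker_grid[r0:r1, c0:c1]

For a 10 × 10 grid of measured blob sizes, for example, neighborhood(sizes, (4, 4)) returns the 3 × 3 block of size auxiliary markers around the point marker.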

When the size point marker 121 changes position, the size or shape of the size auxiliary markers 122 located within the predetermined range changes, and the corresponding data may be stored in advance in the analysis unit 400 or learned by deep learning and stored in the analysis unit 400.

The analysis unit 400 may then determine the three-dimensional pattern by analyzing not only the image of the change in position or change in brightness of the size point marker 121, but also the images of the changes in size or shape of each of the plurality of size auxiliary markers 122.

Specifically, as shown in FIGS. 5 and 6, when the size point marker 121 undergoes a horizontal displacement, the size or shape of the size auxiliary markers 122 within the predetermined range of the 3×3 array changes, and the three-dimensional pattern may be determined by jointly evaluating the movement of the size point marker 121 and the changes in size or shape of the size auxiliary markers 122.

The two-dimensional horizontal displacement of the hand can be measured from the change in position of the size point marker 121 described above, and the vertical displacement caused by the pressing of the hand can be measured from the change in brightness of the size point marker 121; consequently, the three-dimensional pattern of the user input made by the hand can be determined.

As another embodiment, the determination of the three-dimensional pattern by the change in position or change in brightness of the marker 120 recognized as the brightest will now be described.

As shown in FIG. 3, when a user input first acts on the sheet 110, the brightness point marker 121 may be recognized. As shown in FIGS. 4 to 6, when the user presses the outer surface of the sheet 110 with a hand and then performs a moving motion, the position of the brightness point marker 121, i.e., the marker recognized as the brightest by the imaging unit 200, may change due to the press and the movement. Here, the movement of the brightness point marker 121 may be recognized as the position of the marker 120 recognized as the brightest by the imaging unit 200 among the plurality of markers 120 changes. That is, it is not the marker 120 itself that moves; rather, the change in the location of the brightness point marker 121 is recognized and interpreted as the movement of the brightness point marker 121.

The analysis unit 400 may determine the three-dimensional pattern from the change in position or change in brightness of the brightness point marker 121 and from the change in size or shape of the brightness auxiliary markers 122 located within a predetermined range around the brightness point marker 121.

The predetermined range may be a matrix range forming an N×M array (N and M being arbitrary integers) around the brightness point marker 121, or a circular range containing L markers (L being an arbitrary integer) around the brightness point marker 121, but is not limited thereto; nor is it excluded that the predetermined range covers the entire inner surface of the sheet 110. (The predetermined range is indicated by the dotted lines in FIGS. 5 and 6.)

When the brightness point marker 121 changes position, the size or shape of the brightness auxiliary markers 122 located within the predetermined range changes, and the corresponding data may be stored in advance in the analysis unit 400 or learned by deep learning and stored in the analysis unit 400.

The analysis unit 400 may then determine the three-dimensional pattern by analyzing not only the image of the change in position or change in brightness of the brightness point marker 121, but also the images of the changes in size or shape of each of the plurality of brightness auxiliary markers 122.

Specifically, as shown in FIGS. 5 and 6, when the brightness point marker 121 undergoes a horizontal displacement, the size or shape of the brightness auxiliary markers 122 within the predetermined range of the 3×3 array changes, and the three-dimensional pattern may be determined by jointly evaluating the movement of the brightness point marker 121 and the changes in size or shape of the brightness auxiliary markers 122.

The two-dimensional horizontal displacement of the hand can be measured from the change in position of the brightness point marker 121 described above, and the vertical displacement caused by the pressing of the hand can be measured from the change in brightness of the brightness point marker 121; consequently, the three-dimensional pattern of the user input made by the hand can be determined.

The determination of the horizontal displacement and the vertical displacement will now be described in detail. (The description below is based on the brightness point marker 121.)

As shown in FIG. 3, when the user performs an initial press on the outer surface of the sheet 110, one of the markers 120 at the pressed portion of the inner surface of the sheet 110 moves closer to the imaging unit 200 and its brightness increases. The marker 120 whose brightness has increased is recognized as the brightness point marker 121, and the first point P1 may be set as the start coordinate.

As shown in FIG. 4, when the user presses the outer surface of the sheet 110 with a force stronger than the initial press, the brightness point marker 121 may be recognized as having further increased in brightness, and the vertical displacement caused by the user input can be measured from this change in brightness of the brightness point marker 121. That is, it can be measured by recognizing that the three-dimensional coordinate changes from the first point P1 to the second point P2.

When the brightness decreases rather than increases, that is, when the brightness point marker 121 moves away from the imaging unit 200 and its brightness falls, a vertical displacement may be formed in the direction opposite to that of FIG. 4.
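The disclosure does not specify the brightness-to-depth relationship, so the following sketch assumes a simple calibrated falloff model in which brightness decreases as the marker recedes from the camera; b0, z0, and gamma are hypothetical calibration constants.

    def vertical_displacement(brightness, b0, z0, gamma=2.0):
        """Depth from brightness under an assumed falloff model
        brightness = b0 * (z0 / z) ** gamma, calibrated at rest.
        Returns a positive value when the marker has moved toward
        the camera (a press) and a negative value when it has
        moved away from it."""
        z = z0 * (b0 / max(brightness, 1e-6)) ** (1.0 / gamma)
        return z0 - z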

As shown in FIG. 5, when the user performs a moving motion in one direction on the outer surface of the sheet 110, the marker 120 having the greatest brightness may change, and by this change of marker 120 the brightness point marker 121 may be recognized as moving from the second point P2 to the third point P3.

Likewise, as shown in FIG. 6, when the user performs a moving motion in another direction on the outer surface of the sheet 110, the marker 120 having the greatest brightness may change, and by this change of marker 120 the brightness point marker 121 may be recognized as moving from the third point P3 to the fourth point P4.

A three-dimensional pattern as shown in FIG. 7 may then be formed by the combination of the horizontal and vertical displacements described above.

The three-dimensional touch recognition device using deep learning of the present invention can detect not only a three-dimensional pattern but also a force in the vertical or horizontal direction.

As shown in FIG. 4, a pressing motion produces a vertical displacement, and the force in the vertical direction caused by the user input can be measured by multiplying this vertical displacement by the elastic modulus of the sheet 110.

Accordingly, to detect the force in the vertical direction, the sheet 110 may be manufactured so that its elastic modulus is the same at every point of the sheet 110.

As shown in FIG. 5, a moving motion produces a horizontal displacement; here, the horizontal force can be measured as the force corresponding to the frictional force calculated as the product of the normal force N applied to the sheet 110 by the pressing of the user's hand and the friction coefficient μ of the sheet 110.

Here, the friction coefficient of the sheet 110 can be determined by referring to a friction coefficient reference data table stored in the analysis unit 400.

As shown in FIG. 5, because the horizontal displacement is produced by moving the hand (the user input) while a vertical displacement exists due to the press on the sheet 110, the friction coefficient of the sheet 110 may differ from the surface friction coefficient that is a property of the surface of the sheet 110.

The friction coefficient of the sheet 110 used in the horizontal force detection method of the present invention is determined in consideration of the state in which the shape of the sheet 110 is bent by the pressing motion, and may be determined from data obtained through mechanical experiments on the outer surface of the sheet 110 after the input unit 100 is formed.

The force in the horizontal direction can be calculated as the product of the friction coefficient of the sheet 110 determined by the above process and the vertical force, which is the normal force.
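A minimal sketch of the two force computations just described, assuming a uniform effective spring constant k for the sheet and a hypothetical friction-coefficient reference table keyed by press depth, as would be obtained from the mechanical experiments mentioned above.

    def lookup_mu(mu_table, dz):
        """Nearest-neighbour lookup in the friction coefficient
        reference table (hypothetical: keyed by calibrated press
        depth in metres)."""
        nearest = min(mu_table, key=lambda depth: abs(depth - dz))
        return mu_table[nearest]

    def forces(dz, k, mu_table):
        """Vertical force N = k * dz (uniform elastic modulus
        assumed), horizontal force F = mu * N."""
        n = k * dz
        return n, lookup_mu(mu_table, dz) * n

For example, forces(0.003, 1.2e4, {0.001: 0.9, 0.003: 1.1, 0.005: 1.3}) would return the normal force and the corresponding horizontal force for a 3 mm press, with all of these values standing in for real calibration data.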

The deep learning algorithm may be any one of a deep neural network, a convolutional neural network, or a recurrent neural network.

The deep learning algorithm used in the three-dimensional touch recognition device using deep learning of the present invention may be a known technique.

Although the embodiments of the present invention describe the neural networks above as the deep learning algorithm, the invention is not necessarily limited thereto.
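As one way such a convolutional network could be realized, the following PyTorch sketch classifies the three-dimensional pattern images of FIG. 7, assumed here to be rendered as 64×64 grayscale images; the layer sizes and the number of pattern classes are placeholders, not taken from the disclosure.

    import torch
    import torch.nn as nn

    class PatternCNN(nn.Module):
        """Illustrative convolutional classifier for rendered
        3-D pattern images of shape (batch, 1, 64, 64)."""
        def __init__(self, num_patterns=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_patterns)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))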

As described above, each three-dimensional pattern can be expressed as a change in three-dimensional coordinates, and, as shown in FIG. 7, each three-dimensional pattern can be expressed and stored as a three-dimensional image.

The analysis unit 400 may perform learning on each three-dimensional pattern using the deep learning algorithm, and may analyze and determine the three-dimensional pattern of the user input on the basis of the learned data to derive the deep learning result value.

Specifically, even when the same user performs user inputs intended to produce the same three-dimensional pattern, the positions of the coordinates forming the pattern may differ. The analysis unit 400 analyzes and determines the three-dimensional pattern input by the user on the basis of the learned data, and can thus recognize the three-dimensional pattern the user intended. Accordingly, even if a three-dimensional pattern that the user regards as identical differs in its three-dimensional coordinates when entered through the input unit 100, the analysis unit 400 can determine that the pattern produced by the user input is the same as a previously stored three-dimensional pattern and output the corresponding deep learning result value.
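Continuing the sketch above, the iterative learning could take the form of an ordinary supervised training loop over repeated demonstrations of each pattern; train_loader is an assumed DataLoader yielding batches of rendered pattern images and their labels.

    import torch
    import torch.nn as nn

    model = PatternCNN(num_patterns=10)          # sketch above
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):                      # iterative learning
        for images, labels in train_loader:      # assumed DataLoader of
            optimizer.zero_grad()                # repeated demonstrations
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

Training on many slightly different demonstrations of the same pattern is what lets the classifier tolerate the coordinate variations described above.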

The three-dimensional touch recognition device using deep learning of the present invention may further include a start notification unit that performs a visual notification when the start input, i.e., the first of the user inputs, acts on the outer surface of the sheet 110.

When the first user input acts on the outer surface of the sheet 110 as described above, the formation of a three-dimensional pattern can begin, and the user may need a way to confirm that the first user input is being recognized.

Accordingly, when the start input, i.e., the first user input, acts on the outer surface of the sheet 110, the start notification unit emits light so that the user can confirm that the formation of the three-dimensional pattern has started.

Although the embodiments of the present invention describe the start notification unit as emitting light to confirm the formation of the three-dimensional pattern, the invention is not necessarily limited thereto, and sound or vibration may be used to allow the user to confirm that the formation of the three-dimensional pattern has started.

The three-dimensional touch recognition device using deep learning of the present invention may further include a control unit 500 that receives the deep learning result value from the analysis unit 400 and transmits a control signal corresponding to the deep learning result value to external equipment.

Each three-dimensional pattern can be matched to a specific command. Specifically, if the formation of the three-dimensional pattern of FIG. 7 is matched to starting the operation of external equipment, the deep learning result value is output to the control unit 500 from the analysis unit 400, which has analyzed and determined the input of the three-dimensional pattern of FIG. 7, and the control unit 500 transmits the control signal matched to the deep learning result value to the external equipment so that the operation of the external equipment is started.
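A sketch of such a matching, with a hypothetical table from deep learning result values to command strings and a stand-in transmitter function; the actual transport to the external equipment is not specified in the disclosure.

    COMMANDS = {                 # hypothetical result-to-command table
        0: "START_EXTERNAL_EQUIPMENT",
        1: "STOP_EXTERNAL_EQUIPMENT",
    }

    def send_to_external_equipment(command: str) -> None:
        print(f"-> external equipment: {command}")   # stand-in transmitter

    def control_unit(deep_learning_result: int) -> None:
        """Forward the control signal matched to the deep learning
        result value; unknown values are ignored."""
        command = COMMANDS.get(deep_learning_result)
        if command is not None:
            send_to_external_equipment(command)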

A computer equipped with the three-dimensional touch recognition device using deep learning of the present invention can be manufactured.

The deep learning result value for the three-dimensional pattern processed by the analysis unit 400 is transmitted to the computer via the control unit 500, and the command corresponding to that three-dimensional pattern is transmitted to the computer so that the computer can execute the command.

A robot equipped with the three-dimensional touch recognition device using deep learning of the present invention can be manufactured.

As a specific embodiment, the end effector of a robot that performs work can move in response to the three-dimensional pattern, so that the three-dimensional touch recognition device using deep learning of the present invention can function as a control device for operating the robot.

Hereinafter, a three-dimensional touch recognition method using the three-dimensional touch recognition device using deep learning of the present invention will be described.

In the first step, a user input acts on the sheet 110.

In the second step, the imaging unit 200 photographs the inner surface of the sheet 110 once per unit time and transmits the captured marker images to the analysis unit 400.

Here, the unit time may be in milliseconds (ms), and, of course, photographing may be performed at intervals smaller than a millisecond to improve precision.

In the third step, the analysis unit 400 analyzes the marker images and generates a three-dimensional pattern from the change in position or change in brightness of the markers 120.

Here, the analysis unit 400 may determine the three-dimensional pattern from the change in position of the size point marker 121, i.e., the marker recognized by the imaging unit 200 as the largest among the plurality of markers 120. Alternatively, the analysis unit 400 may determine the three-dimensional pattern from the change in position of the brightness point marker 121, i.e., the marker recognized by the imaging unit 200 as the brightest among the plurality of markers 120.

Also in the third step, the horizontal component of the three-dimensional displacement of the marker 120 can be measured from the change in position of the marker 120, and the vertical component can be measured from the change in brightness of the marker 120.

In the fourth step, the data of the three-dimensional pattern is input to the deep learning algorithm and the deep learning result value is output.
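Putting the four steps together, a minimal end-to-end sketch might look as follows; camera and render_as_image are assumed interfaces, and find_point_marker, vertical_displacement, and the model are the sketches given earlier in this text.

    import time

    def recognition_loop(camera, model, b0, z0, frame_period_s=0.001):
        """Steps (ii)-(iv): photograph the inner surface of the sheet
        once per unit time, accumulate the point-marker trajectory as
        a 3-D pattern, and classify it once the input is released."""
        trajectory = []
        while True:
            frame = camera.read()                    # assumed: grayscale frame
            marker = find_point_marker(frame)        # sketch given earlier
            if marker is None and trajectory:        # user input released
                image = render_as_image(trajectory)  # hypothetical helper
                return model(image).argmax(dim=1).item()
            if marker is not None:
                x, y, _, brightness = marker
                z = vertical_displacement(brightness, b0, z0)
                trajectory.append((x, y, z))
            time.sleep(frame_period_s)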

The foregoing description of the present invention is intended for illustration, and those of ordinary skill in the art to which the present invention pertains will understand that it can easily be modified into other specific forms without changing the technical spirit or essential features of the present invention. Therefore, it should be understood that the embodiments described above are illustrative in all respects and not restrictive. For example, each component described as being of a single type may be implemented in a distributed manner, and likewise components described as distributed may be implemented in combined form.

The scope of the present invention is indicated by the claims below, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

Claims (16)

1. A three-dimensional touch recognition device using deep learning, comprising: an input unit comprising a sheet having a function of being restored to an initial shape when a user input acting on its outer surface is removed; a plurality of markers arranged along an inner surface of the sheet; an imaging unit that collects marker images, i.e., images of the markers that change in response to the user input; an illumination unit that irradiates light toward the inner surface of the sheet; and an analysis unit that analyzes the marker images to generate a three-dimensional pattern and outputs a deep learning result value through an iterative operation of inputting data of the three-dimensional pattern into a deep learning algorithm.

2. The device of claim 1, wherein the deep learning algorithm is any one of a deep neural network, a convolutional neural network, or a recurrent neural network.

3. The device of claim 1, wherein the analysis unit determines the three-dimensional pattern from a change in position or a change in brightness of a size point marker, i.e., the marker recognized by the imaging unit as the largest among the plurality of markers.

4. The device of claim 3, wherein the analysis unit determines the three-dimensional pattern from the change in position or change in brightness of the size point marker, and from a change in size or shape of size auxiliary markers located within a predetermined range around the size point marker.

5. The device of claim 1, wherein the analysis unit determines the three-dimensional pattern from a change in position or a change in brightness of a brightness point marker, i.e., the marker recognized by the imaging unit as the brightest among the plurality of markers.

6. The device of claim 5, wherein the analysis unit determines the three-dimensional pattern from the change in position or change in brightness of the brightness point marker, and from a change in size or shape of brightness auxiliary markers located within a predetermined range around the brightness point marker.

7. The device of claim 1, wherein the markers are formed in a circular shape.
8. The device of claim 1, further comprising a start notification unit that performs a visual notification when a start input, i.e., the first of the user inputs, acts on the outer surface of the sheet.

9. The device of claim 1, further comprising a control unit that receives the deep learning result value from the analysis unit and transmits a control signal corresponding to the deep learning result value to external equipment.

10. A computer equipped with the three-dimensional touch recognition device using deep learning according to any one of claims 1 to 8.

11. A robot equipped with the three-dimensional touch recognition device using deep learning according to any one of claims 1 to 8.

12. A three-dimensional touch recognition method using the three-dimensional touch recognition device using deep learning of claim 1, comprising: (i) a step in which the user input acts on the sheet; (ii) a step in which the imaging unit photographs the inner surface of the sheet once per unit time and transmits the captured marker images to the analysis unit; (iii) a step in which the analysis unit analyzes the marker images and generates a three-dimensional pattern from a change in position or a change in brightness of the markers; and (iv) a step in which data of the three-dimensional pattern is input to a deep learning algorithm and a deep learning result value is output.

13. The method of claim 12, wherein in step (iii) the analysis unit determines the three-dimensional pattern from a change in position of a size point marker, i.e., the marker recognized by the imaging unit as the largest among the plurality of markers.

14. The method of claim 12, wherein in step (iii) the analysis unit determines the three-dimensional pattern from a change in position of a brightness point marker, i.e., the marker recognized by the imaging unit as the brightest among the plurality of markers.

15. The method of claim 12, wherein in step (iii) a horizontal displacement among the three-dimensional displacements of the marker is measured from the change in position of the marker.

16. The method of claim 12, wherein in step (iii) a vertical displacement among the three-dimensional displacements of the marker is measured from the change in brightness of the marker.
PCT/KR2017/011272 2017-04-20 2017-10-12 Three-dimensional touch recognition device using deep learning and three-dimensional touch recognition method using same Ceased WO2018194227A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0051174 2017-04-20
KR1020170051174A KR101921140B1 (en) 2017-04-20 2017-04-20 3 dimensional touch sensing apparatus usnig deep learning and method for sensing 3 dimensional touch usnig the same

Publications (1)

Publication Number Publication Date
WO2018194227A1 true

Family

ID=63855937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/011272 Ceased WO2018194227A1 (en) 2017-04-20 2017-10-12 Three-dimensional touch recognition device using deep learning and three-dimensional touch recognition method using same

Country Status (2)

Country Link
KR (1) KR101921140B1 (en)
WO (1) WO2018194227A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102163143B1 (en) * 2019-01-07 2020-10-08 한림대학교 산학협력단 Apparatus and method of correcting touch sensor input
KR102268003B1 (en) 2019-12-11 2021-06-21 한림대학교 산학협력단 Surface recognizing method using deep learning based on heterogeneous multivariate multiple modal data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080044690A (en) * 2006-11-17 2008-05-21 실리콤텍(주) Input device using imaging sensor and method
JP2009087264A (en) * 2007-10-02 2009-04-23 Alps Electric Co Ltd Hollow type switching device and electronic device with the same
KR20110084028A (en) * 2010-01-15 2011-07-21 삼성전자주식회사 Distance measuring device and method using image data
KR20120060548A (en) * 2010-12-02 2012-06-12 전자부품연구원 System for 3D based marker
KR101396203B1 (en) * 2013-03-13 2014-05-19 한국생산기술연구원 Apparatus and method for sensing operation of aircusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796708A (en) * 2020-06-02 2020-10-20 南京信息工程大学 A method for reproducing three-dimensional shape features of images on a touch screen
CN111796708B (en) * 2020-06-02 2023-05-26 南京信息工程大学 Method for reproducing three-dimensional shape features of image on touch screen

Also Published As

Publication number Publication date
KR20180117952A (en) 2018-10-30
KR101921140B1 (en) 2018-11-22

Similar Documents

Publication Publication Date Title
Ward-Cherrier et al. Tactile manipulation with a TacThumb integrated on the open-hand M2 gripper
WO2018128355A1 (en) Robot and electronic device for performing hand-eye calibration
WO2017111464A1 (en) X-ray imaging apparatus, control method for the same, and x-ray detector
WO2018194227A1 (en) Three-dimensional touch recognition device using deep learning and three-dimensional touch recognition method using same
WO2014142596A1 (en) Device for sensing operation of air cushion and method therefor
WO2020218644A1 (en) Method and robot for redefining location of robot by using artificial intelligence
WO2017034283A1 (en) Apparatus for generating tactile sensation
WO2017082496A1 (en) Wafer alignment method and alignment equipment using same
WO2023068440A1 (en) Robot hand system and method for controlling robot hand
WO2018080112A1 (en) Input device and display device including the same
WO2017034321A1 (en) Technique for supporting photography in device having camera, and device therefor
WO2018203590A1 (en) Contact position and depth measurement algorithm for three-dimensional touch recognition
WO2016013832A1 (en) Touch screen device and display device using three-dimensional position information
WO2021045481A1 (en) Object recognition system and method
WO2020235784A1 (en) Nerve detection method and device
WO2023163305A1 (en) Deep learning-based gait pattern detection method and computer program performing same
WO2022045497A1 (en) User authentication device and control method therefor
WO2023121355A1 (en) Optical tactile sensor
WO2021095903A1 (en) User authentication device for performing user authentication by using vein, and method therefor
WO2017179786A1 (en) Three-dimensional input device, method and system using motion recognition sensor
WO2023003157A1 (en) Electronic device and fingerprint information acquisition method of electronic device
WO2014168416A1 (en) Non-contact operation device and electronic device linked with same
WO2021020883A1 (en) Three-dimensional scanning device and method
WO2025206766A1 (en) Holding device
WO2025206680A1 (en) Posture control apparatus and method for rehabilitation exercise robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17906574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17906574

Country of ref document: EP

Kind code of ref document: A1