
WO2008066359A1 - Language learning contents providing system using image parts - Google Patents


Info

Publication number
WO2008066359A1
Authority
WO
WIPO (PCT)
Prior art keywords
image parts
providing system
sentence
learning content
content providing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2007/006183
Other languages
French (fr)
Inventor
Jae Bong Choi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to JP2009539192A priority Critical patent/JP5553609B2/en
Publication of WO2008066359A1 publication Critical patent/WO2008066359A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services

Definitions

  • the present invention relates to a language learning content providing system using image parts, and, more particularly, to a language learning content providing system using image parts, which displays a plurality of image parts for representing a target learning sentence on a user device in conformity with the development of the story of the target learning sentence, and also displays object-oriented diagrams for helping to understand the correlation between respective image parts on the user device, thereby enabling a learner to easily understand the correlation and structure of respective elements constituting English.
  • the present invention uses one characteristic of English in which an English sentence has a structure in which the sentence sequentially expands temporally and spatially from a subject.
  • English has a structure in which respective words and paragraphs are arranged through thoroughly logical steps in sequence from a physically close location to a physically remote location on the basis of the subject of a sentence, that is, a subject that is a basis.
  • Postpositional words have developed in Korean and Japanese. Since a postpositional word makes the position and use of the corresponding word clear, there is no significant problem with the transmission of the meaning of a sentence even when its word sequence is changed. However, since English does not have postpositional words, the meaning to be transmitted is distorted or becomes ambiguous when the word sequence is changed. Accordingly, in English, the positions and sequence of words play decisively important roles.
  • Korean Patent Application No. 10-2001-0005314 relates to an English lecturing method.
  • This patent discloses that the meanings of sounds constituting respective English words are naturally understood by finding out a method of including meanings in the sounds of each English word, mentions the sequence of solving the word sequence of English in pictorial symbols, and presents a method of solving the problem of the word sequence "a subject + a verb" among the word sequence problems of English using hieroglyphic characters.
  • Korean Patent Application No. 10-2001-0052657 discloses an English learning method using picture making.
  • Korean Patent Application No. 10-2001-0052657 discloses an English learning method of allowing a learner to draw a picture corresponding to English desired to be expressed and comparing the sequence of the English, desired to be expressed by the learner, based on the sequence of drawing of the figure with the sequence of expression based on a normal sequence.
  • figures merely illustrate a sentence or the unit elements of the sentence, but do not transmit information about the flow, sequence and connection relationship of the overall sentence corresponding to the figures, and are not particularly helpful in improving hearing, speaking, reading and writing English in the sequence of expression of English, like native English speakers, using the figures.
  • an object of the present invention is to provide a language learning content providing system using image parts, which helps to understand a language in the sequence of native English speakers having a word sequence, which is affected or influenced by the behavior of a subject on the basis of the subject of English.
  • Another object of the present invention is to provide a language learning content providing system using image parts, which sequentially expresses a plurality of image parts on a user device in conformity with the sequence of the development of the story of each target learning sentence and helps a learner easily understand the structure of an English sentence.
  • a further object of the present invention is to provide a language learning content providing system using image parts, which represents a target learning sentence using a plurality of image parts that are sequentially played on a user device, thereby helping a learner overcome a habit of reinterpreting an English sentence after fully reading it.
  • Yet another object of the present invention is to present a language learning content providing system using image parts, which helps to easily generate content for language learning using neighboring illustrations, photographs and moving images.
  • a language learning content providing system using image parts wherein the system is implemented through a server that provides learning content, including target learning sentences and a plurality of image parts for describing content of each of the target learning sentences, to a learner's user device; the server is configured to sequentially provide the plurality of image parts to the user device in conformity with development of the content of the target learning sentence, so that the plurality of image parts is displayed on the user device; and the respective image parts, displayed in conformity with the development of the content of the target learning sentence, are displayed in synchronization with object-oriented diagrams that describe correlations between the image parts.
  • the object-oriented diagrams are synchronized with the target learning sentence by the server.
  • the server may be provided with voice data for the target learning sentence, and the voice data may be synchronized with any of the image parts and the object-oriented diagrams.
  • the server may be provided with a plurality of image files for the image parts, and, when the user device selects any one of the plurality of image files, the selected image file may be synchronized with the target learning sentence.
  • the learning content includes the target learning sentence, and further includes any of image parts, object-oriented diagrams and voice data.
  • the target learning sentence may be English.
  • Each of the object-oriented diagrams may be any one of a figure, an arrow, a straight line and a curve.
  • the target learning sentence is divided into any of words, phrases, and clauses, and the resulting elements are respectively synchronized with the object-oriented diagrams by the server.
  • the server may process a word, a phrase or a clause, synchronized with each of the object-oriented diagrams, so that any one of a font, a letter size, a letter thickness, a letter incline, an underline, shading and a letter color thereof is different from that of one or more adjacent words, phrases or clauses.
  • the image parts are sequentially provided and displayed on the user device, and are cumulatively displayed.
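The arrangement claimed above, in which each element of the target learning sentence is synchronized with an image part and, optionally, an object-oriented diagram, and image parts are displayed sequentially and cumulatively, can be sketched as a small data model. This is an illustrative reconstruction; the class and field names are assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Element:
    """One unit of the target learning sentence (a word, phrase, or clause)."""
    text: str
    image_part: Optional[str] = None  # image part synchronized with this element
    diagram: Optional[str] = None     # object-oriented diagram (figure, arrow, line, curve)

@dataclass
class LearningContent:
    sentence: str
    elements: List[Element] = field(default_factory=list)

    def play(self):
        """Yield (element text, cumulative image parts, diagram) in story
        order, mirroring the sequential and cumulative display on the
        user device."""
        shown: List[str] = []
        for el in self.elements:
            if el.image_part:
                shown.append(el.image_part)
            yield el.text, list(shown), el.diagram

content = LearningContent(
    "I swim in the lake",
    [Element("I", image_part="subject.png"),
     Element("swim", image_part="swim.png"),
     Element("in", diagram="in_arrow"),
     Element("the lake", image_part="lake.png")],
)
frames = list(content.play())
```

Playing the content yields one frame per sentence element; earlier image parts remain on screen, matching the cumulative display described in the claims.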
  • a language learning content providing system using image parts including a database provided with target learning sentences, a plurality of image parts for describing each of the target learning sentences, and object-oriented diagrams for describing correlations between the image parts; and a diagram combination module for creating learning content by synchronizing the image parts, which are provided to and displayed on a learner's user device in a sequence of development of the target learning sentence, with the object-oriented diagrams and providing the created learning content to the user device.
  • the language learning content providing system using image parts further includes a highlighting module for highlighting elements of the target learning sentence, synchronized with the object-oriented diagrams, when the object-oriented diagrams are displayed on the user device in synchronization with the target learning sentence, and each of the elements of the target learning sentence is any one of a word, a phrase and a clause.
  • the highlighting module changes any one of a color and size of a word constituting each of the elements.
  • the highlighting module may increase a size of a word constituting each element compared to that of adjacent words.
  • the database is provided with voice data for the target learning sentence, and the diagram combination module synchronizes any of the image parts and the object-oriented diagrams with the voice data.
  • the database may be provided with a plurality of image files for the image parts, and any one of the plurality of image files for the image parts may be selected by a learner's user device and displayed on the user device.
  • the target learning sentence is English.
  • each of the object-oriented diagrams is any one of a figure, an arrow, a straight line and a curve.
  • the image parts are sequentially provided and displayed on the user device, and are cumulatively displayed.
  • a language learning content providing system using image parts including a diagram combination module for creating learning content by synchronizing each target learning sentence, a plurality of image parts for describing the target learning sentence, and an object-oriented diagram for describing correlations between the image parts; and an image processing module for acquiring information about a learner's portable device from a mobile communication company and converting images into images that are reproducible in the portable device with reference to the information.
  • the image processing module accesses a mobile communication company server for serving the portable device, and includes a phone information acquisition module for acquiring model information and resolution information of the portable device from the mobile communication company server.
  • the image processing module may change sizes of the image parts, the target learning sentence and the object-oriented diagrams in conformity with the resolution information.
  • the image processing module may change colors of the image parts, the target learning sentence and the object-oriented diagrams to colors that are reproducible in the portable device in conformity with the resolution information.
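As a rough sketch of the conversion step described above, an image's dimensions could be scaled to the resolution reported by the mobile communication company server. The function below is illustrative only, not the patent's implementation:

```python
def fit_to_device(width, height, device_width, device_height):
    """Scale image dimensions to fit the portable device's resolution
    while preserving aspect ratio; never upscale beyond the original."""
    scale = min(device_width / width, device_height / height, 1.0)
    return round(width * scale), round(height * scale)

# e.g. an 800x600 learning-content frame shown on a 240x320 handset
resized = fit_to_device(800, 600, 240, 320)
```

The same scale factor would be applied to the target learning sentence and object-oriented diagrams so that all three stay aligned on the smaller screen.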
  • Each of the object-oriented diagrams may be any one of a figure, an arrow, a straight line and a curve.
  • the target learning sentence may be divided into any of words, phrases, and clauses, and the resulting elements may be respectively synchronized with the object-oriented diagrams by the diagram combination module.
  • the language learning content providing system using image parts further includes a highlighting module for highlighting elements of the target learning sentence, synchronized with the object-oriented diagrams, when the object-oriented diagrams are displayed on the user device in synchronization with the target learning sentence, and each of the elements of the target learning sentence is any of a word, a phrase and a clause.
  • the highlighting module may change any one of a color and a size of a word constituting each element.
  • the highlighting module may increase a size of a word constituting each element compared to that of adjacent words.
  • the diagram combination module may combine voice data for the target learning sentence with the learning content.
  • a language learning content providing system using image parts combines images with terms and displays the terms corresponding to descriptions of the situations of images on a user device in real time, thereby helping a learner overcome the habit of interpreting an English sentence in a reverse sequence from the back to the front of the sentence after fully reading it.
  • a learner can recognize how the elements of a sentence and actual objects (images, screens, entities shown to the eyes, and entities occurring in the learner's mind) correspond to one another, and thus the user can associate the recognition of actual objects, the flow of perceptions and the elements of the sentence with one another. That is, when the elements of the sentence appear sequentially, corresponding actual objects are brought to the learner's mind sequentially and are directly understood. Accordingly, the learner can speak or express the sentence correspondingly to the sequence in which actual objects are recognized by the eyes or in the head.
  • the learner can learn the method by which native English speakers understand and use their language, and the problem in which English is interpreted, written, spoken and heard reversely from the back to the front thereof, which is the detrimental defect of Korean-style English learning, can be overcome.
  • the present invention combines images (moving images, or still images) with object-oriented diagrams and helps a learner understand the sequential flow of a sentence as the object-oriented diagrams are sequentially displayed.
  • the language learning content providing system using image parts creates learning content by dividing an image frame into a plurality of image frames and including an object-oriented diagram in each of the image frames, so that content for language learning can be easily created using neighboring illustrations, photographs and moving images.
  • the present invention can also provide learning content to portable devices and allows the same above-described effects to be provided even through the portable devices.
  • FIG. 1 is a conceptual block diagram of a language learning content providing system using image parts according to an embodiment of the present invention
  • FIGS. 2 to 7 are views illustrating how a target learning sentence, image parts, and an object-oriented diagram are displayed on the desktop of a user device;
  • FIG. 8 is a view illustrating an example of an object-oriented diagram that is applicable to the preposition "in";
  • FIG. 9 is a conceptual diagram of a language learning content providing system using image parts according to another embodiment of the present invention.
  • FIG. 10 is a conceptual block diagram of still another embodiment of the present invention.
  • FIG. 11 is a diagram illustrating an example of an interface screen that is provided from an interface module to user devices; and FIG. 12 is a diagram illustrating an example in which an image selected by a learner is applied in actual learning content.
  • FIG. 1 is a conceptual block diagram of a language learning content providing system using image parts according to an embodiment of the present invention.
  • the illustrated language learning content providing system 300 using image parts includes a diagram combination module 320, a database 310, a highlighting module 330, a voice combination module 340, and a user interface module 350.
  • the database 310 contains image parts, object-oriented diagrams, voice fonts, and target learning sentences. When learning content, in which images, object-oriented diagrams, voice fonts and phrases are combined together, is generated, the database 310 can store generated learning content.
  • the diagram combination module 320 forms one piece of learning content by synchronizing and combining target learning sentences, image parts, and object-oriented diagrams, which are stored in the database 310.
  • the 'object-oriented diagram' refers to a diagram for describing the correlation between respective image parts.
  • An object-oriented diagram is shown between two (or more) image parts, and can have the form of a figure, an arrow, a straight line or a curve.
  • When an object-oriented diagram is used to describe the meaning of a preposition, the object-oriented diagram may not be positioned between two image parts. This is described later with reference to FIG. 5.
  • the diagram combination module 320 may use any of target learning sentences, image parts, voice data and object-oriented diagrams as a reference, and may synchronize the rest with the reference.
  • the diagram combination module 320 may synchronize an image part or an object-oriented diagram with each of the words (phrases, or clauses) constituting each target learning sentence.
  • the diagram combination module 320 may synchronize elements (for example, words, phrases or clauses) constituting the target learning sentence when the respective image parts are played in one of user devices 10a to 10n.
  • the highlighting module 330 may highlight an element (for example, any of a word, a phrase or a clause) that is synchronized with an image part that is played when the respective image parts are played in one of the user devices 10a to 10n.
  • If voice data or an object-oriented diagram synchronized with each image part exists, the corresponding voice data or object-oriented diagram is played (or displayed) along with the image part.
  • Learning content generated in the diagram combination module 320 is played in one of the user devices 10a to 10n.
  • the played learning content is represented as image parts for describing a target learning sentence and an object-oriented diagram for describing the correlation between image parts.
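The playback behavior described above, in which voice data and an object-oriented diagram fire together with their image part, amounts to merging several synchronized tracks into one schedule. A minimal sketch follows; the channel names and timestamps are illustrative assumptions:

```python
def synchronized_events(tracks):
    """Merge per-channel lists of (time, payload) events into one playback
    schedule, so that an image part, its object-oriented diagram, and its
    voice clip sharing a timestamp are emitted together."""
    merged = {}
    for channel, events in tracks.items():
        for t, payload in events:
            merged.setdefault(t, {})[channel] = payload
    return sorted(merged.items())

tracks = {
    "image":   [(0, "subject.png"), (2, "lake.png")],
    "voice":   [(0, "i.mp3")],
    "diagram": [(2, "in_arrow")],
}
schedule = synchronized_events(tracks)
```

Each entry of the schedule bundles everything the user device should render at that moment, regardless of which track it came from.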
  • FIG. 2 shows a start screen for learning content which is provided to one of the user devices 10a to 10n by the language learning content providing system 300 using image parts according to the present invention.
  • a target learning sentence (I swim in the lake all morning until my grandmother calls me for lunch) to be learned in learning content is placed at the bottom of the drawing.
  • when a learner clicks on a start button, the image parts shown in FIGS. 3 to 7 are played sequentially.
  • FIGS. 3 to 7 show examples in which learning content provided to one of the learners' user devices 10a to 10n is played on that device in the language learning content providing system 300 using image parts.
  • FIG. 3 shows an example in which an object-oriented diagram 400a is displayed on a desktop when the preposition "in" 310a is highlighted in the target learning sentence ("I swim in the lake all morning until my grandmother calls me for lunch").
  • the object-oriented diagram is displayed in the form of a figure, an arrow, a straight line, or a curve, and describes the correlation between the image parts.
  • the object-oriented diagram can use an arrow in the image part corresponding to the subject (I) in order to indicate the direction in which the subject (I) is oriented.
  • respective words constituting the target learning sentence may be highlighted sequentially or may be played in the form of voice.
  • words corresponding to voice that is being played may be also highlighted at the same time that the voice is played. This applies throughout the entire detailed description of the present invention.
  • when the preposition "in" 310a is highlighted, the preposition "in" 310a is differentiated from adjacent words.
  • the process of highlighting a word may be conducted using any one of: 1) a method of setting the size or thickness of a word (for example, the preposition "in") to a value greater than that of adjacent words (for example, "swim" and "the"),
  • 2) a method of differentiating a word (for example, the preposition "in") from adjacent words (for example, "swim" and "the") by providing a shading effect to the word or underlining the word.
  • a learner is informed that the word to which attention should be paid is the preposition "in" by differentiating the size of the word from that of adjacent words.
  • a word (here, the preposition "in") corresponding to the object-oriented diagram 400a is displayed along with the object-oriented diagram 400a (reference numeral 500a), and thus a learner can easily understand the association between the word ("in") to which attention should be paid and the object-oriented diagram corresponding to the word.
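The two highlighting methods just listed (enlarging or thickening a word, or shading or underlining it) could be realized as a single styling step. The markup tags below are illustrative placeholders, not from the patent:

```python
def highlight(words, focus, style="size"):
    """Emphasize the focus word using one of the methods described above:
    "size" enlarges it, "shade" shades it, "underline" underlines it."""
    tags = {"size": ("<big>", "</big>"),
            "shade": ("<mark>", "</mark>"),
            "underline": ("<u>", "</u>")}
    open_tag, close_tag = tags[style]
    return " ".join(open_tag + w + close_tag if w == focus else w
                    for w in words)
```

For example, applying it to the sample sentence marks "in" so it stands out from the adjacent words "swim" and "the".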
  • FIG. 4 shows an example in which an image part is displayed on the desktop when "all morning” of the elements of the sentence is highlighted.
  • the 'image part' refers to a portion that is extracted from the entire image (the start screen) shown in FIG. 2 in order to represent an element of the target learning sentence.
  • Reference numerals 200a and 200d are portions extracted from the entire image (start screen) shown in FIG. 2, and they are displayed on the desktop in synchronization with the words of the sentence (I swim in the lake all morning until my grandmother calls me for lunch) that are highlighted.
  • a learner associates a highlighted term (here, "all morning") with the image part 200d corresponding to the highlighted term.
  • FIG. 5 shows an example in which an object-oriented diagram is displayed on the desktop when the preposition "until" in the sentence is highlighted.
  • the highlighted preposition "until" has the meaning of "continuance to a specified time" within the sentence (I swim in the lake all morning until my grandmother calls me for lunch).
  • An object-oriented diagram 400b for the preposition "until” is constructed using an arrow and a straight line so that a learner can perceive a limit visually.
  • the present invention makes a learner learn to recognize only that the preposition "until" represents a temporal limit. That is, the learner can view, hear and understand information in the sequence in which native speakers understand the information.
  • the object-oriented diagram 400b helps a learner understand by displaying the corresponding preposition "until" along with the object-oriented diagram 400b (reference numeral 500b), in the same manner as the object-oriented diagram 400a shown in FIG. 3.
  • FIG. 6 shows an example in which image parts are displayed on the desktop when "calls me” in the sentence is highlighted.
  • the image parts 200f and 200g are displayed on the desktop sequentially (or at the same time). It is preferred that the image parts have an appropriate configuration in which a grandmother 200e calls a subject 200g.
  • the image part corresponding to "calls me" of the sentence represents a form in which the grandmother 200f calls someone, and an object-oriented diagram 600c represents that the person called by the grandmother 200f is the subject 200g.
  • FIG. 7 shows an example in which an image part and an object-oriented diagram are displayed on the desktop when "for lunch" of the sentence is highlighted.
  • through the sequentially provided image parts, the object-oriented diagrams existing between the image parts, and the highlighted elements of the sentence corresponding thereto, a learner can perform, in his mind, matching between the image parts and the corresponding elements of the sentence, and matching between the object-oriented diagrams existing between the image parts and the elements of the sentence corresponding to those diagrams.
  • the elements of the sentence may also be matched to voice corresponding thereto. Accordingly, the respective elements constituting an English sentence occur to a learner's mind naturally. Further, this enables the elements of a sentence and the connections between the respective elements of the sentence to be understood more easily than when a learner learns a sentence while simply viewing it. Accordingly, dictation ability can be improved.
  • object-oriented diagrams (for example, reference numerals 600b and 600c) having the form of a directional arrow display directionality so that the elements of a sentence flow out naturally.
  • the object-oriented diagrams (for example, reference numerals 600b and 600c) having the form of a directional arrow do not directly correspond to the elements of the sentence, but can be displayed in synchronization with corresponding entities when the elements of the sentence (not the entire sentence, but only elements of a sentence constituting the entire sentence) are displayed, the elements of the sentence are highlighted (the entire sentence is displayed and only the specific elements of the sentence are highlighted) , or voice corresponding to the elements of a sentence is played.
  • In order to naturally connect respective elements of a sentence, connecting elements are necessary. Typically, prepositions, conjunctions, participles, and relatives function as these elements. People who learn English as a foreign language find it difficult to use prepositions and conjunctions naturally. In particular, unlike conjunctions, which have clear meanings to some extent and are used only to connect phrases, prepositions are used in various elements of sentences in various ways, and thus a number of learners believe that prepositions must be memorized and used through idioms without understanding their specific meanings.
  • learners who study English as a foreign language have a lot of difficulty using prepositions freely and accurately because they do not have prepositions in their native languages, and they do not have instinctive sentiments, images, and concepts related to prepositions. Accordingly, learners frequently memorize phrases including prepositions through idioms, or learn the usage of prepositions from the viewpoint of grammar.
  • Unlike Koreans, who freely use postpositional words, native English speakers, who use English as their native language, can use prepositions naturally even without specially learning the prepositions.
  • the present applicant has discovered that, in order to freely use prepositions like native English speakers, object-oriented diagrams appropriate for the sentiments, usages, utilization and situations related to respective prepositions can be indicated in images, and learners can thereby enhance their understanding of the sentiments, usages, utilization and situations related to the prepositions. Furthermore, the present applicant has discovered that the association between objects can be intuitively recognized only when an object-oriented diagram is displayed along with the objects influenced by a corresponding preposition. That is, a preposition is used in the structure "A + the preposition + B." Expressed more simply, a preposition is used in the structure "the term A before the preposition + the preposition + the term B after the preposition."
  • an object-oriented diagram corresponding to the meaning of each preposition may be assigned to that preposition.
  • 400a and 400c are object-oriented diagrams corresponding to the respective prepositions "in" and "for."
  • Each object-oriented diagram must be capable of most intuitively representing the intrinsic usage of a corresponding preposition.
  • the above-described object-oriented diagrams are referred to as prepositional object-oriented diagrams.
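Since each prepositional object-oriented diagram corresponds 1:1 to a preposition and is displayed within the "A + preposition + B" structure, a renderer could resolve it with a simple lookup. The diagram identifiers below are invented for illustration, not taken from the patent:

```python
# Hypothetical mapping from prepositions to diagram identifiers.
PREPOSITION_DIAGRAMS = {
    "in":    "containment_figure",
    "for":   "purpose_arrow",
    "until": "arrow_with_stop_line",
}

def diagram_for(term_a, preposition, term_b):
    """Return (A, diagram, B) for the structure "A + preposition + B";
    the diagram is None if no prepositional diagram is registered."""
    return term_a, PREPOSITION_DIAGRAMS.get(preposition), term_b
```

The same table-driven approach would extend to the conjunctional, relative, and participial diagrams introduced below.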
  • the conjunctional object-oriented diagram 400b is introduced as one element of a relational display means.
  • the conjunction "until” is represented using a form in which an arrow, reaching an object B, which comes after the conjunction, and a stop line, indicating that the arrow stops right before the object B, which comes after the conjunction, are combined.
  • a unique conjunctional object-oriented diagram 400b may correspond to each conjunction. If diagrams greatly influence the learning effect, all of the diagrams may be displayed. In some cases, conjunctional object-oriented diagrams corresponding to specific conjunctions may not be displayed when it is not necessary to display them.
  • relative object-oriented diagrams and participial object-oriented diagrams may be introduced in the same manner as for the preposition and the conjunction, even though specific embodiments are not given for the relative and the participle. That is, since a relative generally has clauses before and after it, a relative object-oriented diagram may be generated in the same manner as for a conjunction. Since a participle generally has nouns, rather than sentences, before and after it, a participial object-oriented diagram may be generated in the same manner as for a preposition. Of course, the types of relatives are limited, and thus relative object-oriented diagrams can correspond to respective relatives in a 1:1 correspondence. However, since participles are basically based on verbs, a number of participles corresponding to the number of types of verbs exist.
  • the participles may be classified into two types: a passive type and an active type. They have a structure of "the term A before a participle + the participle + the term B after the participle."
  • the passive type usually has a structure in which the term A and the participle have the relationship of "an objective + a verb," and the active type usually has a structure in which the term A and the participle have the relationship of "a subject + a verb." Accordingly, corresponding participial object-oriented diagrams may be generated in consideration of the relationships of the passive and active types.
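The passive/active distinction above can be sketched in a few lines of code. This is a hypothetical illustration only: the classification heuristic, diagram strings, and function names are assumptions for the sketch and are not part of the patent's disclosure.

```python
# Hypothetical sketch: choosing a participial object-oriented diagram
# by classifying the participle as active (present participle, term A
# acts as the subject of the participle) or passive (past participle,
# term A acts as the object of the participle).

def classify_participle(word: str) -> str:
    """Rough heuristic: '-ing' forms are treated as active;
    other participle forms are treated as passive."""
    return "active" if word.endswith("ing") else "passive"

# One diagram per relationship type, mirroring the
# "term A + participle + term B" structure described above.
PARTICIPIAL_DIAGRAMS = {
    "active": "A -(subject + verb)-> B",
    "passive": "A <-(object + verb)- B",
}

def diagram_for(word: str) -> str:
    return PARTICIPIAL_DIAGRAMS[classify_participle(word)]

print(diagram_for("swimming"))  # active form
print(diagram_for("broken"))    # passive form
```

In a real system the classification would come from linguistic annotation of the target sentence rather than from a suffix test, but the lookup structure would be similar.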
  • the emphasis on the elements may be performed using a method of displaying only a target learning object as well as the above-described highlighting method.
  • a method of showing only an element of a sentence corresponding to an object-oriented diagram (or a target learning element) without showing the entire sentence may be included.
  • in this case, the mere appearance of the displayed element serves as the emphasis.
  • when a voice corresponding to a sentence is recorded and played, a voice corresponding to an element of the sentence may be played in synchronization with an object-oriented diagram object.
  • the user interface module 350 allows a learner to freely set the object-oriented diagrams and the image parts shown in FIGS. 3 to 7.
  • the user interface module 350 provides the user devices 10a to 10n, which are connected thereto over a network, with previously provided image parts or object-oriented diagrams. Learners who use the user devices 10a to 10n can select image parts or object-oriented diagrams according to their preference.
  • the user interface module 350 presents a plurality of object-oriented diagrams, which are previously provided for the object-oriented diagram 400a shown in FIG. 7 and can be substituted for the object-oriented diagram 400a, to a learner.
  • the learner may select a desired object-oriented diagram through the user interface module 350.
  • the diagram combination module 320 recreates learning content by combining one or more object-oriented diagrams, one or more target learning sentences, one or more image parts, and voice data, which are selected by the learner, and provides the created learning content to the user devices 10a to 10n.
  • the object-oriented diagram presented by the user interface module 350 to the user devices 10a to 10n is illustrated in FIG. 8.
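The recombination step performed by the diagram combination module can be sketched as a simple assembly of learner-selected pieces. The data model below (class name, field names, file names) is an assumption for illustration; the patent does not specify how the combined content is represented.

```python
# Minimal sketch of a diagram combination step: bundle the target
# sentence, the learner-selected image parts and object-oriented
# diagrams, and optional voice data into one piece of learning content.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LearningContent:
    sentence: str
    image_parts: List[str]       # image parts in display order
    diagrams: List[str]          # object-oriented diagrams between parts
    voice: Optional[str] = None  # optional voice data file

def combine(sentence: str, images: List[str], diagrams: List[str],
            voice: Optional[str] = None) -> LearningContent:
    """Assemble one piece of learning content from selected pieces."""
    return LearningContent(sentence, images, diagrams, voice)

content = combine(
    "I swim in the lake until my grandmother calls me for lunch.",
    ["boy.png", "lake.png", "grandmother.png"],
    ["in-diagram", "until-diagram"],
    voice="sentence.mp3",
)
print(len(content.image_parts))  # 3
```

The created bundle would then be delivered to the user devices 10a to 10n for playback.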
  • the user interface module 350 may freely register one or more image parts or one or more object-oriented diagrams desired by a learner, and may apply the registered entities to a target learning sentence.
  • the user interface module 350 may receive an image of the learner, uploaded by the learner, through a corresponding one of the user devices 10a to 10n, and may apply the image part (the image part corresponding to the subject "I"), selected by the learner, to the user device of the corresponding learner. This will be described with reference to FIGS. 11 and 12 together.
  • FIG. 11 is a diagram illustrating an example of an interface screen that is provided by the interface module 350 to the user devices 10a to 10n.
  • An illustrated interface is provided with a file open menu option 41, an image combination menu option 42, and a preview menu option 43. Furthermore, a target learning sentence to be learned by a user is displayed on the interface provided by the interface module 350 to the user devices 10a to 10n, and the nouns 51 to 54 of a displayed target learning sentence have box frames. The nouns having the box frames may be selected using a mouse cursor 60. When a noun is selected, the selected noun and a replaceable image are displayed on an image box 80. For example, when a learner selects the word "lake" from among the nouns provided in the target learning sentence, the word "lake" and replaceable images 71, 72, and 73 are displayed in the image box 80.
  • a user may view and select more images using a direction key 74. Thereafter, the user selects a desired one from among the displayed images using the mouse cursor 60.
  • when the user selects the image 71 using the mouse cursor 60 and then selects the preview menu 43, the user can view a preview screen, as indicated by the reference numeral "90."
  • FIG. 12 illustrates an example in which an image selected by a learner is applied to actual learning content.
  • An illustrated image is similar to that of FIG. 3, but it can be seen that the noun "lake” of the target learning sentence and the image corresponding to the noun “lake” have been respectively replaced with the term “swimming pool” and an image 71 corresponding to the term “swimming pool.”
  • a learner can perform various types of learning using various situations or characters.
  • the highlighting module 330 highlights a word (a phrase or a clause; this description will be omitted hereinafter) corresponding to an object-oriented diagram whenever the object-oriented diagram is displayed (or added) on the desktop.
  • the emphasis on words is performed in the same manner as described in 1) to 4).
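The highlighting step can be sketched as a small text-markup function. The markup format here is an assumption for illustration; the patent only requires that the font, size, thickness, incline, underline, shading, or color of the target element differ from that of adjacent words.

```python
# Illustrative sketch of a highlighting module step: wrap the sentence
# element that corresponds to the currently displayed object-oriented
# diagram in emphasis markup, leaving adjacent words untouched.

def highlight(sentence: str, target: str) -> str:
    """Emphasize the first occurrence of `target` within `sentence`."""
    if target not in sentence:
        return sentence  # nothing to highlight
    return sentence.replace(target, f"<b>{target}</b>", 1)

s = "I swim in the lake until my grandmother calls me for lunch."
print(highlight(s, "in the lake"))
```

Each time a new object-oriented diagram is displayed, the same call would be made with the next element of the sentence, so the emphasis moves through the sentence in the sequence of its construction.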
  • a learner sequentially understands situations after sequentially viewing words that are displayed on the basis of the sequence of the construction of a sentence.
  • Learning content presented in the present invention places great emphasis on a method in which a learner takes words that come after the subject 101 as they are and understands a language, rather than a method in which a learner views the entire sentence ("I swim in the lake until my grandmother calls me for lunch.") and then understands the meaning of the sentence.
  • a learner pays attention to words (highlighted words) that come in the sequence of ("swims in the lake all morning ... for lunch") beginning from the subject 200a, that is, "I."
  • an image part for describing the highlighted word is displayed on the desktop.
  • an object-oriented diagram for describing the association between the image parts is displayed along with them.
  • the voice combination module 340 synchronizes voice data with any one of a highlighted word (a phrase or a clause), an object-oriented diagram, and an image part.
  • a learner can pay attention to the highlighted word of a target learning sentence through a display device provided in a user device.
  • the learner can listen to playback sound of the highlighted word.
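The synchronization performed by the voice combination module 340 can be sketched as a timing schedule that maps playback time to the element that should be highlighted. The timing values and element boundaries below are invented for illustration only.

```python
# Hedged sketch of voice synchronization: each element of the target
# sentence gets a start time, and the highlight switches whenever
# playback crosses that time.

ELEMENT_TIMINGS = [  # (start_second, sentence element)
    (0.0, "I"),
    (0.4, "swim"),
    (0.9, "in the lake"),
    (1.8, "until my grandmother calls me for lunch"),
]

def element_at(t: float) -> str:
    """Return the element to highlight at playback time t (seconds)."""
    current = ELEMENT_TIMINGS[0][1]
    for start, element in ELEMENT_TIMINGS:
        if t >= start:
            current = element
    return current

print(element_at(1.0))  # "in the lake"
```

In a real system the timings would come from alignment of the recorded voice data with the sentence elements rather than from a hand-written table.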
  • FIG. 9 is a conceptual diagram of a language learning content providing system using image parts according to another embodiment of the present invention.
  • the illustrated language learning content system is similar to that described in conjunction with FIGS. 1 to 8, but differs in that the illustrated system is based on a web server 50 for providing learning content to a user device 20, a user interface screen is provided to the user device 20, and a user can select image parts and object-oriented diagrams through the user interface screen. Therefore, redundant descriptions given in conjunction with FIGS. 1 to 8 are omitted here.
  • a learner can select target learning content through the user interface screen and can upload desired images to the web server 50, and the web server 50 changes image parts and object-oriented diagrams, included in the learning content, to the images provided by the learner.
  • the changed learning content is provided to a user device.
  • Most VGA graphics cards installed in learners' computers are equipped with an overlay function.
  • Various effects can be applied to original images using the overlay function.
  • current graphic cards have a function of overlaying Korean subtitles on a foreign movie (for example, an American movie) while the movie is played, thus displaying the Korean subtitles on a playback window in which the movie is played.
  • the current graphic cards can also perform this processing on still images as well as moving images.
  • the diagram combination module 320 performs overlay processing on an object-oriented diagram and provides processing results to a learner's device (for example, a learner's computer) along with individual image parts, and thus the object-oriented diagrams shown in FIGS. 3 to 7 can be displayed on a monitor provided in the learner's device (for example, a computer).
  • the elements of a corresponding sentence may also be displayed along with the overlaid object-oriented diagrams.
  • voice corresponding to the elements of a sentence may be played.
  • Image parts corresponding to voice are sequentially displayed while the voice is played, object-oriented diagrams are displayed between the image parts and within the image parts, and the displayed object-oriented diagrams and/or words (phrases, or clauses) corresponding to the image parts are highlighted.
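The display sequence just described can be sketched as a generator of playback steps: image parts accumulate, a connecting object-oriented diagram appears between consecutive parts, and each step names the sentence element to highlight. All file names and diagram names are hypothetical placeholders.

```python
# Sketch of the cumulative display sequence: at each step one more
# image part becomes visible, the diagram linking it to the previous
# part is shown, and the matching sentence element is highlighted.

def playback_steps(parts, diagrams, elements):
    """Yield (visible_parts, diagram, highlighted_element) per step."""
    shown = []
    for i, part in enumerate(parts):
        diagram = diagrams[i - 1] if i > 0 else None
        shown.append(part)
        yield list(shown), diagram, elements[i]

parts = ["boy.png", "lake.png", "grandmother.png"]
diagrams = ["in-diagram", "until-diagram"]
elements = ["I swim", "in the lake", "until my grandmother calls me"]

for visible, diagram, element in playback_steps(parts, diagrams, elements):
    print(len(visible), diagram, element)
```

The cumulative `shown` list mirrors the requirement elsewhere in the disclosure that image parts are sequentially provided and cumulatively displayed.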
  • although the overlaid object-oriented diagrams may have independent file forms, they are preferably provided in the language learning content providing system 300 using image parts in advance. Alternatively, files provided by a learner device may be received and then substituted for those provided in the language learning content providing system 300 using image parts.
  • object-oriented diagrams may be provided in the language learning content providing system 300 using image parts in advance, and a learner may select desired ones from among the object-oriented diagrams.
  • the language learning content providing system 300 using image parts should present a plurality of object-oriented diagrams, capable of replacing a single object-oriented diagram, to a user.
  • a user can select a desired one from among the plurality of object- oriented diagrams.
  • overlaid object-oriented diagrams must be synchronized with a target learning sentence.
  • the diagram combination module 320 combines object-oriented diagrams, selected by a learner, with a target learning sentence, and provides combination results to a learner's user device.
  • an overlaid object-oriented diagram, created by a learner or a third party, may be User Created Content (UCC).
  • a learner or a third party may create various pieces of learning content with which not only overlaid object-oriented diagrams but also object- oriented diagrams provided by the learning content providing system 300 of the present invention are combined.
  • the learning content providing system according to the present invention may also provide various pieces of learning content, created by the user, over a network so that a plurality of third parties can use them.
  • FIG. 10 is a conceptual block diagram of still another embodiment of the present invention.
  • the illustrated embodiment is similar to that shown in FIG. 1, but differs from that of FIG. 1 in that learning content is provided to portable devices (for example, mobile phones, PDAs, PMPs, etc.) that can access the wireless Internet. Therefore, redundant descriptions related to FIGS. 1 to 8 are referenced here.
  • the same reference numerals are used for elements having the same functions.
  • a language learning content providing system 300 using image parts acquires information about each of the learners' portable devices 60a to 60n from a mobile communication company server 500, which provides a mobile communication service to the portable devices, and converts the resolution of the learning content into a resolution that can be supported by the display device 61a of each of the portable devices 60a to 60n.
  • a database 310 shown in the drawing of the present embodiment is provided with information about the portable devices of each manufacturer.
  • An image processing module 360 acquires information about the learner's portable device from the mobile communication company server 500, and detects the resolution of each of the portable devices 60a to 60n by comparing the acquired information with the device information held in the database 310.
  • the image processing module 360 is provided with a phone information acquisition module 365.
  • the phone information acquisition module 365 connects to the mobile communication company server 500 over a network.
  • the phone information acquisition module 365 acquires information about a corresponding one of the portable devices 60a to 60n through the mobile communication company server 500 when the corresponding one of the portable devices 60a to 60n requests access to the language learning content providing system using image parts over a wireless network.
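The phone information acquisition step can be sketched as a lookup of the reported device model against a local table of per-manufacturer device records. The table contents, model names, and fallback profile are assumptions for illustration; a real system would query the carrier's server.

```python
# Minimal sketch of a phone information acquisition module: given a
# device model reported by the mobile communication company server,
# return its display resolution and color capability.

DEVICE_TABLE = {
    "model-A": {"resolution": (240, 320), "colors": 65_536},
    "model-B": {"resolution": (176, 220), "colors": 256},
}

def acquire_device_info(model: str) -> dict:
    """Stand-in for a query to the mobile communication company server."""
    try:
        return DEVICE_TABLE[model]
    except KeyError:
        # Unknown model: fall back to a conservative default profile.
        return {"resolution": (128, 160), "colors": 256}

info = acquire_device_info("model-A")
print(info["resolution"])  # (240, 320)
```

The returned record would then drive the resolution and color conversion performed by the image processing module 360.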
  • the display devices 61a provided in the portable devices 60a to 60n can differ in the range of colors they can reproduce.
  • the portable devices 60a to 60n may support anywhere from 256 colors to 16.7 million colors, depending on the model.
  • the image processing module 360 determines the color rendering range for image parts, target learning sentences, and object-oriented diagrams depending on the color reproduction ability of a portable device with reference to information about the portable device. Accordingly, learners can learn a language using learning content anywhere and anytime regardless of the color reproduction ability and resolution of their portable devices 60a to 60n.
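The adaptation described above — scaling content to the device resolution and clamping colors to the device's capability — can be sketched with plain arithmetic. This is a hedged illustration: a real implementation would use an imaging library, and the function names here are invented.

```python
# Hedged sketch of device adaptation: scale a source size down to fit
# the device resolution (preserving aspect ratio, never upscaling) and
# clamp the palette to the device's color capability.

def fit_to_device(src_w: int, src_h: int, dev_w: int, dev_h: int):
    """Return the largest size <= (dev_w, dev_h) with the same aspect ratio."""
    scale = min(dev_w / src_w, dev_h / src_h, 1.0)
    return int(src_w * scale), int(src_h * scale)

def clamp_colors(requested: int, device_colors: int) -> int:
    """Limit the color count to what the device can reproduce."""
    return min(requested, device_colors)

print(fit_to_device(640, 480, 240, 320))  # (240, 180)
print(clamp_colors(16_700_000, 256))      # 256
```

Applying the same transformation to image parts, target learning sentences, and object-oriented diagrams keeps all three synchronized at the device's native resolution.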

Abstract

Disclosed herein is a language learning content providing system using image parts. The system is implemented through a server that provides learning content, including target learning sentences and a plurality of image parts for describing content of each of the target learning sentences, to a learner's user device. The server is configured to sequentially provide the plurality of image parts to the user device in conformity with development of the content of the target learning sentence, so that the plurality of image parts is displayed on the user device. The respective image parts, displayed in conformity with the development of the content of the target learning sentence, are displayed in synchronization with object-oriented diagrams that describe correlations between the image parts.

Description

[DESCRIPTION]
[Invention Title]
LANGUAGE LEARNING CONTENTS PROVIDING SYSTEM USING IMAGE PARTS
[Technical Field]
The present invention relates to a language learning content providing system using image parts, and, more particularly, to a language learning content providing system using image parts, which displays a plurality of image parts for representing a target learning sentence on a user device in conformity with the development of the story of the target learning sentence, and also displays object-oriented diagrams for helping to understand the correlation between respective image parts on the user device, thereby enabling a learner to easily understand the correlation and structure of respective elements constituting English.
[Background Art]
So many English learning methods and learning materials are being supplied to learners that it is no exaggeration to say that the supply is a flood. Some English learning methods are still being chosen even after a lapse of years.
The goal of English learning is to enhance one's ability to speak, write, hear and read English naturally like a native English speaker. However, most Koreans who learn English as a foreign language do not realize their educational goals, with the exception of a minimal number of persons, and consume a lot of money and time while undergoing a plurality of challenges and failures in order to master English.
We understand our native language sequentially as soon as we read it, comprehend our native language sequentially as soon as we hear it, and speak our native language sequentially as soon as we think it. It would be apparent that this is the case with native English speakers.
The present invention uses one characteristic of English in which an English sentence has a structure in which the sentence sequentially expands temporally and spatially from a subject. In greater detail, English has a structure in which respective words and paragraphs are arranged through thoroughly logical steps in sequence from a physically close location to a physically remote location on the basis of the subject of a sentence, that is, a subject that is a basis.
Postpositional words have developed in Korean and Japanese. Since a postpositional word makes the position and use of the corresponding word clear, there is no significant problem with the transmission of the meaning of a corresponding sentence, even though the word sequence of the sentence is changed. However, since English does not have postpositional words, the meaning to be transmitted is distorted or becomes ambiguous when the word sequence is changed. Accordingly, in English, the positions and sequence of words play decisively important roles. The following examples will help easily understand the above fact. - Below -
John Wendie loves / John loves Wendie / Wendie loves John / Wendie John loves / loves John Wendie / loves Wendie John
(In the original, each of these orderings is paired with a Korean sentence; the Korean text is illegible in this copy.)
As can be seen from the above examples, in English, the word sequence plays a very important role in the transmission of an idea, so that it can be seen that English sentences have a structure in which information should be expressed and understood sequentially. It would be apparent that English learning for naturally hearing, speaking, writing and reading English requires recognition of "the significance of sequence" and "sequence"-based learning training.
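The word-order point can be demonstrated in a few lines of code: of the six orderings of these three words, only one carries the intended meaning in English, whereas the Korean counterparts remain intelligible under reordering because postpositional particles mark each word's role.

```python
# Enumerate every ordering of the three words; in English only the
# subject-verb-object order expresses the intended meaning.
from itertools import permutations

words = ("John", "loves", "Wendie")
for p in permutations(words):
    marker = "<- intended meaning" if p == ("John", "loves", "Wendie") else ""
    print(" ".join(p), marker)
```
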
Korean Patent Application No. 10-2001-0005314 relates to an English lecturing method. This patent discloses that the meanings of sounds constituting respective English words are naturally understood by finding out a method of including meanings in the sounds of each English word, mentions the sequence of solving the word sequence of English in pictorial symbols, and presents a method of solving the problem of the word sequence "a subject + a verb" among the word sequence problems of English using hieroglyphic characters. Korean Patent Application No. 10-2001-0052657 discloses an English learning method using picture making.
Korean Patent Application No. 10-2001-0052657 discloses an English learning method of allowing a learner to draw a picture corresponding to English desired to be expressed and comparing the sequence of the English, desired to be expressed by the learner, based on the sequence of drawing of the figure with the sequence of expression based on a normal sequence.
However, with regard to Korean Patent Application No. 10-2001-0052657, figures merely illustrate a sentence or the unit elements of the sentence, but do not transmit information about the flow, sequence and connection relationship of the overall sentence corresponding to the figures, and are not particularly helpful in improving hearing, speaking, reading and writing English in the sequence of expression of English, like native English speakers, using the figures.
[Disclosure]
[Technical Problem]
Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a language learning content providing system using image parts, which helps a learner understand a language in the word sequence used by native English speakers, a sequence that unfolds from the subject of an English sentence and the behavior of that subject.
Another object of the present invention is to provide a language learning content providing system using image parts, which sequentially expresses a plurality of image parts on a user device in conformity with the sequence of the development of the story of each target learning sentence and helps a learner easily understand the structure of an English sentence. A further object of the present invention is to provide a language learning content providing system using image parts, which represents a target learning sentence using a plurality of image parts that are sequentially played on a user device, thereby helping a learner overcome a habit of reinterpreting an English sentence after fully reading it.
Yet another object of the present invention is to present a language learning content providing system using image parts, which helps to easily generate content for language learning using neighboring illustrations, photographs and moving images.
[Technical Solution]
The above-described objects of the present invention are accomplished by a language learning content providing system using image parts, wherein the system is implemented through a server that provides learning content, including target learning sentences and a plurality of image parts for describing content of each of the target learning sentences, to a learner's user device; the server is configured to sequentially provide the plurality of image parts to the user device in conformity with development of the content of the target learning sentence, so that the plurality of image parts is displayed on the user device; and the respective image parts, displayed in conformity with the development of the content of the target learning sentence, are displayed in synchronization with object- oriented diagrams that describe correlations between the image parts.
Preferably, the object-oriented diagrams are synchronized with the target learning sentence by the server.
The server may be provided with voice data for the target learning sentence, and the voice data may be synchronized with any of the image parts and the object-oriented diagrams. The server may be provided with a plurality of image files for the image parts, and, when the user device selects any one of the plurality of image files, a selected image file may be synchronized with the target learning sentence. Preferably, the learning content includes the target learning sentence, and further includes any of image parts, object-oriented diagrams and voice data.
The target learning sentence may be English. Each of the object-oriented diagrams may be any one of a figure, an arrow, a straight line and a curve.
Preferably, the target learning sentence is divided into any of words, phrases, and clauses, and the resulting elements are respectively synchronized with the object-oriented diagrams by the server. The server may process a word, a phrase or a clause, synchronized with each of the object-oriented diagrams, so that any one of a font, a letter size, a letter thickness, a letter incline, an underline, shading and a letter color thereof is different from that of one or more adjacent words, phrases or clauses.
Preferably, the image parts are sequentially provided and displayed on the user device, and are cumulatively displayed.
The above-described objects of the present invention are accomplished by a language learning content providing system using image parts, including a database provided with target learning sentences, a plurality of image parts for describing each of the target learning sentences, and object-oriented diagrams for describing correlations between the image parts; and a diagram combination module for creating learning content by synchronizing the image parts, which are provided to and displayed on a learner's user device in a sequence of development of the target learning sentence, with the object-oriented diagrams and providing the created learning content to the user device. Preferably, the language learning content providing system using image parts further includes a highlighting module for highlighting elements of the target learning sentence, synchronized with the object-oriented diagrams, when the object-oriented diagrams are displayed on the user device in synchronization with the target learning sentence, and each of the elements of the target learning sentence is any one of a word, a phrase and a clause.
Preferably, the highlighting module changes any one of a color and size of a word constituting each of the elements.
The highlighting module may increase a size of a word constituting each element compared to that of adjacent words.
Preferably, the database is provided with voice data for the target learning sentence, and the diagram combination module synchronizes any of the image parts and the object-oriented diagrams with the voice data.
The database may be provided with a plurality of image files for the image parts, and any one of the plurality of image files for the image parts may be selected by a learner's user device and displayed on the user device.
Preferably, the target learning sentence is English. Preferably, each of the object-oriented diagrams is any one of a figure, an arrow, a straight line and a curve. Preferably, the image parts are sequentially provided and displayed on the user device, and are cumulatively displayed.
The above-described objects of the present invention are accomplished by a language learning content providing system using image parts, including a diagram combination module for creating learning content by synchronizing each target learning sentence, a plurality of image parts for describing the target learning sentence, and an object-oriented diagram for describing correlations between the image parts; and an image processing module for acquiring information about a learner's portable device from a mobile communication company and converting images into images that are reproducible in the portable device with reference to the information. The image processing module accesses a mobile communication company server for serving the portable device, and includes a phone information acquisition module for acquiring model information and resolution information of the portable device from the mobile communication company server. The image processing module may change sizes of the image parts, the target learning sentence and the object-oriented diagrams in conformity with the resolution information.
The image processing module may change colors of the image parts, the target learning sentence and the object- oriented diagrams to colors that are reproducible in the portable device in conformity with the resolution information.
Each of the object-oriented diagrams may be any one of a figure, an arrow, a straight line and a curve.
The target learning sentence may be divided into any of words, phrases, and clauses, and the resulting elements may be respectively synchronized with the object-oriented diagrams by the diagram combination module. Preferably, the language learning content providing system using image parts further includes a highlighting module for highlighting elements of the target learning sentence, synchronized with the object-oriented diagrams, when the object-oriented diagrams are displayed on the user device in synchronization with the target learning sentence, and each of the elements of the target learning sentence is any of a word, a phrase and a clause.
The highlighting module may change any one of a color and a size of a word constituting each element.
The highlighting module may increase a size of a word constituting each element compared to that of adjacent words.
The diagram combination module may combine voice data for the target learning sentence with the learning content.
In existing learning materials for learning English through images or photographs, a single piece of learning content is generally caused to correspond to a single image. In this case, there is no structural correspondence indicating that respective elements of the entire image are associated with respective parts of a sentence, thus making it impossible to systematically understand how a single image is connected to the entire sentence. Furthermore, images, such as photographs or figures, are generally used only for auxiliary, passive, and non-positive purposes, such as descriptions of English sentences or English paragraphs.
In order for English to be uttered through the mouth naturally and sequentially, 1) the overall flow of content, 2) the part being currently uttered, and 3) the part to be immediately connected to the currently uttered part must be connected well and organically in the mind. If not, the natural development of each English sentence cannot be smoothly performed.
[Advantageous Effects]
When the spirit of the present invention applies, a language learning content providing system using image parts according to the present invention combines images with terms and displays the terms corresponding to descriptions of the situations of images on a user device in real time, thereby helping a learner overcome the habit of interpreting an English sentence in a reverse sequence from the back to the front of the sentence after fully reading it.
Furthermore, a learner can recognize how the elements of a sentence and actual objects (images, screens, entities shown to the eyes, and entities occurring in the learner's mind) correspond to one another, and thus the learner can associate the recognition of actual objects, the flow of perceptions and the elements of the sentence with one another. That is, when the elements of the sentence appear sequentially, corresponding actual objects are brought to the learner's mind sequentially and are directly understood. Accordingly, the learner can speak or express the sentence correspondingly to the sequence in which actual objects are recognized by the eyes or in the head. Through this, the learner can learn the method by which native English speakers understand and take in their language, and the problem in which English is interpreted, written, spoken and heard reversely from the back to the front thereof, which is the detrimental defect of Korean-style English learning, can be overcome.
When the elements of a sentence to be learned are caused to correspond to the above-described diagrams, clues for memory and association become rich, so that the elements of a sentence and the entire sentence can be memorized easily and the effect of bringing back memory is excellent, thereby increasing the learning effect. This effect leads to an increase in the ability to sequentially speak a language.
Furthermore, the present invention combines images (moving images, or still images) with object-oriented diagrams and helps a learner understand the sequential flow of a sentence as the object-oriented diagrams are sequentially displayed.
The language learning content providing system using image parts according to the present invention creates learning content by dividing an image frame into a plurality of image frames and including an object-oriented diagram in each of the image frames, so that content for language learning can be easily created using neighboring illustrations, photographs and moving images.
Furthermore, since users can directly create object- oriented diagrams or object-oriented diagrams generated by third parties can be applied to the present learning content providing system, learners can come into contact with various object-oriented diagrams, and thus the learning effect can be increased.
Finally, the present invention can also provide learning content to portable devices and allows the above-described effects to be provided even through the portable devices.
[Description of Drawings]
FIG. 1 is a conceptual block diagram of a language learning content providing system using image parts according to an embodiment of the present invention;
FIGS. 2 to 7 are views illustrating how a target learning sentence, image parts, and an object-oriented diagram are displayed on the desktop of a user device;
FIG. 8 is a view illustrating an example of an object-oriented diagram that is applicable to the preposition "in";
FIG. 9 is a conceptual diagram of a language learning content providing system using image parts according to another embodiment of the present invention;
FIG. 10 is a conceptual block diagram of still another embodiment of the present invention;
FIG. 11 is a diagram illustrating an example of an interface screen that is provided from an interface module to user devices; and
FIG. 12 is a diagram illustrating an example in which an image selected by a learner is applied in actual learning content.
[Description of reference numerals of principal elements in the drawings]
300: language learning content providing system using image parts
310: database
320: diagram combination module
330: highlighting module
340: voice combination module
350: user interface module
[Mode for Invention]
The present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a conceptual block diagram of a language learning content providing system using image parts according to an embodiment of the present invention.
The illustrated language learning content providing system 300 using image parts includes a diagram combination module 320, a database 310, a highlighting module 330, a voice combination module 340, and a user interface module 350.
The database 310 contains image parts, object-oriented diagrams, voice fonts, and target learning sentences. When learning content, in which images, object-oriented diagrams, voice fonts, and phrases are combined, is generated, the database 310 can store the generated learning content.
The diagram combination module 320 forms one piece of learning content by synchronizing and combining the target learning sentences, image parts, and object-oriented diagrams stored in the database 310. The 'object-oriented diagram' refers to a diagram for describing the correlation between respective image parts.
An object-oriented diagram is shown between two (or more) image parts, and can have the form of a figure, an arrow, a straight line or a curve. When an object-oriented diagram is used to describe the meaning of a preposition, the object-oriented diagram may not be positioned between two image parts. This is described later with reference to FIG. 5.
The diagram combination module 320 may use any of target learning sentences, image parts, voice data and object-oriented diagrams as a reference, and may synchronize the rest with the reference.
For example, when the diagram combination module 320 uses target learning sentences as a reference, the diagram combination module 320 may synchronize an image part or an object-oriented diagram with each of the words (phrases, or clauses) constituting each target learning sentence.
Further, when the diagram combination module 320 uses a plurality of image parts as a reference, the diagram combination module 320 may synchronize the elements constituting the target learning sentence (for example, the words, phrases, or clauses that make up the sentence) with the respective image parts as they are played in one of the user devices 10a to 10n. In this case, the highlighting module 330 may highlight the element (for example, a word, a phrase, or a clause) that is synchronized with the image part currently being played in one of the user devices 10a to 10n. Here, if voice data or an object-oriented diagram synchronized with each image part exists, the corresponding voice data or object-oriented diagram is played (or displayed) along with the image part.
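The synchronization described above can be illustrated with a minimal sketch. The following Python fragment is not taken from the disclosed system (all names are illustrative assumptions); it simply models learning content as a timeline of entries, each pairing a sentence element with an optional image part, object-oriented diagram, and voice clip, so that whichever asset is used as the reference, the others can be retrieved together:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TimelineEntry:
    """One synchronized unit: a sentence element plus its assets."""
    element: str                       # word, phrase, or clause
    image_part: Optional[str] = None   # identifier of an image part
    diagram: Optional[str] = None      # identifier of an object-oriented diagram
    voice: Optional[str] = None        # identifier of a voice clip

@dataclass
class LearningContent:
    sentence: str
    timeline: List[TimelineEntry] = field(default_factory=list)

    def play(self):
        """Yield, in sentence order, the element to highlight together
        with the assets to display or play alongside it."""
        for entry in self.timeline:
            yield entry.element, entry.image_part, entry.diagram, entry.voice

content = LearningContent(
    sentence="I swim in the lake all morning",
    timeline=[
        TimelineEntry("I", image_part="part_subject"),
        TimelineEntry("swim", image_part="part_swim", voice="v_swim"),
        TimelineEntry("in", diagram="diag_in", voice="v_in"),
        TimelineEntry("the lake", image_part="part_lake"),
        TimelineEntry("all morning", image_part="part_morning"),
    ],
)

steps = list(content.play())
```

Because every entry carries the sentence element itself, the same timeline supports either direction of synchronization: stepping through the sentence and looking up assets, or stepping through played image parts and looking up the element to highlight.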
Learning content generated in the diagram combination module 320 is played in one of the user devices 10a to 10n. The played learning content is represented as image parts for describing a target learning sentence and an object-oriented diagram for describing the correlation between the image parts. Hereinafter, how target learning sentences, image parts, and object-oriented diagrams are actually represented on a desktop is described with reference to FIGS. 2 to 7.
Here, 'desktop' refers to the entire playback region, or part of the playback region, of a display device (for example, an LCD, a PDP, a CRT, or an organic EL) on which learning content can be played. It is noted that this applies throughout the entire description of the present invention.
FIG. 2 shows a start screen for learning content which is provided to one of the user devices 10a to 10n by the language learning content providing system 300 using image parts according to the present invention. A target learning sentence (I swim in the lake all morning until my grandmother calls me for lunch) to be learned is placed at the bottom of the drawing. When a learner clicks on a start button 5, the image parts shown in FIGS. 3 to 7 are played sequentially.
FIGS. 3 to 7 show examples in which learning content provided to one of the learners' user devices 10a to 10n is played on that device in the language learning content providing system 300 using image parts.
First, FIG. 3 shows an example in which an object-oriented diagram 400a is displayed on the desktop when the preposition "in" 310a is highlighted in the target learning sentence ("I swim in the lake all morning until my grandmother calls me for lunch"). The object-oriented diagram is displayed in the form of a figure, an arrow, a straight line, or a curve, and describes the correlation between the image parts. For example, the object-oriented diagram can use an arrow in the image part corresponding to the subject (I) in order to indicate the direction in which the subject (I) is oriented.
In the drawing, the respective words constituting the target learning sentence (I swim in the lake all morning until my grandmother calls me for lunch) may be highlighted sequentially or may be played in the form of voice. Of course, the words corresponding to the voice that is being played may also be highlighted at the same time that the voice is played. This applies throughout the entire detailed description of the present invention.
In the drawing, when the preposition "in" 310a is highlighted, the preposition "in" 310a is differentiated from the adjacent words. The process of highlighting a word (for example, the preposition "in") may be conducted using any one of:
1) a method of setting the size or thickness of the word (for example, the preposition "in") to a value greater than that of the adjacent words (for example, "swim" and "the"),
2) a method of differentiating the color of a word (for example, the preposition "in") from that of adjacent words (for example, "swim" and "the"),
3) a method of italicizing the word (for example, the preposition "in") or changing the font of the word, and
4) a method of differentiating the word (for example, the preposition "in") from the adjacent words (for example, "swim" and "the") by applying a shading effect to the word or underlining the word.
In the drawings, the word to which a learner should pay attention, the preposition "in," is emphasized by differentiating its size from that of the adjacent words. Here, with regard to the object-oriented diagram 400a, the word corresponding to the object-oriented diagram 400a (here, the preposition "in") is displayed along with the object-oriented diagram 400a (reference numeral 500a), and thus a learner can easily understand the association between the word ("in") to which attention should be paid and the object-oriented diagram corresponding to the word.
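The four highlighting methods above can be sketched as simple markup transformations. The following Python fragment is a hypothetical illustration (HTML-style styling is an assumption, not part of the disclosure) showing how a single word of the target learning sentence might be emphasized while its neighbors are left unchanged:

```python
def highlight(word: str, method: str) -> str:
    """Emphasize `word` using one of the four methods described above."""
    if method == "size":          # 1) larger and bolder than adjacent words
        return f'<span style="font-size:150%;font-weight:bold">{word}</span>'
    if method == "color":         # 2) color differentiated from adjacent words
        return f'<span style="color:red">{word}</span>'
    if method == "font":          # 3) italics or a font change
        return f"<i>{word}</i>"
    if method == "underline":     # 4) shading effect or underline
        return f"<u>{word}</u>"
    return word                   # unknown method: leave the word as-is

# Highlight only the preposition "in" by size, as in FIG. 3.
sentence = "I swim in the lake".split()
rendered = " ".join(
    highlight(w, "size") if w == "in" else w for w in sentence
)
```

In an actual player the chosen method would be applied in synchronization with the playback of the corresponding image part or voice clip.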
FIG. 4 shows an example in which an image part is displayed on the desktop when "all morning" of the elements of the sentence is highlighted. As shown in the drawing, when "all morning" of the sentence (I swim in the lake all morning until my grandmother calls me for lunch) is highlighted, an image part 200d corresponding to "all morning" is displayed on the desktop. The 'image part' refers to a portion that is extracted from the entire image (the start screen) shown in FIG. 2 in order to represent an element of the target learning sentence. Reference numerals 200a and 200d denote portions extracted from the entire image (start screen) shown in FIG. 2, and they are displayed on the desktop in synchronization with the highlighted words of the sentence (I swim in the lake all morning until my grandmother calls me for lunch). A learner associates a highlighted term (here, "all morning") with the image part 200d corresponding to the highlighted word.
FIG. 5 shows an example in which an object-oriented diagram is displayed on the desktop when the preposition "until" in the sentence is highlighted.
The highlighted preposition "until" has the meaning of "continuance to a specified time" within the sentence (I swim in the lake all morning until my grandmother calls me for lunch) . An object-oriented diagram 400b for the preposition "until" is constructed using an arrow and a straight line so that a learner can perceive a limit visually.
Through this, a learner is prevented from interpreting the clause "until my grandmother calls me for lunch," including the preposition "until," as a single chunk (interpreting English in this way results in the bad habit of reading, writing, and speaking English sentences only after interpreting the entire sentence). The present invention makes a learner learn to recognize only that the preposition "until" represents a temporal limit. That is, the learner can view, hear, and understand information in the sequence in which native speakers understand the information.
The object-oriented diagram 400b helps a learner understand by displaying the corresponding preposition "until" along with the object-oriented diagram 400b (reference numeral 500b), in the same manner as the object-oriented diagram 400a shown in FIG. 3. Here, it can be seen that no image part exists on the right side of the preposition "until." This means that a preposition does not need to be positioned between two image parts. The preposition "until" does not need image parts corresponding to two words (phrases or clauses) because it represents only continuance to a specified time.
FIG. 6 shows an example in which image parts are displayed on the desktop when "calls me" in the sentence is highlighted.
When "calls me" in the sentence is highlighted, two image parts 200f and 200g are displayed on the desktop sequentially (or at the same time). It is preferred that the image parts have an appropriate configuration in which the grandmother 200f calls the subject 200g. In the drawing, the image part corresponding to "calls me" of the sentence represents a form in which the grandmother 200f calls someone, and an object-oriented diagram 600c represents that the person called by the grandmother 200f is the subject 200g.
FIG. 7 shows an example in which an image part and an object-oriented diagram are displayed on the desktop when "for lunch" of the sentence is highlighted.
When "for lunch" of the sentence is highlighted, an image part 200h corresponding to "for lunch" is displayed on the desktop, and a prepositional object-oriented diagram 400c corresponding to the preposition "for" is displayed in association with the subject 200g. Accordingly, a learner can easily understand the usage of the preposition "for." When the meaning of the preposition "for" is interpreted not as "on behalf of ~" but as "for the purpose of ~," in the same manner that native speakers understand it, a learner can be prevented from understanding the meaning of the preposition "for" in reverse and can comprehend the information included in the sentence in the same manner as native speakers, who understand the sentence sequentially. The association between the subject 200g and "lunch" 200h can be intuitively understood through this prepositional object-oriented diagram.
When the system of the present invention is applied, a learner can mentally match the image parts to the corresponding elements of the sentence, and match the object-oriented diagrams existing between the image parts to the elements of the sentence corresponding to them, through the sequentially provided image parts, the object-oriented diagrams existing between the image parts, and the highlighted elements of the sentence corresponding thereto. Of course, the elements of the sentence may also be matched to the voice corresponding thereto. Accordingly, the respective elements constituting an English sentence come to a learner's mind naturally. Further, this enables the elements of a sentence, and the connections between the respective elements, to be understood more easily than in the case in which a learner learns a sentence while simply viewing it. Accordingly, dictation ability can be improved.
One of the reasons why most learners cannot naturally speak a sentence continuously is that they do not break the entire sentence into individual elements. Even when a learner does break a sentence into its respective elements, the elements are not well connected to one another. This makes it difficult to learn a foreign language (English). Here, if sequentially provided image parts are used, a learner can understand and speak the entire sentence in its natural sequence by separating and then understanding the elements of the sentence corresponding to the image parts, in the sequence in which the object-oriented diagrams are displayed, while bringing the image parts to mind.
Furthermore, in the present invention, it is preferred that object-oriented diagrams having the form of a directional arrow (for example, reference numerals 600b and 600c), which indicate directionality so that the elements of a sentence flow naturally, be included. These directional-arrow object-oriented diagrams do not directly correspond to the elements of the sentence, but can be displayed in synchronization with the corresponding entities when the elements of the sentence are displayed (not the entire sentence, but only the elements constituting it), when the elements of the sentence are highlighted (the entire sentence is displayed and only specific elements are highlighted), or when voice corresponding to the elements of the sentence is played.
In order to connect the respective elements of a sentence naturally, connecting elements are necessary. Typically, prepositions, conjunctions, participles, and relatives serve this function. People who learn English as a foreign language find it difficult to use prepositions and conjunctions naturally. In particular, unlike conjunctions, which have clear meanings to some extent and are used only to connect phrases, prepositions are used in various elements of sentences in various ways, and thus many learners believe that prepositions must be memorized and used through idioms, without understanding their specific meanings.
Prepositions form phrases along with nouns or other elements within a sentence, and play an important role in dividing each English sentence into elements. Typically, learners who study English as a foreign language have great difficulty using prepositions freely and accurately because their native languages do not have prepositions, and they do not have instinctive sentiments, images, and concepts related to prepositions. Accordingly, learners frequently memorize phrases including prepositions through idioms, or learn the usage of prepositions from the viewpoint of grammar. However, like Koreans, who freely use postpositional particles such as "은," "는," "이," and "가," native English speakers, who use English as their native language, can use prepositions naturally even without specially learning them.
Accordingly, the present applicant has discovered that, in order to use prepositions freely like native English speakers, object-oriented diagrams appropriate for the sentiments, usages, and situations related to the respective prepositions can be indicated in images, so that learners can enhance their understanding of the sentiments, usages, and situations related to the prepositions. Furthermore, the present applicant has discovered that the association between objects can be intuitively recognized only when an object-oriented diagram is displayed along with the objects influenced by the corresponding preposition. That is, a preposition is used in the structure "A + the preposition + B"; put more simply, a preposition is used in the structure "the term A before the preposition + the preposition + the term B after the preposition."
For example, in "swim + in + the lake," the relationship formed, under a specific situation, by the preposition together with the term before it and/or the term after it is important in the use of the preposition. An object-oriented diagram corresponding to the meaning of each preposition may correspond to that preposition. In the drawings, 400a and 400c are object-oriented diagrams corresponding to the respective prepositions "in" and "for." Each object-oriented diagram must be capable of most intuitively representing the intrinsic usage of the corresponding preposition. The above-described object-oriented diagrams are referred to as prepositional object-oriented diagrams.
In the present invention, the conjunctional object-oriented diagram 400b is introduced as one element of a relational display means. The conjunction "until" is represented using a form in which an arrow, reaching an object B that comes after the conjunction, and a stop line, indicating that the arrow stops right before that object B, are combined. As described above, a unique conjunctional object-oriented diagram 400b may correspond to each conjunction. If the diagrams greatly influence the learning effect, all of the diagrams may be displayed. In some cases, the conjunctional object-oriented diagrams corresponding to specific conjunctions may be omitted when it is not necessary to display them.
It will be evident that relative object-oriented diagrams and participial object-oriented diagrams may be introduced, just as for the preposition and the conjunction, even though specific embodiments are not given for the relative and the participle. That is, since a relative generally has clauses before and after it, a relative object-oriented diagram may be generated in the same manner as for a conjunction. Since a participle generally has nouns, rather than sentences, before and after it, a participial object-oriented diagram may be generated in the same manner as for a preposition. Of course, the types of relatives are limited, and thus relative object-oriented diagrams can correspond to the respective relatives in a 1:1 correspondence. However, since participles are basically based on verbs, there exist as many participles as there are types of verbs. The participles may be classified into two types: a passive type and an active type. They have the structure "the term A before a participle + the participle + the term B after the participle." The passive type usually has a structure in which the term A and the participle have the relationship of "an object + a verb," and the active type usually has a structure in which the term A and the participle have the relationship of "a subject + a verb." Accordingly, corresponding participial object-oriented diagrams may be generated in consideration of the relationships of the passive and active types.
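The passive/active classification above amounts to a simple selection rule. The following Python fragment is a hypothetical sketch (the function and diagram names are illustrative, not part of the disclosure) of how a system might choose a participial object-oriented diagram from the relationship between the term A before the participle and the participle itself:

```python
def participial_diagram_type(term_a_role: str) -> str:
    """Choose a participial object-oriented diagram by the relation
    between the term A before the participle and the participle:
    'object'  -> term A is the object of the verb  -> passive type
    'subject' -> term A is the subject of the verb -> active type
    """
    if term_a_role == "object":
        return "passive-participial-diagram"
    if term_a_role == "subject":
        return "active-participial-diagram"
    raise ValueError(f"unknown relation: {term_a_role!r}")

# e.g. "the window broken by the ball": "window" is the object of "break"
broken_type = participial_diagram_type("object")
# e.g. "the man calling my name": "man" is the subject of "call"
calling_type = participial_diagram_type("subject")
```

A fuller implementation would need the verb itself as well, since the disclosure notes that there are as many participles as there are types of verbs.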
Here, the emphasis on the elements may be performed using a method of displaying only the target learning element, as well as the above-described highlighting methods. For example, a method of showing only the element of the sentence corresponding to an object-oriented diagram (the target learning element), without showing the entire sentence, may be included. When only one element of a sentence, rather than the entire sentence, appears visually, the element that appears is thereby emphasized. When voice corresponding to a sentence is recorded and played, the voice corresponding to an element of the sentence may be played in synchronization with the object-oriented diagram.
The user interface module 350 allows a learner to freely set the object-oriented diagrams and the image parts shown in FIGS. 3 to 7. The user interface module 350 provides the user devices 10a to 10n, which are connected thereto over a network, with previously prepared image parts and object-oriented diagrams. Learners who use the user devices 10a to 10n can select image parts or object-oriented diagrams according to their preference. For example, the user interface module 350 presents to a learner a plurality of previously prepared object-oriented diagrams that can be substituted for the object-oriented diagram 400a shown in FIG. 3. The learner may select a desired object-oriented diagram through the user interface module 350. In this case, the diagram combination module 320 recreates the learning content by combining the one or more object-oriented diagrams, target learning sentences, image parts, and voice data selected by the learner, and provides the created learning content to the user devices 10a to 10n. Here, the object-oriented diagrams presented by the user interface module 350 to the user devices 10a to 10n are illustrated in FIG. 8.
Furthermore, the user interface module 350 may freely register one or more image parts or object-oriented diagrams desired by a learner, and may apply the registered entities to a target learning sentence. For example, when a learner desires to substitute his actual image for the image part for the subject (I) in a target learning sentence, the user interface module 350 may receive an image of the learner, uploaded through the corresponding one of the user devices 10a to 10n, and may apply the image part selected by the learner (the image part corresponding to the subject (I)) to the user device of that learner. This will be described with reference to FIGS. 11 and 12 together.
FIG. 11 is a diagram illustrating an example of an interface screen that is provided by the interface module 350 to the user devices 10a to 10n.
The illustrated interface is provided with a file open menu option 41, an image combination menu option 42, and a preview menu option 43. Furthermore, a target learning sentence to be learned by a user is displayed on the interface provided by the interface module 350 to the user devices 10a to 10n, and the nouns 51 to 54 of the displayed target learning sentence have box frames. The nouns having the box frames may be selected using a mouse cursor 60. When a noun is selected, the selected noun and replaceable images are displayed in an image box 80. For example, when a learner selects the word "lake" from among the nouns provided in the target learning sentence, the word "lake" and the replaceable images 71, 72, and 73 are displayed in the image box 80. A user may view and select further images using a direction key 74. Thereafter, the user selects a desired one from among the displayed images using the mouse cursor 60. When the user selects the image 71 using the mouse cursor 60 and then selects the preview menu option 43, the user can view a preview screen, as indicated by the reference numeral "90."
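The noun-replacement interaction above reduces to a substitution over the sentence and its noun-to-image mapping. The following Python sketch is illustrative only (the function name, identifiers, and data shapes are assumptions, not part of the disclosure), modeling the "lake" to "swimming pool" replacement shown in FIGS. 11 and 12:

```python
def substitute(sentence: str, image_map: dict, noun: str,
               new_term: str, new_image: str):
    """Replace a selected noun in the target learning sentence and
    swap its associated image part, returning new copies so the
    original learning content is left untouched."""
    if noun not in image_map:
        raise KeyError(f"{noun!r} is not a replaceable noun")
    new_map = dict(image_map)        # copy, then swap the image part
    del new_map[noun]
    new_map[new_term] = new_image
    return sentence.replace(noun, new_term), new_map

sentence = "I swim in the lake all morning"
images = {"I": "img_me", "lake": "img_lake"}

# Learner picks image 71 ("swimming pool") for the noun "lake".
new_sentence, new_images = substitute(
    sentence, images, "lake", "swimming pool", "img_pool_71"
)
```

Returning new copies rather than mutating in place mirrors the behavior described for the diagram combination module 320, which recreates learning content from the learner's selections instead of altering the stored original.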
FIG. 12 illustrates an example in which an image selected by a learner is applied to actual learning content. The illustrated image is similar to that of FIG. 3, but it can be seen that the noun "lake" of the target learning sentence and the image corresponding to it have been replaced with the term "swimming pool" and an image 71 corresponding to the term "swimming pool," respectively. Through this, a learner can perform various types of learning using various situations or characters.
The highlighting module 330 highlights the word (or phrase or clause; this qualification is omitted hereinafter) corresponding to an object-oriented diagram whenever the object-oriented diagram is displayed (or added) on the desktop. The emphasis on words is performed in the same manner as described in 1) to 4) above. A learner sequentially understands situations after sequentially viewing the words that are displayed in the order of the construction of the sentence. The learning content presented in the present invention places great emphasis on a method in which a learner takes the words that come after the subject as they are and understands the language, rather than a method in which a learner views the entire sentence ("I swim in the lake all morning until my grandmother calls me for lunch.") and only then understands its meaning. That is, a learner pays attention to the words (highlighted words) that come in sequence ("swim in the lake all morning ... for lunch") beginning from the subject 200a, that is, "I." Whenever a word is highlighted, an image part for describing the highlighted word is displayed on the desktop. Here, when two or more image parts are displayed on the desktop, an object-oriented diagram for describing the association between the image parts is displayed along with them.
The voice combination module 340 synchronizes voice data with any one of a highlighted word (phrase or clause), an object-oriented diagram, and an image part. When any one of a word (phrase or clause), an object-oriented diagram, and an image part synchronized by the voice combination module 340 is displayed on the screen (or a word is highlighted), the corresponding voice data is played in the user devices 10a to 10n, so a learner can perform learning through visual and auditory association.
As a result, a learner can pay attention to the highlighted word of a target learning sentence through the display device provided in a user device. At the same time, the learner can listen to the played-back sound of the highlighted word.
FIG. 9 is a conceptual diagram of a language learning content providing system using image parts according to another embodiment of the present invention. The illustrated language learning content system is similar to that described in conjunction with FIGS. 1 to 8, but differs in that the illustrated system is based on a web server 50 that provides learning content to a user device 20, a user interface screen is provided to the user device 20, and a user can select image parts and object-oriented diagrams through the user interface screen. Therefore, redundant descriptions given in conjunction with FIGS. 1 to 8 are omitted here.
In the present embodiment, a learner can select target learning content through the user interface screen and can upload desired images to the web server 50, and the web server 50 changes the image parts and object-oriented diagrams included in the learning content to the images provided by the learner. The changed learning content is provided to a user device.
Most VGA graphics cards provided in learners' computers are equipped with an overlay function, and various effects can be applied to original images using this function. For example, current graphics cards have a function of overlaying Korean subtitles on a foreign movie (for example, an American movie) while the movie is played, thus displaying the Korean subtitles on the playback window in which the movie is played. Current graphics cards can perform this processing on still images as well as moving images. Using this, the diagram combination module 320 performs overlay processing on an object-oriented diagram and provides the processing results to a learner's device (for example, a learner's computer) along with the individual image parts, and thus the object-oriented diagrams shown in FIGS. 3 to 7 can be displayed on a monitor provided in the learner's device (for example, a computer).
Of course, the elements of a corresponding sentence may also be displayed along with the overlaid object-oriented diagrams. Furthermore, voice corresponding to the elements of a sentence may be played.
This is performed by the above-described voice combination module 340. Image parts corresponding to the voice are sequentially displayed while the voice is played, object-oriented diagrams are displayed between and within the image parts, and the displayed object-oriented diagrams and/or the words (phrases or clauses) corresponding to the image parts are highlighted. Here, since the overlaid object-oriented diagrams may take the form of independent files, they are preferably provided in the language learning content providing system 300 using image parts in advance. Alternatively, files provided by a learner's device may be received and substituted for those provided in the language learning content providing system 300 using image parts.
Alternatively, several object-oriented diagrams may be provided in the language learning content providing system 300 using image parts in advance, and a learner may select desired ones from among them. In this case, the language learning content providing system 300 using image parts presents a plurality of object-oriented diagrams, capable of replacing a single object-oriented diagram, to a user, and the user can select a desired one from among them. Of course, the overlaid object-oriented diagrams must be synchronized with the target learning sentence. The diagram combination module 320 combines the object-oriented diagrams selected by a learner with the target learning sentence, and provides the combined results to the learner's user device.
As described above, an overlaid object-oriented diagram, created by a learner or a third party, may be User Created Content (UCC) . Further, a learner or a third party may create various pieces of learning content with which not only overlaid object-oriented diagrams but also object- oriented diagrams provided by the learning content providing system 300 of the present invention are combined. The learning content providing system according to the present invention may also provide various pieces of learning content, created by the user, over a network so that a plurality of third parties can use them.
FIG. 10 is a conceptual block diagram of still another embodiment of the present invention. The illustrated embodiment is similar to that shown in FIG. 1, but differs from it in that learning content is provided to portable devices (for example, mobile phones, PDAs, PMPs, etc.) that can access the wireless Internet. Therefore, redundant descriptions related to FIGS. 1 to 8 are referenced here. The same reference numerals are used for elements having the same functions.
When each of the portable devices 60a to 60n requests access, the language learning content providing system 300 using image parts according to the present embodiment acquires information about the learner's portable device from a mobile communication company server 500, which provides a mobile communication service to the portable device, and converts the resolution of the learning content into a resolution that can be supported by the display device 61a of the portable device. For this purpose, the database 310 shown in the drawing of the present embodiment is provided with information about the portable devices of each manufacturer. An image processing module 360 acquires information about the learner's portable device from the mobile communication company server 500, and detects the resolution of each of the portable devices 60a to 60n by comparing this information. Preferably, the image processing module 360 is provided with a phone information acquisition module 365. The phone information acquisition module 365 connects to the mobile communication company server 500 over a network, and acquires information about the corresponding one of the portable devices 60a to 60n through the mobile communication company server 500 when that portable device requests access to the language learning content providing system using image parts over a wireless network.
Meanwhile, the display devices 61a provided in the portable devices 60a to 60n can have different reproducible colors; depending on the model, the color reproduction capability may range from 256 colors to 16.7 million colors. Thus, the image processing module 360 determines the color rendering range for the image parts, target learning sentences, and object-oriented diagrams according to the color reproduction capability of the portable device, with reference to the information about that device. Accordingly, learners can use the learning content anywhere and at any time, regardless of the color reproduction capability and resolution of their portable devices 60a to 60n.
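As one possible realization of this color-range adjustment (the patent names no algorithm), a sketch that quantizes 24-bit RGB pixels onto a fixed 256-color palette using 3-3-2 bit truncation; the function names and the choice of a 3-3-2 palette are illustrative assumptions:

```python
def quantize_rgb_332(r: int, g: int, b: int) -> tuple:
    """Map a 24-bit RGB triple onto a fixed 256-color (3-3-2 bit) palette,
    returning the quantized color as an RGB triple."""
    # Keep the top 3 bits of red and green and the top 2 bits of blue.
    r_q = (r >> 5) << 5
    g_q = (g >> 5) << 5
    b_q = (b >> 6) << 6
    return (r_q, g_q, b_q)

def reduce_image_colors(pixels, palette_size: int):
    """Reduce a sequence of RGB pixels to what the target display can show.
    Devices that already render full 24-bit color get the pixels unchanged."""
    if palette_size >= 16_777_216:
        return list(pixels)
    return [quantize_rgb_332(r, g, b) for (r, g, b) in pixels]
```

A production system would more likely use an adaptive palette (e.g., median-cut quantization) for better visual quality, but the fixed-palette version shows the idea with no dependencies.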

Claims

[CLAIMS]
[Claim 1]
A language learning content providing system using image parts, wherein: the system is implemented through a server that provides learning content, including target learning sentences and a plurality of image parts for describing content of each of the target learning sentences, to a learner's user device; the server is configured to sequentially provide the plurality of image parts to the user device in conformity with development of the content of the target learning sentence, so that the plurality of image parts is displayed on the user device; and the respective image parts, displayed in conformity with the development of the content of the target learning sentence, are displayed in synchronization with object-oriented diagrams that describe correlations between the image parts.
[Claim 2]
The language learning content providing system using image parts according to claim 1, wherein the object-oriented diagrams are synchronized with the target learning sentence by the server.
[Claim 3]
The language learning content providing system using image parts according to claim 1, wherein: the server is provided with voice data for the target learning sentence; and the voice data is synchronized with any of the image parts and the object-oriented diagrams.
[Claim 4]
The language learning content providing system using image parts according to claim 1, wherein: the server is provided with a plurality of image files for the image parts; and when the user device selects any one of the plurality of image files, a selected image file is synchronized with the target learning sentence.
[Claim 5]
The language learning content providing system using image parts according to claim 1, wherein the learning content includes the target learning sentence, and further includes any of image parts, object-oriented diagrams and voice data.
[Claim 6]
The language learning content providing system using image parts according to claim 1, wherein the target learning sentence is English.
[Claim 7]
The language learning content providing system using image parts according to claim 1, wherein each of the object-oriented diagrams is any one of a figure, an arrow, a straight line and a curve.
[Claim 8]
The language learning content providing system using image parts according to claim 1, wherein: the target learning sentence is divided into any of words, phrases, and clauses; and the resulting elements are respectively synchronized with the object-oriented diagrams by the server.
[Claim 9]
The language learning content providing system using image parts according to claim 8, wherein the server processes a word, a phrase or a clause, synchronized with each of the object-oriented diagrams, so that any one of a font, a letter size, a letter thickness, a letter incline, an underline, shading and a letter color thereof is different from that of one or more adjacent words, phrases or clauses.
[Claim 10]
The language learning content providing system using image parts according to claim 1, wherein the image parts are sequentially provided and displayed on the user device, and are cumulatively displayed.
[Claim 11]
A language learning content providing system using image parts, comprising: a database provided with target learning sentences, a plurality of image parts for describing each of the target learning sentences, and object-oriented diagrams for describing correlations between the image parts; and a diagram combination module for creating learning content by synchronizing the image parts, which are provided to and displayed on a learner's user device in a sequence of development of the target learning sentence, with the object-oriented diagrams and providing the created learning content to the user device.
[Claim 12]
The language learning content providing system using image parts according to claim 11, further comprising a highlighting module for highlighting elements of the target learning sentence, synchronized with the object-oriented diagrams, when the object-oriented diagrams are displayed on the user device in synchronization with the target learning sentence; wherein each of the elements of the target learning sentence is any one of a word, a phrase and a clause.
[Claim 13]
The language learning content providing system using image parts according to claim 12, wherein the highlighting module changes any one of a color and a size of a word constituting each of the elements.
[Claim 14]
The language learning content providing system using image parts according to claim 12, wherein the highlighting module increases a size of a word constituting each element compared to that of adjacent words.
[Claim 15]
The language learning content providing system using image parts according to claim 11, wherein: the database is provided with voice data for the target learning sentence; and the diagram combination module synchronizes any of the image parts and the object-oriented diagrams with the voice data.
[Claim 16]
The language learning content providing system using image parts according to claim 11, wherein: the database is provided with a plurality of image files for the image parts; and any one of the plurality of image files for the image parts is selected by a learner's user device and is displayed on the user device.
[Claim 17]
The language learning content providing system using image parts according to claim 11, wherein the target learning sentence is English.
[Claim 18]
The language learning content providing system using image parts according to claim 11, wherein each of the object-oriented diagrams is any one of a figure, an arrow, a straight line and a curve.
[Claim 19]
The language learning content providing system using image parts according to claim 11, wherein the image parts are sequentially provided and displayed on the user device, and are cumulatively displayed.
[Claim 20]
A language learning content providing system using image parts, comprising: a diagram combination module for creating learning content by synchronizing each target learning sentence, a plurality of image parts for describing the target learning sentence, and an object-oriented diagram for describing correlations between the image parts; and an image processing module for acquiring information about a learner's portable device from a mobile communication company and converting images into images that are reproducible in the portable device with reference to the information.
[Claim 21]
The language learning content providing system using image parts according to claim 20, wherein the image processing module accesses a mobile communication company server for serving the portable device, and includes a phone information acquisition module for acquiring model information and resolution information of the portable device from the mobile communication company server.
[Claim 22]
The language learning content providing system using image parts according to claim 21, wherein the image processing module changes sizes of the image parts, the target learning sentence and the object-oriented diagrams in conformity with the resolution information.
[Claim 23]
The language learning content providing system using image parts according to claim 21, wherein the image processing module changes colors of the image parts, the target learning sentence and the object-oriented diagrams to colors that are reproducible in the portable device in conformity with the resolution information.
[Claim 24]
The language learning content providing system using image parts according to claim 20, wherein each of the object-oriented diagrams is any one of a figure, an arrow, a straight line and a curve.
[Claim 25]
The language learning content providing system using image parts according to claim 20, wherein: the target learning sentence is divided into any of words, phrases, and clauses; and the resulting elements are respectively synchronized with the object-oriented diagrams by the diagram combination module.
[Claim 26]
The language learning content providing system using image parts according to claim 20, further comprising a highlighting module for highlighting elements of the target learning sentence, synchronized with the object-oriented diagrams, when the object-oriented diagrams are displayed on the user device in synchronization with the target learning sentence; wherein each of the elements of the target learning sentence is any of a word, a phrase and a clause.
[Claim 27]
The language learning content providing system using image parts according to claim 26, wherein the highlighting module changes any one of a color and a size of a word constituting each element .
[Claim 28]
The language learning content providing system using image parts according to claim 26, wherein the highlighting module increases a size of a word constituting each element compared to that of adjacent words.
[Claim 29]
The language learning content providing system using image parts according to claim 20, wherein the diagram combination module combines voice data for the target learning sentence with the learning content.
PCT/KR2007/006183 2006-12-01 2007-12-03 Language learning contents providing system using image parts Ceased WO2008066359A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009539192A JP5553609B2 (en) 2006-12-01 2007-12-03 Language learning content provision system using partial images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2006-0120939 2006-12-01
KR1020060120939A KR100798153B1 (en) 2006-12-01 Language learning content providing system using partial images

Publications (1)

Publication Number Publication Date
WO2008066359A1 (en) 2008-06-05

Family

ID=39219351

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2007/006183 Ceased WO2008066359A1 (en) 2006-12-01 2007-12-03 Language learning contents providing system using image parts

Country Status (4)

Country Link
JP (1) JP5553609B2 (en)
KR (1) KR100798153B1 (en)
CN (1) CN101553832A (en)
WO (1) WO2008066359A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011128362A (en) * 2009-12-17 2011-06-30 Cocone Corp Learning system

Families Citing this family (16)

Publication number Priority date Publication date Assignee Title
KR100983109B1 (en) * 2008-04-30 2010-09-20 주식회사 세스교육 English word memorization system and online english word memorization method
KR101197106B1 (en) * 2010-09-02 2012-11-07 주식회사 오르사 Method for studying words using color contrast
KR101151886B1 (en) * 2011-07-07 2012-07-09 주식회사 나인드림스 E-book system for language learning and method for offering thereof
KR101279704B1 (en) 2012-05-02 2013-06-27 (주) 윈코어 Method for playing contents of e-learning and recording medium storing program thereof
KR101467937B1 (en) * 2013-03-06 2014-12-02 최정완 System and method for learning a natural science using part images
JP6450127B2 (en) * 2014-09-30 2019-01-09 正文 立原 Language training device
KR101891244B1 (en) * 2016-04-14 2018-08-27 주식회사 젭스 English study material
JP7079439B2 (en) * 2017-11-08 2022-06-02 修 木我 English learning devices, their control methods, programs, and English learning toys
KR102552857B1 (en) * 2018-05-15 2023-07-10 (주)우리랑코리아 Subtitle processing method for language education and apparatus thereof
KR102236847B1 (en) 2019-01-30 2021-04-06 주식회사 이볼케이노 Language learning system using concept maker of words
KR102307779B1 (en) 2019-03-14 2021-10-01 주식회사 이볼케이노 System for improving efficiency of language acquisition using the concept-image and method using the same
KR102444093B1 (en) * 2019-05-21 2022-09-16 주식회사 이볼케이노 English exam learning system using images
KR102377787B1 (en) * 2020-03-26 2022-03-23 주식회사 이볼케이노 Image-structuring system for learning english sentences
KR20220125438A (en) 2021-03-05 2022-09-14 김흥섭 Method and system for learning english using word order map
KR102389153B1 (en) * 2021-06-21 2022-04-21 김영은 Method and device for providing voice responsive e-book
KR102647555B1 (en) * 2023-11-15 2024-03-14 김상원 System and Method for Providing Foreign Language Learning Using Animation

Citations (4)

Publication number Priority date Publication date Assignee Title
US4695975A (en) * 1984-10-23 1987-09-22 Profit Technology, Inc. Multi-image communications system
KR20030042996A (en) * 2001-11-26 2003-06-02 인벤텍 코오포레이션 Story interactive grammar teaching system and method
KR20050010541A (en) * 2003-07-21 2005-01-28 인벤텍 베스타 컴파니 리미티드 Animation-Assisted English Learning System And Method For Executing On Computers
JP2006189522A (en) * 2004-12-29 2006-07-20 Toichiro Sato Educational material for supporting learning

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
JPH06175572A (en) * 1992-02-26 1994-06-24 Nobuyoshi Nakamura Recording medium for education
JPH10283347A (en) * 1997-04-01 1998-10-23 Arusu:Kk Sentence display method and display device, and computer-readable recording medium recording sentence display control data
JP2000089660A (en) * 1998-09-09 2000-03-31 Matsushita Electric Ind Co Ltd Sign language learning support device and recording medium storing sign language learning support program
JP3439429B2 (en) * 2000-06-05 2003-08-25 エヌイーシーカスタマックス株式会社 Standby screen creation device, method, and recording medium recording the program
KR20020072418A (en) * 2001-03-09 2002-09-16 윤명석 System for educating interactive english with content-area instruction type and method of the same over network
JP2003029619A (en) * 2001-07-11 2003-01-31 Lexpo:Kk Language practice equipment
JP2003131552A (en) * 2001-10-24 2003-05-09 Ittetsu Yoshioka Language learning system and language learning method
JP2003323102A (en) * 2002-05-06 2003-11-14 Toichiro Sato Method for training listening comprehension of foreign language by discriminating tone, recording medium and accompanying textbook for the same
JP2004212646A (en) * 2002-12-27 2004-07-29 Casio Comput Co Ltd Voice display output control device and voice display output control processing program
JP4423859B2 (en) * 2003-01-31 2010-03-03 パナソニック株式会社 Image server
US20050069849A1 (en) 2003-09-30 2005-03-31 Iode Design Computer-based method of improving reading comprehension
JP2006215511A (en) * 2005-02-06 2006-08-17 Toichiro Sato Learning support teaching material


Also Published As

Publication number Publication date
JP5553609B2 (en) 2014-07-16
KR100798153B1 (en) 2008-01-28
JP2010511896A (en) 2010-04-15
CN101553832A (en) 2009-10-07

Similar Documents

Publication Publication Date Title
WO2008066359A1 (en) Language learning contents providing system using image parts
US6377925B1 (en) Electronic translator for assisting communications
Powers Transcription techniques for the spoken word
Knoblauch et al. Video analysis
US20070255570A1 (en) Multi-platform visual pronunciation dictionary
Khuddro Linguistic issues and quality assessment of English-Arabic audiovisual translation
WO2008066361A1 (en) Language learning contents provider system
Li et al. Effects of monolingual and bilingual subtitles on L2 vocabulary acquisition
Steinfeld The benefit of real-time captioning in a mainstream classroom as measured by working memory.
KR101281621B1 (en) Method for teaching language using subtitles explanation in video clip
Stone Pointing, telling and showing: Multimodal deictic enrichment during in-vision news sign language translation
Wald Learning through multimedia: Speech recognition enhancing accessibility and interaction
KR100505346B1 (en) Language studying method using flash
KR20140087951A (en) Apparatus and method for learning english grammar by using native speaker's pronunciation data and image data.
McKee Footing shifts in American sign language lectures
Steinfeld The benefit to the deaf of real-time captions in a mainstream classroom environment
Kato et al. Sign Language Writing System: Focus on the Representation of Sign Language-Specific Features
Zhu et al. The effects of watching subtitled videos on the perception of L2 connected speech by L1 Chinese-L2 English speakers
KR20140087953A (en) Apparatus and method for language education by using native speaker's pronunciation data and thoughtunit
Nwike et al. Translating social identity: A sociolinguistic analysis of code-switching and politeness strategies in multilingual subtitling
KR20140073768A (en) Apparatus and method for language education using meaning unit and pronunciation data of native speakers
Tunold Captioning for the DHH
Nagashima et al. 4 Social media English teaching and native-speakerism in Japan
Al-Junaydi Towards English/Arabic Subtitling Standards
Isbell et al. Visual Input and Second Language Listening

Legal Events

Date Code Title Description
WWE  Wipo information: entry into national phase
     Ref document number: 200780044542.7; Country of ref document: CN
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 07851176; Country of ref document: EP; Kind code of ref document: A1
ENP  Entry into the national phase
     Ref document number: 2009539192; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 07851176; Country of ref document: EP; Kind code of ref document: A1