US20170053541A1 - Interactive educational system and method - Google Patents
- Publication number
- US20170053541A1 (application US 14/588,540)
- Authority
- US
- United States
- Prior art keywords
- narrative
- hidden
- illustration
- user
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Definitions
- the period at the end of the sentence “A green giraffe is sitting outside” is revealed as a result of the user uttering or typing in the words “outside period.”
- commas are revealed, one at a time, when the user utters or types in the word “comma” with the word that immediately precedes the comma.
- the hidden text is “A bear, Teddy, loves ice cream.” Punctuation is revealed as a result of the user uttering or typing in the words “A bear comma Teddy comma loves ice cream period.” Accordingly, users can learn proper punctuation.
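The punctuation behavior described above can be sketched in a few lines. The following is an illustrative assumption, not the patent's code: a small table maps spoken punctuation tokens ("comma", "period") onto marks that attach to the immediately preceding word.

```python
# A sketch (not the patent's code) of the punctuation behavior described
# above: spoken tokens such as "comma" and "period" are attached as
# punctuation marks to the immediately preceding word.

SPOKEN_PUNCTUATION = {"comma": ",", "period": "."}

def spoken_to_text(tokens):
    """Convert a stream of spoken words into punctuated text."""
    words = []
    for token in tokens:
        mark = SPOKEN_PUNCTUATION.get(token)
        if mark is not None and words:
            words[-1] += mark  # e.g. "Teddy" + "," -> "Teddy,"
        else:
            words.append(token)
    return " ".join(words)

print(spoken_to_text("A bear comma Teddy comma loves ice cream period".split()))
# -> A bear, Teddy, loves ice cream.
```

The reveal logic would then compare this punctuated text against the hidden narrative, so that each mark is only shown once the user supplies it together with the preceding word.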
- the user can select a hard, medium, or easy setting, where, for example, no clues are provided in the hard setting, some clues are provided in the medium setting, and all available clues are provided in the easy setting.
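The clue settings, together with the timed-display variants described elsewhere in the specification, might be represented as a small configuration table. The concrete time values and field names below are assumptions for illustration only; the specification describes the settings qualitatively.

```python
# Hypothetical configuration table for the easy/medium/hard settings.
# The time values are illustrative; the specification only gives
# approximate example periods (e.g. 60, 30, or 15 seconds).

from dataclasses import dataclass

@dataclass(frozen=True)
class DifficultySetting:
    clues: str              # "all", "some", or "none"
    display_seconds: float  # how long the illustration stays on screen

SETTINGS = {
    "easy":   DifficultySetting(clues="all",  display_seconds=60.0),
    "medium": DifficultySetting(clues="some", display_seconds=30.0),
    "hard":   DifficultySetting(clues="none", display_seconds=15.0),
}

def display_period(level: str) -> float:
    return SETTINGS[level].display_seconds
```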
- the hidden narrative describes only viewable features of the illustration, not additional story elements that cannot be observed, inferred or otherwise discerned from the illustration.
- viewable features is intended to indicate features that can be observed, inferred, or otherwise discerned from the visual depiction provided by the illustration.
- for the exemplary illustration 200 of FIG. 2 , any corresponding hidden narrative would not include any discussion of a house, a tree, or any other additional story elements not observable or inferable from the illustration.
- in FIG. 9 , a process flow chart is presented for an exemplary method of storytelling, in accordance with the present invention.
- storytelling is intended to mean conveying related events, description, and/or facts, real or imagined, in words and/or still or dynamic images.
- the process begins at step 900 and immediately proceeds to step 902, where an author provides an illustration configured to be viewable on a display without a corresponding narrative.
- the author provides the illustration by obtaining the illustration from an artist that creates the illustration.
- the author creates the illustration himself through, for example, sketching, drawing, painting, using computer drawing tools, and the like.
- in step 904, the author creates a hidden narrative describing only viewable features of the illustration.
- the author creates the hidden narrative first and then, subsequently, the illustration is created or provided to correspond to the hidden narrative such that each of the plurality of words in the hidden narrative is observable from the illustration.
- the illustrations are fairly simplistic, such as the sitting green giraffe depicted in FIGS. 2-7 . These fairly simplistic illustrations may be included in, for example, younger children's hidden narrative stories, or for beginning foreign language students.
- the illustrations can be much more complex and involved, including multiple subjects and objects with very detailed scenes. These more complex illustrations can be included in, for example, more adult hidden narrative stories, or for advanced foreign language students. Creating stories from only viewable features of the illustrations is a novel and inventive new method of storytelling. The process ends at step 906 .
- Memory 1006 associated with the device 1000 may be, for example, one or more buffers, a flash memory, or volatile memory, such as random access memory (RAM).
- the electronic device 1000 may also include non-volatile storage.
- the non-volatile storage may represent any suitable storage medium, such as a hard disk drive or non-volatile memory, such as flash memory.
- software instructions associated with the speech recognition module 1014 are stored in non-volatile memory and are executable by the processing device 1008 .
- the speech recognition module 1014 is configured to receive at least a portion of a user's utterance, analyze the portion of the utterance, and identify a word corresponding to the portion of the utterance.
- in FIGS. 12-17 , another exemplary method of presenting an interactive story is illustrated in a series of screenshots.
- the exemplary method can be considered an interactive method of learning a medical procedure in accordance with an embodiment of the present invention.
- FIG. 12 shows a user interface displaying an illustration 1200 without a corresponding narrative.
- the illustration 1200 depicts a first step in a cardiopulmonary resuscitation (CPR) first aid medical procedure.
- FIG. 13 illustrates a user's utterance 1300 of the words “tilt head,” resulting in a determination that the user's utterance 1300 corresponds to words in the hidden text.
- the words “tilt head” are displayed on the display, while the unspoken words “lift chin, and check breathing” continue to remain hidden.
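A multi-word utterance such as “tilt head” could be handled by checking each hidden word against the set of spoken words. The sketch below is an assumed implementation, not the patent's; unspoken words are rendered as blanks standing in for the dotted hidden text of the figures.

```python
# Sketch (assumed implementation) of multi-word matching: every hidden
# word that appears in the utterance is revealed, while unspoken words
# are rendered as blanks.

def reveal_phrase(hidden_words, revealed, utterance):
    spoken = {w.lower() for w in utterance.split()}
    for i, word in enumerate(hidden_words):
        if word.strip(",.").lower() in spoken:
            revealed.add(i)
    return " ".join(w if i in revealed else "_" * len(w)
                    for i, w in enumerate(hidden_words))

step = "Tilt head, lift chin, and check breathing.".split()
print(reveal_phrase(step, set(), "tilt head"))
```

With the utterance “tilt head”, only the first two words of the CPR step become visible; “lift chin, and check breathing” stays blanked out until uttered.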
- FIG. 18 shows an illustration 1800 without a corresponding narrative.
- the illustration 1800 is a textual visual representation identifying a legal concept, namely a contract. It is noted that dotted line text appears below the illustration 1800 , representing a hidden narrative that is not yet revealed to the user via the display.
- FIG. 19 illustrates the user's utterance 1900 of the word “offer,” resulting in a determination that the user's utterance 1900 corresponds to words in the hidden text. As a result, the word “offer” is displayed to the user, while the words “acceptance,” “consideration,” and “mutuality” remain hidden until the user is able to input/utter each of the words in the hidden narrative.
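This flash-card behavior can be sketched as follows. The dotted-blank rendering and the function names are assumptions based on the figures as described, not the patent's own code.

```python
# Sketch of the legal-definition flash card: each element of the hidden
# narrative stays rendered as a dotted blank until the user inputs it.

hidden = ["offer", "acceptance", "consideration", "mutuality"]
revealed = set()

def render():
    # Unrevealed words appear as dotted blanks, one dot per letter.
    return " ".join(w if w in revealed else "." * len(w) for w in hidden)

def attempt(user_input):
    word = user_input.strip().lower()
    if word in hidden:
        revealed.add(word)
    return render()

print(attempt("offer"))  # "offer" is shown; the other elements stay hidden
```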
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Business, Economics & Management (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
A method of presenting an interactive story including displaying an illustration on a display without a corresponding narrative; receiving a user's input; determining whether the user's input corresponds to at least one word in a hidden narrative associated with the illustration; and displaying the word in the hidden narrative on the display in response to determining that the user's input corresponds to the word.
Description
- The present invention relates generally to educational systems and methods, and more particularly relates to a method of participating in an interactive story with illustrations and corresponding hidden narratives that are only revealed in response to a user's input, such as typing in and/or audibly reciting words from the hidden narrative.
- It is well known that educators encourage reading for the many substantial benefits it provides to both children and adults, such as vocabulary expansion, language development, and learning about various topics. In fact, reading is one of the primary methods of education today. Unfortunately, reading is not highly interactive. On the contrary, it is a rather passive activity in which words are observed by the eye, interpreted by the brain, and stored away. Creativity is required to write a story, but little is required to read one. Studies have shown that interactive learning improves critical thinking skills and develops imagination and logic more than passive learning does. Interactive learning also aids retention and memory and is sometimes considered more entertaining and engaging than passive learning techniques.
- To increase the level of interactivity and engagement in books, some eBooks have incorporated features such as 3-D animated graphic illustrations and options to click on illustrated characters or objects to receive an audible message from the character or other additional information. However, these features are not much more interactive than reading, merely requiring clicking and viewing. They do not demand a higher level of involvement: they do not require the reader to participate in the storytelling process, to pay close attention to the details of the illustration, or to engage fully in the creative storytelling process.
- Therefore, a need exists to overcome the problems with the prior art as discussed above.
- The invention provides an interactive educational system and method that overcomes the hereinafore-mentioned disadvantages of the heretofore-known devices and methods of this general type.
- With the foregoing and other objects in view, there is provided, in accordance with the invention, a method of associating a multi-word narrative with an illustration, the method including displaying an illustration without a corresponding narrative; receiving a user's input; determining whether the user's input corresponds to at least one word in a hidden narrative associated with the illustration; and displaying the word in the hidden narrative in response to determining that the user's input corresponds to the at least one word.
- In accordance with another feature of the present invention, the user's input includes an utterance of the word.
- In accordance with a further feature of the present invention, the hidden narrative describes only viewable features of the illustration.
- In accordance with a further feature of the present invention, the illustration is formed as a single still image.
- In accordance with yet another feature of the present invention, the illustration is formed as a sequence of still images.
- In accordance with another feature of the present invention, the illustration is formed as a video.
- In accordance with a further feature of the present invention, displaying the illustration further comprises displaying the illustration on a display for a predetermined time period.
- In accordance with the present invention, a method of learning includes viewing an illustration on a display without a corresponding narrative; and determining a hidden narrative associated with the illustration by inputting at least one word from the hidden narrative such that the word becomes viewable on the display in response to inputting the word.
- In accordance with the present invention, an education method includes providing an illustration configured to be viewable by a user without a corresponding narrative; and creating a hidden narrative describing only viewable features of the illustration, where each of a plurality of words in the hidden narrative is configured to become viewable in response to a user's input of said word.
- In accordance with the present invention, a system for presenting an interactive story is provided, the system including a display; a speech recognition module configured to receive at least a portion of an utterance, analyze the portion of the utterance, and identify a word corresponding to the portion of the utterance; a processing device communicatively coupled to the display and the speech recognition module. The processing device is configured to execute computer instructions, the computer instructions including instructions for displaying an illustration on the display without a corresponding narrative; and displaying on the display at least one word of a hidden narrative associated with the illustration in response to the speech recognition module identifying the word as corresponding to a user's utterance.
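The claimed arrangement of display, speech recognition module, and processing device might be wired together as in the sketch below. All class and method names here are hypothetical; a real system would use an actual speech recognizer in place of the stand-in.

```python
# Minimal hypothetical wiring of the claimed system: a speech-recognition
# component hands identified words to the processing logic, which reveals
# matching hidden-narrative words for the display.

class FakeSpeechRecognizer:
    """Stand-in for the speech recognition module: a real implementation
    would analyze audio and identify the words uttered."""
    def identify(self, audio_words):
        return [w.lower() for w in audio_words]

class InteractiveStory:
    """The processing device's logic: reveal hidden-narrative words that
    the recognizer identifies."""
    def __init__(self, narrative):
        self.words = narrative.split()
        self.revealed = set()

    def on_utterance(self, identified_words):
        for i, w in enumerate(self.words):
            if w.strip(".,").lower() in identified_words:
                self.revealed.add(i)

    def display(self):
        # Hidden words render as ellipses, echoing the dotted text in the
        # figures as described.
        return " ".join(w if i in self.revealed else "…"
                        for i, w in enumerate(self.words))

story = InteractiveStory("A green giraffe is sitting outside.")
recognizer = FakeSpeechRecognizer()
story.on_utterance(recognizer.identify(["giraffe"]))
print(story.display())  # … … giraffe … … …
```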
- Although the invention is illustrated and described herein as embodied in an interactive story system and method, it is, nevertheless, not intended to be limited to the details shown because various modifications and structural changes may be made therein without departing from the spirit of the invention and within the scope and range of equivalents of the claims. Additionally, well-known elements of exemplary embodiments of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.
- Other features that are considered as characteristic for the invention are set forth in the appended claims. As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one of ordinary skill in the art to variously employ the present invention in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting; but rather, to provide an understandable description of the invention. While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the following description in conjunction with the drawing figures, in which like reference numerals are carried forward. The figures of the drawings are not drawn to scale.
- Before the present invention is disclosed and described, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. The terms “a” or “an,” as used herein, are defined as one or more than one. The term “plurality,” as used herein, is defined as two or more than two. The term “another,” as used herein, is defined as at least a second or more. The terms “including” and/or “having,” as used herein, are defined as comprising (i.e., open language). The term “coupled,” as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. The term “providing” is defined herein in its broadest sense, e.g., bringing/coming into physical existence, making available, and/or supplying to someone or something, in whole or in multiple parts at once or over a period of time.
- As used herein, the terms “about” or “approximately” apply to all numeric values, whether or not explicitly indicated. These terms generally refer to a range of numbers that one of skill in the art would consider equivalent to the recited values (i.e., having the same function or result). In many instances these terms may include numbers that are rounded to the nearest significant figure. In this document, the term “longitudinal” should be understood to mean in a direction corresponding to an elongated direction of the display. The terms “program,” “software application,” and the like as used herein, are defined as a sequence of instructions designed for execution on a computer system. A “program,” “computer program,” “software application,” or “computer instructions” may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
- The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate various embodiments and explain various principles and advantages all in accordance with the present invention.
- FIG. 1 is a process flow chart representing an exemplary method of presenting an interactive story, in accordance with an embodiment of the present invention;
- FIG. 2 is a screenshot of an exemplary software application at least partially implementing the inventive process, the screenshot depicting an illustration with a hidden narrative presented on a display of a computing device in accordance with the present invention;
- FIG. 3 is a screenshot of the exemplary software application of FIG. 2 depicting a revealed word corresponding to an utterance/input by a user in accordance with the present invention;
- FIG. 4 is a screenshot of the exemplary software application of FIG. 2 depicting a visual clue including a leading question displayed to the user in accordance with the present invention;
- FIG. 5 is a screenshot of the exemplary software application of FIG. 2 depicting a revealed word corresponding to an utterance/input by the user in response to the visual clue in FIG. 4 in accordance with the present invention;
- FIG. 6 is a screenshot of the exemplary software application of FIG. 2 depicting a revealed word corresponding to a software feature that reveals verbs in accordance with the present invention;
- FIG. 7 is a screenshot of the exemplary software application of FIG. 2 depicting a visual clue including an underline of hidden words in accordance with the present invention;
- FIG. 8 is another process flow chart representing an exemplary method of learning, in accordance with an embodiment of the present invention;
- FIG. 9 is yet another process flow chart representing an exemplary method of storytelling, in accordance with an embodiment of the present invention;
- FIG. 10 is a block diagram of an exemplary electronic device, in accordance with an embodiment of the present invention;
- FIG. 11 is a block diagram of a data processing system on which the software application of FIG. 2 may run, in accordance with an embodiment of the present invention;
- FIG. 12 is a screenshot of another exemplary software application implementing an inventive electronic flash card depicting an illustration of a first step in a medical procedure with a hidden narrative in accordance with an embodiment of the present invention;
- FIG. 13 is a screenshot of the exemplary software application of FIG. 12 depicting the illustration of the first step in the medical procedure with revealed words corresponding to a user's utterance/input in accordance with an embodiment of the present invention;
- FIG. 14 is a screenshot of the exemplary software application of FIG. 12 depicting an illustration of a second step in the medical procedure with a hidden narrative in accordance with an embodiment of the present invention;
- FIG. 15 is a screenshot of the exemplary software application of FIG. 12 depicting the illustration of the second step in the medical procedure with revealed words corresponding to a user's utterance/input in accordance with an embodiment of the present invention;
- FIG. 16 is a screenshot of the exemplary software application of FIG. 12 depicting an illustration of a third step in the medical procedure with a hidden narrative in accordance with an embodiment of the present invention;
- FIG. 17 is a screenshot of the exemplary software application of FIG. 12 depicting the illustration of the third step in the medical procedure with revealed words corresponding to a user's utterance/input in accordance with an embodiment of the present invention;
- FIG. 18 is a screenshot of another exemplary software application implementing another inventive electronic flash card depicting the illustration in the form of a textual visual representation with a hidden narrative in the form of a legal definition in accordance with an embodiment of the present invention;
- FIG. 19 is a screenshot of the exemplary software application of FIG. 18 depicting the illustration with a revealed word corresponding to the user's utterance/input in accordance with an embodiment of the present invention;
- FIG. 20 is a screenshot of another exemplary software application implementing an inventive foreign language electronic flash card depicting the illustration in the form of a textual visual representation in a foreign language with a hidden narrative corresponding to a translation in a native language in accordance with an embodiment of the present invention; and
- FIG. 21 is a screenshot of the exemplary software application of FIG. 20 depicting the illustration with revealed words corresponding to the user's utterance/input in accordance with an embodiment of the present invention.
- While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the following description in conjunction with the drawing figures, in which like reference numerals are carried forward. It is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms.
- The present invention provides a novel and efficient method of presenting an interactive story that requires a user to participate in the storytelling process. Embodiments of the invention provide for displaying an illustration to the user with a hidden narrative and allowing the user to make observations of objects in the illustration and inferences from the illustration, such as the emotions of the characters, to unveil words in the hidden narrative. In addition, embodiments of the invention provide for a user input module, such as a speech recognition module, that is able to determine whether audible utterances by the user correspond to one or more hidden words in the hidden narrative, which results in the unveiling of the hidden word(s). In another embodiment, the user input module is configured to receive textual user input through typing, rather than audible user input. The inventive process develops attention skills, especially in children, by requiring them to pay close attention to objects, and allows users to develop a story through detailed observations of the illustration. The inventive process also develops writing, grammar, sentence-building, and storytelling skills and can be particularly useful as a language learning tool.
- Referring now to FIG. 1, one embodiment of the present invention is shown in a process flow chart. FIG. 1 shows several advantageous features of the present invention, but, as will be described below, the invention can be provided in several shapes, sizes, combinations of features and components, varying numbers and functions of the components, and can be performed in varying sequences and with different intermediary steps. The first example of a process for presenting an interactive story, as shown in FIG. 1, will be described in conjunction with FIGS. 2-7, which depict screenshots from an exemplary software application that may implement the inventive process. As used herein, the term “interactive” is defined as involving input from a user. The process flow starts at step 100 and moves directly to step 102, where an illustration is displayed on a display without a corresponding narrative. As used herein, the term “illustration” is defined as a visualization or a depiction, such as a drawing, sketch, painting, photograph, cartoon, video, computer graphic, or other kind of visual representation, whether still or dynamic, of things seen, remembered, or imagined, including a graphic depiction of words, text, and punctuation marks. In one embodiment, the illustration is formed as a single still image. As used herein, the term “still image” is intended to indicate any static visual representation, such as a sketch, photograph, drawing, painting, or the like. In another embodiment, the illustration is formed as a sequence of still images, such as, for example, a cartoon or other animation. In yet another embodiment, the illustration is formed as a video, such as, for example, a short video clip or television clip that can be used to prompt the user to recite the corresponding narrative by making observations of and inferences from the video or television clip.
- In a preferred embodiment, the illustration is associated with a descriptive narrative; however, the narrative is not displayed initially to a user, which provides the user with a chance to create the story himself, using the illustration as a guide. As used herein, the term “display” is defined as an electronic device for the visual presentation of data. In one embodiment, the display is a computer monitor. In another embodiment, the display is a television monitor. In yet another embodiment, the display is formed as a touchscreen of an electronic mobile device, such as a smart phone, a computer tablet, an e-Reader, or the like. As used herein, the term “corresponding narrative” is intended to indicate a narrative that is associated with and/or describes the illustration.
- Referring to FIG. 2, an exemplary illustration 200 is presented without a corresponding narrative. It is noted that dotted line text appears below the illustration 200, representing a hidden narrative that is not yet revealed to the user via the display. The exemplary illustration 200 is formed as a green giraffe that is sitting. In one embodiment, the illustration is displayed on the display only for a limited, predetermined time period so as to require the user to make observations quickly and through recollection. This option may be available in a medium or a hard setting to further challenge the user and improve the user's memory. In one embodiment, a software application implementing one or more of the features described herein can also include an easy, a medium, and a hard setting that allows the user to tailor the difficulty to his desired level. In one embodiment, the predetermined time period is less than or equal to approximately 60 seconds. In another embodiment, the predetermined time period is approximately 30 seconds. In yet another embodiment, the predetermined time period is approximately 15 seconds. In further embodiments, the predetermined time period is outside of these ranges. In another embodiment, the hard setting has a longer predetermined time period than the medium and easy settings. In a further embodiment, the user can select the predetermined time period. In yet a further embodiment, the time period is determined at random by the software application.
- In step 104, the user's input is received. The user's input can be received through, for example, a user input interface, such as a keyboard, a mouse, or a touchscreen, or through, for example, an audio input, such as a microphone. In one embodiment, the display and user input interface (and/or audio input) can be communicatively coupled to an electronic device, such as a computer tablet, a smart phone, a laptop, a personal computer, and the like, on which the software application can run/operate. In another embodiment, the display and user input interface (and/or audio input) can be communicatively coupled to an electronic device dedicated to presenting stories/books in accordance with the present invention, such as, for example, an e-Reader specifically dedicated to allowing users to read and interact with hidden text books of the present invention. As used herein, the term “input” is intended to indicate any information and/or data from the user that is put in or enters an electronic device for processing. In one embodiment, the input includes typed text/characters that are entered by the user through, for example, a keyboard or a touchscreen. In another embodiment, the input includes an utterance of at least a portion of a word by the user, which is received via a microphone. As used herein, the term “utterance” is defined as a vocal expression. In yet another embodiment, the present invention can include a speech recognition module configured to receive at least a portion of the utterance, analyze the portion of the utterance, and identify a word corresponding to the portion of the utterance. In this manner, the speech recognition module can assist in converting analog audio signals from the user to digital information that can be processed by a processor of the electronic device and used to implement one or more of the features described herein on the electronic device.
In further embodiments, the input can include other types of user communication methods. - In
step 106, the electronic device determines whether the user's input corresponds to at least one word in the hidden narrative associated with the illustration. As used herein, the term “corresponds” is defined as: to be substantially similar or equivalent, or to match. In one embodiment, the speech recognition module determines whether the user's utterance corresponds to a spoken word and the software application further determines whether the spoken word corresponds to a hidden word in the hidden narrative. In another embodiment, the software application determines whether the user's typed textual input corresponds to a hidden word in the hidden narrative. As used herein, the term “narrative” is defined as including a spoken, written, printed, or other textual account of related events, description, definitions, translations, and/or facts, real or imagined, and punctuation marks. As used herein, the term “hidden narrative” is defined as a narrative that is hidden from display with its corresponding illustration, but can be displayable with its corresponding illustration in response to the occurrence of one or more user inputs. In one embodiment, the narrative can include a portion of a fictional story, such as a children's story or a novel. In another embodiment, the narrative can include a portion of a description of a factual or historical event or a scientific or mathematical understanding. In yet another embodiment, the narrative can be formed as a portion of a story presented in a language other than a primary language of the user, i.e. a foreign language. Accordingly, embodiments of the present invention can be used as an interactive learning tool for learning foreign languages. In a further embodiment, the narrative includes one or more steps in a medical procedure. In yet another embodiment, the narrative includes a definition and the illustration is a graphical representation of the word being defined by the narrative. 
This can be considered a novel flash card embodiment of the present invention. Preferably, the narrative includes observations, descriptions, and inferences that can be made from the illustration. FIG. 2 illustrates an exemplary embodiment of a hidden narrative 202 corresponding to the illustration 200. For illustrative purposes, in FIGS. 2-7, the hidden narrative 202 is presented in a dotted line format when the hidden narrative 202 is hidden, i.e. not displayed, and is presented in normal black text when portions of the hidden narrative 202 are revealed. - In
step 108, the word in the hidden narrative is displayed on the display in response to determining that the user's input corresponds to the word. FIG. 3 presents a user interface where, in response to the utterance 300 of the word “giraffe,” the text GIRAFFE is revealed/displayed beneath the illustration 200. In some embodiments, the exact wording is not required to reveal a word in the hidden narrative. For example, if the root word (e.g. sit) is uttered/spoken, typed, or otherwise input by the user, the corresponding full word (e.g. sitting) in the hidden narrative may be revealed. In a foreign language learning embodiment, the user makes observations of the illustration, via utterances or typed input, in the user's native language, while the hidden narrative is in a foreign language. For example, in response to the user's utterance of “giraffe,” which is an English word, the hidden narrative is revealed on the display in the foreign language equivalent, such as “jirafa,” which is the Spanish word for “giraffe.” In a further embodiment, in response to the hidden text being revealed, the software application also causes a computerized audio playback of the foreign language word(s) as they are revealed, so that the user may hear the proper pronunciation and intonations and learn sentence building and spelling in the foreign language. - In one embodiment, punctuation is automatically revealed with the
illustration 200. In another embodiment, punctuation is revealed only when the word that immediately precedes or follows the punctuation is revealed. For example, the sentence “A green giraffe is sitting outside” ends with a period. Accordingly, the period at the end of the sentence would only be revealed as a result of the word “outside” being revealed, as the word “outside” immediately precedes the period. In yet another embodiment, punctuation is only revealed when the punctuation is uttered or typed in. For example, the period at the end of the sentence “A green giraffe is sitting outside” would only be revealed when the user utters or types in the word “period.” As another example, commas would only be revealed when the user utters or types in the word “comma.” In yet another embodiment, punctuation is only revealed when the punctuation is uttered or typed with the word that immediately precedes the punctuation. For example, in one embodiment, the period at the end of the sentence “A green giraffe is sitting outside” is revealed as a result of the user uttering or typing in the words “outside period.” In another embodiment, commas are revealed, one at a time, when the user utters or types in the word “comma” with the word that immediately precedes the comma. For example, the hidden text is “A bear, Teddy, loves ice cream.” Punctuation is revealed as a result of the user uttering or typing in the words “A bear comma Teddy comma loves ice cream period.” Accordingly, users can learn proper punctuation. In yet another embodiment, all punctuation is revealed as a result of the user uttering or typing in the word for the punctuation. For example, all commas in the hidden text are revealed as a result of the user uttering or typing in the word “comma.” - In additional embodiments, the software application can include clues to assist the user with formulating the story from the illustration. 
In one embodiment, clicking on an object reveals one or more hidden words corresponding to the selected object. In another embodiment, leading questions can be displayed to assist the user, such as, “What is the bear eating?” or “How many times should you push the patient's chest?”
FIG. 4 shows an exemplary leading question clue 400, prompting the user to answer as to what the giraffe is doing. The leading question clue 400 is in the form of a printed/visual question enclosed in a speech bubble displayed on the display proximate the illustration. In another embodiment, the leading question clue 400 can include a computerized digital signal from the electronic device simulating an audible vocalized question. -
FIG. 5 illustrates that, in response to the user's utterance 500 of the word “sitting,” the text SITTING is revealed/displayed within the hidden narrative. In yet another embodiment, the clue can be an option to reveal particular words by classification, e.g. by noun, pronoun, adjective, verb, adverb, conjunction, preposition, interjection, punctuation, and the like. FIG. 6 illustrates an option where the verbs 600 are revealed from the hidden narrative. In a further embodiment, the clue can include an underline of each unrevealed hidden word in the hidden narrative such that the user may observe the length and the number of words remaining in the hidden narrative. -
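The word-length underline clue could be rendered as in the following sketch; the data layout and names are assumptions for illustration, not details taken from the disclosure.

```python
# Sketch of the underline clue: each unrevealed hidden word is drawn as one
# underscore per letter, so the user can see how many words remain and how
# long each one is. Illustrative only.

def render_narrative(hidden_words, revealed):
    """Return the display string: revealed words shown, the rest masked."""
    return " ".join(w if i in revealed else "_" * len(w)
                    for i, w in enumerate(hidden_words))

words = "A green giraffe is sitting outside".split()
print(render_narrative(words, {2, 4}))  # _ _____ giraffe __ sitting _______
```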
FIG. 7 illustrates an option where the remaining unrevealed hidden words are underlined 700. In yet a further embodiment, the clue can include a distinct underline of each letter of each unrevealed hidden word in the hidden narrative such that the user may observe how many letters are in each unrevealed hidden word. In another embodiment, if the user utters, types, or otherwise inputs a synonym for a hidden word in the hidden narrative, a message is communicated, such as, “very close” or “almost, the word we're looking for begins with an S,” or some other encouragement that the user is on the right track. In yet another embodiment, if the user utters, types, or otherwise inputs a synonym for a hidden word, the hidden word is revealed and a message is communicated, such as, “close enough,” or “these words are synonyms.” In yet a further embodiment, the clue can be in the form of highlighting or pointing arrows towards objects or other visual aspects of the illustration that have not yet been observed and uttered by the user. In one embodiment, the user can be presented with a list of clue types and the user can select which clue type to activate. In another embodiment, the clues can be provided at timed intervals controlled by the software application. In yet another embodiment, the user can select a hard, medium, or easy setting, where, for example, no clues are provided in the hard setting, some clues are provided in the medium setting, and all available clues are provided in the easy setting. In a preferred embodiment, the hidden narrative describes only viewable features of the illustration, not additional story elements that cannot be observed, inferred or otherwise discerned from the illustration. As used herein, the term “viewable features” is intended to indicate features that can be observed, inferred, or otherwise discerned from the visual depiction provided by the illustration. For example, the exemplary illustration 200 (FIG. 2) shows a green giraffe sitting outside. Accordingly, any corresponding hidden narrative would not include any discussion of a house, a tree, or any other additional story elements not observable or inferable from the illustration. - In one embodiment, for grammar learning purposes, the illustration will include the text as a non-hidden narrative viewable on the display, while the punctuation is hidden. In another embodiment, all the text is presented in lower case so that the user is prompted to reveal the grammatically proper text format as a result of the user uttering or typing in the punctuation following the word that immediately precedes it. As an example, the illustration is displayed on the display to include the non-hidden text: “It was a nice and sunny day mama bear was making pancakes as little bear was walking down the stairs.” The grammatically proper text and punctuation is revealed as a result of the user uttering or typing in the words of the text, including the words for the punctuation. For example, as a result of the user uttering or typing in the following: “It was a nice and sunny day period,” a period will be revealed between the words “day” and “mama,” and the letter “m” in “mama” will change on the display to a capital letter “M” because of the grammatical rule that the first letter in a new sentence is always capitalized. Further, as a result of the user uttering or typing in the following: “mama bear was making pancakes comma as little bear was walking down the stairs period,” the comma and period are revealed on the display in the proper location, along with the text. Accordingly, users can learn proper grammar and punctuation.
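A minimal sketch of this grammar-learning behavior follows, under the assumption that hidden punctuation positions are stored as a map alongside the visible lower-case text; the names and data layout are illustrative, not specified by the disclosure.

```python
# Hedged sketch: visible lower-case text, hidden punctuation. Revealing a
# period also capitalizes the first word of the next sentence, per the
# grammatical rule described above. All names are assumptions.

PUNCT = {"period": ".", "comma": ","}

def reveal_punctuation(utterance, words, punct_after):
    """
    words:       visible lower-case words (mutated in place when revealed)
    punct_after: hidden map {word index: punctuation mark}
    utterance:   e.g. "day period" -- a word plus the punctuation name
    Returns the updated display string, or None if the attempt fails.
    """
    parts = utterance.lower().split()
    if len(parts) != 2 or parts[1] not in PUNCT:
        return None
    word, mark = parts[0], PUNCT[parts[1]]
    for i, w in enumerate(words):
        if w == word and punct_after.get(i) == mark:
            words[i] = w + mark
            if mark == "." and i + 1 < len(words):
                # First letter of a new sentence is capitalized.
                words[i + 1] = words[i + 1].capitalize()
            del punct_after[i]
            return " ".join(words)
    return None  # failed attempt (wrong word or wrong mark)

text = "it was a nice and sunny day mama bear was making pancakes".split()
print(reveal_punctuation("day period", text, {6: "."}))
# it was a nice and sunny day. Mama bear was making pancakes
```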
- In one embodiment, there may be a limited, predetermined number of attempts to reveal punctuation, after which the software application will prevent additional attempts and/or present a score determined by the number of attempts and the number of correct punctuation marks revealed. In another embodiment, the predetermined number of attempts depends on the level of complexity selected for a particular narrative, story, or session. For example, the hidden text may include ten commas, with fifteen total attempts allowed. In one exemplary embodiment, the illustration includes as viewable on the display the non-hidden text: “Marvin as he was known to those around him.” If the software application allows only a single attempt and the user utters or types in: “Marvin as he was known comma to those around him,” the user would receive a failing grade or score and not be allowed any more attempts, because the proper grammar and punctuation is: “Marvin, as he was known to those around him,” with the comma immediately following the word “Marvin.” As another example, an illustration including a 200-character narrative may include 20 commas, with the user taking 30 attempts to reveal all 20 commas (or other punctuation). After the number of attempts is depleted, the user may receive a score or a grade, such as 60% or a C, for example, where the score is determined by the number of attempts taken and/or the number of revealed punctuation marks and/or the amount of time required to reveal all the punctuation marks. The process ends at
step 110. - Referring to
FIG. 8, a process flow chart is presented for an exemplary method of using the software application, or of learning, in accordance with the present invention. The process begins at step 800 and immediately proceeds to step 802, where the user views the illustration on the display without the corresponding narrative. In step 804, the user determines the hidden narrative associated with the illustration by inputting at least one word from the hidden narrative such that the word becomes viewable on the display. As discussed above, the user makes this determination by observing the illustration and making inferences from the illustration and uttering, typing, or otherwise inputting his observations in the electronic device operating the software application. After the user determines/reveals the hidden narrative for the illustration by inputting each of the words in the hidden narrative, the user can scroll, tab, or swipe to the next page, where a second illustration is displayed without its corresponding narrative, and the process may repeat until the hidden narrative for the last page is determined. In this manner, the user can participate in the entire storytelling process from page to page by revealing the hidden narrative for each subsequent illustration. In one embodiment, the present invention can include a timed aspect, where the time period to complete the story is recorded so that the user can try to improve his time in subsequent runs, or compare his time to other users' times. The process ends at step 806. - Referring to
FIG. 9, a process flow chart is presented for an exemplary method of storytelling, in accordance with the present invention. As used herein, the term “storytelling” is intended to mean conveying related events, description, and/or facts, real or imagined, in words and/or still or dynamic images. The process begins at step 900 and immediately proceeds to step 902, where an author provides an illustration configured to be viewable on a display without a corresponding narrative. In one embodiment, the author provides the illustration by obtaining the illustration from an artist who creates the illustration. In another embodiment, the author creates the illustration himself through, for example, sketching, drawing, painting, using computer drawing tools, and the like. In step 904, the author creates a hidden narrative describing only viewable features of the illustration. Viewable features of the illustration can include expressly observable subjects or characteristics, such as a subject's name, color, activity, clothing, location, and the like. Viewable features of the illustration may also include observations that can be inferred or deduced from the illustration, such as an emotion of a character, where the illustration includes a smile, or a movement of the character, where movement lines are included in the illustration. Each of a plurality of words in the hidden narrative is configured to become viewable on the display in response to a user's input of said word. In one embodiment, the author creates the hidden narrative by writing or typing the hidden narrative describing detailed observable features of the illustration. In another embodiment, the author creates the hidden narrative first and then, subsequently, the illustration is created or provided to correspond to the hidden narrative such that each of the plurality of words in the hidden narrative is observable from the illustration.
In some embodiments, the illustrations are fairly simplistic, such as the sitting green giraffe depicted in FIGS. 2-7. These fairly simplistic illustrations may be included in, for example, younger children's hidden narrative stories, or for beginning foreign language students. In other embodiments, the illustrations can be much more complex and involved, including multiple subjects and objects with very detailed scenes. These more complex illustrations can be included in, for example, more adult hidden narrative stories, or for advanced foreign language students. Creating stories from only viewable features of the illustrations is a novel and inventive new method of storytelling. The process ends at step 906. - It is understood that the process steps described in reference to
FIGS. 1, 8, and 9 are exemplary and are not intended to limit the present invention. Other embodiments may be implemented in a different sequence, some steps may be omitted for brevity, and some steps may occur concurrently with one another. - Referring to
FIG. 10, an exemplary electronic device 1000 is illustrated in a block diagram. The exemplary electronic device 1000 includes a user input interface 1002, a network interface 1004, memory 1006, a processing device 1008, a display 1010, an audio input/output 1012, and a speech recognition module 1014. In one embodiment, the electronic device 1000 is a mobile electronic device, such as a computer tablet, a smart phone, a laptop, a dedicated e-Reader, or the like. In another embodiment, the electronic device 1000 is a personal computer. - The
user input interface 1002 functions to provide a user a method of providing input to the electronic device 1000. The user input interface 1002 may also facilitate interaction between the user and the device 1000. The user input interface 1002 may be a keypad providing a variety of user input operations. For example, the keypad may include alphanumeric keys for allowing entry of alphanumeric information (e.g. telephone numbers, contact information, text, etc.). The user input interface 1002 may include special function keys (e.g. a camera shutter button, volume control buttons, back buttons, home button, etc.), navigation and select keys, a pointing device, and the like. Keys, buttons, and/or keypads may be implemented as a touchscreen associated with the display 1010. The touchscreen may also provide output or feedback to the user, such as haptic feedback or orientation adjustments of the keypad according to sensor signals received by motion detectors, such as an accelerometer, located within the device 1000. - The network interfaces 1004 may include one or more network interface cards (NIC) or a network controller. In some embodiments, the
network interface 1004 may include a personal area network (PAN) interface. The PAN interface may provide the capability for the electronic device 1000 to network using a short-range communication protocol, for example, a Bluetooth communication protocol. The PAN interface may permit one electronic device 1000 to connect wirelessly to another electronic device 1000 via a peer-to-peer connection. - The network interfaces 1004 may also include a local area network (LAN) interface. The LAN interface may be, for example, an interface to a wireless LAN, such as a Wi-Fi network. The range of the LAN interface may generally exceed the range available via the PAN interface. Typically, a connection between two electronic devices via the LAN interface may involve communication through a network router or other intermediary device. - Additionally, the
network interfaces 1004 may include the capability to connect to a wide area network (WAN) via a WAN interface. The WAN interface may permit a connection to a cellular mobile communications network or the Internet. The WAN interface may include communications circuitry, such as an antenna coupled to a radio circuit having a transceiver for transmitting and receiving radio signals via the antenna. The radio circuit may be configured to operate in a mobile communications network, including but not limited to Global System for Mobile Communications (GSM), code division multiple access (CDMA), wideband CDMA (WCDMA), and the like. - The
electronic device 1000 may also include a near field communication (NFC) interface. The NFC interface may allow for extremely close range communication at relatively low data rates (e.g., 424 kb/s). NFC communication may take place via magnetic field induction, allowing the NFC interface to communicate with other NFC interfaces located on other electronic devices 1000 or to retrieve information from tags having radio frequency identification (RFID) circuitry. The NFC interface may enable initiation and/or facilitation of data transfer from one electronic device 1000 to another electronic device 1000 within an extremely close range (e.g. 4 centimeters). -
Memory 1006 associated with the device 1000 may be, for example, one or more buffers, a flash memory, or volatile memory, such as random access memory (RAM). The electronic device 1000 may also include non-volatile storage. The non-volatile storage may represent any suitable storage medium, such as a hard disk drive or non-volatile memory, such as flash memory. In one embodiment, software instructions associated with the speech recognition module 1014 are stored in non-volatile memory and are executable by the processing device 1008. In a further embodiment, the speech recognition module 1014 is configured to receive at least a portion of a user's utterance, analyze the portion of the utterance, and identify a word corresponding to the portion of the utterance. - The
processing device 1008 can be, for example, a central processing unit (CPU), a microcontroller, or a microprocessing device, including a “general purpose” microprocessing device or a special purpose microprocessing device. The processing device 1008 executes code stored in memory 1006 in order to carry out operation/instructions of the electronic device 1000. The processing device 1008 may provide the processing capability to execute an operating system, run various applications, and provide processing for one or more of the techniques described herein. - The
display 1010 displays information to the user such as an operating state, time, telephone numbers, various menus, application icons, pull-down menus, and the like. The display 1010 may be used to present various images, text, graphics, or videos to the user, such as photographs, mobile television content, Internet webpages, and mobile application interfaces. In a preferred embodiment, the display 1010 is used to present illustrations and any revealed words from the hidden narrative in accordance with the present invention. The display 1010 may be any type of suitable display, such as a liquid-crystal display (LCD), a plasma display, a light-emitting diode (LED) display, or the like. - The
electronic device 1000 may include audio input and output structures 1012, such as a microphone for receiving audio signals, e.g. utterances, from the user, and/or a speaker for outputting audio data, such as audio alerts, songs, ringtones, video tracks, audio playback of the revealed text (for example, for learning pronunciation in foreign language learning embodiments of the present invention), voice data received by the electronic device 1000 over a cellular network, and the like. The electronic device 1000 may also include an audio port for connection to peripheral audio input and output structures, such as a headset, or peripheral speakers or microphones. - Referring to
FIG. 11, a block diagram is presented of a data processing system 1100 that may also be implemented as an electronic device or other computing device, in accordance with one embodiment of the present invention. The data processing system 1100 may be a symmetric multiprocessor (SMP) system including a plurality of processors 1102 and 1104 connected to system bus 1106. Alternatively, a single processor system may be employed. Also connected to system bus 1106 is memory controller/cache 1108, which provides an interface to local memory 1110. An I/O bus bridge 1138 is connected to system bus 1106 and provides an interface to I/O bus 1112. The memory controller/cache 1108 and I/O bus bridge 1138 may be integrated as depicted. The processor 1102 or 1104, in conjunction with memory controller 1108, controls what data is stored in memory 1110. The processor 1102 and/or 1104 and memory controller 1108 can serve as a data counter for counting the rate of data flow to the memory 1110 or from the memory 1110 and can also count the total volume of data accessed to or from the memory 1110. The processor 1102 or 1104 can also work in conjunction with any other memory device or storage location. - Peripheral component interconnect (PCI)
bus bridge 1114 connected to I/O bus 1112 provides an interface to PCI local bus 1116. A number of modems 1118, or wireless cards, may be connected to PCI bus 1116. Typical PCI bus implementations will support four PCI expansion slots or add-in connectors. PCI includes, but is not necessarily limited to, PCI-X and PCI Express components. Communications links to a network of computers, such as the Internet, may be provided through the modem 1118 and network adapter 1120 connected to PCI local bus 1116 through add-in boards. - Additional
PCI bus bridges 1122 and 1124 provide interfaces for additional PCI buses 1126 and 1128, from which additional modems or network adapters may be supported. In this manner, the data processing system 1100 allows connections to multiple networks of computers. A graphics adapter 1130 and hard disk 1132 may also be connected to I/O bus 1112 as depicted, either directly or indirectly. - Those of ordinary skill in the art will appreciate that the hardware depicted in
FIG. 11 may vary. For example, other peripheral devices, such as optical disk drives and the like, also may be used in addition to or in place of the hardware depicted. The depicted example is not meant to imply architectural limitations with respect to the present invention. - The processes explained in detail above can be embodied in a computer program. Computer programs (also called computer control logic) are stored in memory such as
main memory 1110, removable storage drive 1134, removable media 1136, hard disk 1132, and signals. Such computer programs, when executed, enable the computer system to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, cause the processor 1102 and/or 1104 to perform one or more of the features described herein. - In this document, the terms “computer program medium,” “computer usable medium,” and “computer readable medium” are used to generally refer to media such as
main memory 1110, removable storage drive 1134, removable media 1136, hard disk 1132, and signals. These computer program products are means for providing software to the computer system. The computer readable medium allows the computer system to read data, instructions, messages or message packets, and other computer readable information from the computer readable medium. The computer readable medium, for example, may include non-volatile memory, such as floppy disks, ROM, flash memory, disk drive memory, CD-ROM, and other permanent storage. It is useful, for example, for transporting information, such as data and computer/programming instructions, between computer systems. Furthermore, the computer readable medium may comprise computer readable information in a transitory state medium such as a network link and/or a network interface, including a wired or wireless network, that allows a computer to read such computer readable information. - Referring to
FIGS. 12-17, another exemplary method of presenting an interactive story is illustrated in a series of screenshots. The exemplary method can be considered an interactive method of learning a medical procedure in accordance with an embodiment of the present invention. FIG. 12 shows a user interface displaying an illustration 1200 without a corresponding narrative. The illustration 1200 depicts a first step in a cardiopulmonary resuscitation (CPR) first aid medical procedure. It is noted that dotted line text appears below the illustration 1200, representing a hidden narrative that is not yet revealed to the user via the display. FIG. 13 illustrates a user's utterance 1300 of the words “tilt head,” resulting in a determination that the user's utterance 1300 corresponds to words in the hidden text. As a result, the words “tilt head” are displayed on the display, while the unspoken words “lift chin, and check breathing” continue to remain hidden. -
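The partial reveal of FIG. 13 (speaking “tilt head” displays just those words while the rest stays hidden) might be modeled as in the sketch below; the token structure and names are assumptions.

```python
# Sketch of the multi-word partial reveal: each spoken word is matched
# independently against the hidden instruction, so "tilt head" is shown
# while the remaining words stay masked. Illustrative only.

def reveal_spoken(utterance, hidden_tokens, revealed):
    spoken = {w.strip(",.").lower() for w in utterance.split()}
    for i, tok in enumerate(hidden_tokens):
        if tok.strip(",.").lower() in spoken:
            revealed.add(i)
    return " ".join(tok if i in revealed else "_" * len(tok)
                    for i, tok in enumerate(hidden_tokens))

step1 = "tilt head, lift chin, and check breathing".split()
print(reveal_spoken("tilt head", step1, set()))
# tilt head, ____ _____ ___ _____ _________
```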
FIG. 14 shows a second illustration 1400 without a corresponding narrative. The second illustration 1400 depicts a second step in the CPR procedure. It is noted that dotted line text appears below the second illustration 1400, representing a hidden narrative that is not yet revealed to the user via the display. FIG. 15 illustrates the user's utterance 1500 of the word “breathe,” resulting in a determination that the user's utterance 1500 corresponds to a word in the hidden text. In this embodiment, the root word of “breaths” is “breathe,” therefore the hidden text BREATHS is displayed on the display, while the unspoken words “give two” remain hidden. In one embodiment, the second illustration 1400 is formed as an animated illustration where the person administering CPR is animated to illustrate giving two breaths so that the user is able to infer such information from the illustration 1400. -
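The root-word behavior above (“breathe” revealing BREATHS) could be approximated with a naive shared-prefix check; a production system would more likely use a stemmer or lemmatizer. The function below is an illustrative assumption, not the disclosed method.

```python
# Naive root-word comparison: treat two words as sharing a root when their
# leading letters agree. Only a sketch; a real stemmer would be smarter.

def same_root(spoken, hidden, stem_len=4):
    a, b = spoken.lower(), hidden.lower()
    return len(a) >= stem_len and len(b) >= stem_len and a[:stem_len] == b[:stem_len]

print(same_root("breathe", "breaths"))           # True  ("brea" == "brea")
print(same_root("sit", "sitting", stem_len=3))   # True
print(same_root("giraffe", "breaths"))           # False
```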
FIG. 16 shows a third illustration 1600 without a corresponding narrative. The third illustration 1600 depicts a third step in the CPR procedure. It is noted that dotted line text appears below the third illustration 1600, representing a hidden narrative that is not yet revealed to the user via the display. FIG. 17 illustrates the user's utterance 1700 of the word “chest,” resulting in a determination that the user's utterance 1700 corresponds to words in the hidden text. As a result, the word “chest” is displayed to the user, while the words “firmly push down on center of” and “15 times” remain hidden until the user is able to input/utter each of the words in the hidden narrative. -
FIG. 18 shows an illustration 1800 without a corresponding narrative. The illustration 1800 is a textual visual representation identifying a legal concept, namely a contract. It is noted that dotted line text appears below the illustration 1800, representing a hidden narrative that is not yet revealed to the user via the display. FIG. 19 illustrates the user's utterance 1900 of the word “offer,” resulting in a determination that the user's utterance 1900 corresponds to words in the hidden text. As a result, the word “offer” is displayed to the user, while the words “acceptance,” “consideration,” and “mutuality” remain hidden until the user is able to input/utter each of the words in the hidden narrative. In other embodiments, the illustration 1800 may be a mathematical formula, such as “10+10,” with the hidden text being the answer to the mathematical formula, namely “20.” In yet other embodiments, the illustration 1800 is a chemical formula, such as “H2O,” with the hidden text being a description of the chemical formula, namely “two atoms of hydrogen and one atom of oxygen.” -
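The flash-card variants above (a formula as the illustration, its answer as the hidden narrative) suggest a simple prompt-to-answer mapping; the structure below is an assumption for illustration only.

```python
# Sketch of the flash-card embodiment: the "illustration" is a prompt and
# the hidden narrative is the expected answer. Data and names are assumptions.

flash_cards = {
    "10+10": "20",
    "H2O": "two atoms of hydrogen and one atom of oxygen",
}

def check_answer(card, user_input):
    """Reveal (return True) when the input matches the hidden answer."""
    return user_input.strip().lower() == flash_cards[card].lower()

print(check_answer("10+10", "20"))  # True
```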
FIG. 20 shows another exemplary illustration 2000 without a corresponding narrative. The illustration 2000 is a textual visual representation of foreign language sentences. It is noted that dotted line text appears below the illustration 2000, representing a hidden narrative that is not yet revealed to the user via the display. FIG. 21 illustrates the user's utterance 2100 of the words “what is your name?” resulting in a determination that the user's utterance 2100 corresponds to words in the hidden text. As a result, the sentence “what is your name,” which is an English translation of one of the foreign language sentences, is displayed to the user, while the sentence “nice to meet you” remains hidden until the user is able to input/utter each of the words in the hidden narrative. In one embodiment, in response to the hidden text being revealed, the software application also causes a computerized audio playback of the foreign language sentence so that the user may hear the proper pronunciation and intonations. - A novel and efficient method of presenting an interactive story has been disclosed that requires the user to participate in the storytelling process. Embodiments of the invention provide for displaying each illustration to the user with a hidden narrative that the user must reveal by making observations about the illustration and making inferences from the illustration. In some embodiments, a speech recognition module is used to determine whether audible utterances by the user correspond to one or more of the words in the hidden narrative. The inventive process develops attention skills by requiring users to pay very close attention to the details in the illustration and allows users to reveal the story through detailed observations of the illustration.
The inventive process also develops writing, grammar, sentence-building, and storytelling skills while interactively engaging users in the story, and provides an interactive method for learning foreign languages, specialized terminology, and specific subject matter.
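The reveal logic described above can be sketched informally as follows. This is a minimal, hypothetical illustration only, not the patented implementation: all names are invented for this sketch, and an actual embodiment would receive words from a speech recognition module rather than as strings.

```python
import string

class HiddenNarrative:
    """Tracks which words of a hidden narrative have been revealed.

    Each recognized word is compared against the narrative; matching
    words become visible, while all other words remain masked (the
    dotted-line text) until the user utters them.
    """

    def __init__(self, text: str):
        self.tokens = text.split()              # tokens as displayed, punctuation attached
        self.revealed = [False] * len(self.tokens)

    @staticmethod
    def _normalize(token: str) -> str:
        # Compare words case-insensitively, ignoring attached punctuation.
        return token.strip(string.punctuation).lower()

    def reveal(self, spoken_word: str) -> bool:
        """Reveal every hidden token matching the spoken word.

        Returns True if at least one token was newly revealed.
        """
        target = self._normalize(spoken_word)
        hit = False
        for i, token in enumerate(self.tokens):
            if not self.revealed[i] and self._normalize(token) == target:
                self.revealed[i] = True
                hit = True
        return hit

    def render(self) -> str:
        """Show revealed tokens; mask the rest with dashes."""
        return " ".join(
            token if shown else "-" * len(self._normalize(token))
            for token, shown in zip(self.tokens, self.revealed)
        )

    def complete(self) -> bool:
        return all(self.revealed)

# Example: the contract narrative of FIGS. 18-19.
narrative = HiddenNarrative("offer, acceptance, consideration, mutuality")
narrative.reveal("Offer")
print(narrative.render())   # only "offer," is visible; the rest stays masked
```

A fuller embodiment would also handle the non-matching case (e.g., prompting the user to look more closely at the illustration) and trigger audio playback when a foreign-language sentence is revealed.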
Claims (23)
1. A method of associating a multi-word narrative with an illustration, the method comprising:
displaying an illustration without a corresponding narrative on a display communicatively coupled to a memory;
associating, by a processor, a plurality of words in a hidden narrative with the illustration;
receiving, by a microphone, an analog audio signal from a user representing a word spoken by the user;
using a speech recognition module to convert the analog audio signal to digital data representative of the word spoken by the user;
determining, by the processor, whether the digital data representative of the word spoken by the user corresponds to one of the plurality of words in the hidden narrative associated with the illustration, the hidden narrative including the plurality of words stored in the memory before receiving the analog audio signal; and
displaying on the display the word in the hidden narrative in response to determining that the digital data representative of the word spoken by the user corresponds to the word in the hidden narrative, while not displaying any of the plurality of words in the hidden narrative that do not correspond to the word spoken by the user or a word previously spoken by the user.
2. (canceled)
3. The method in accordance with claim 1, wherein:
the hidden narrative describes only viewable features of the illustration.
4. The method in accordance with claim 1, wherein:
the illustration is formed as a single still image.
5. The method in accordance with claim 1, wherein:
the illustration is formed as a sequence of still images.
6. The method in accordance with claim 1, wherein:
the illustration is formed as a video.
7. The method in accordance with claim 1, wherein:
displaying the illustration further comprises displaying the illustration on the display for a predetermined time period.
8. A method of learning, the method comprising:
presenting an illustration on a display without a corresponding narrative, the display communicatively coupled to a memory;
receiving, by a microphone, an analog signal from a user, the analog signal representing a word spoken by the user;
using a speech recognition module to convert the analog audio signal to digital data representative of the word spoken by the user;
determining, by a processor, a hidden narrative having a plurality of words associated with the illustration by associating at least one word from the hidden narrative with the digital data such that the at least one word becomes viewable on the display in response to receiving the digital data associated with the at least one word, while any of the plurality of words in the hidden narrative that do not correspond to the digital data does not become viewable; and
wherein the hidden narrative is stored in the memory before the receiving of the digital data.
9. (canceled)
10. The method in accordance with claim 8, wherein:
the hidden narrative describes only viewable features of the illustration.
11. The method in accordance with claim 8, wherein:
the illustration is formed as a single still image.
12. The method in accordance with claim 8, wherein:
the illustration is formed as a sequence of still images.
13. The method in accordance with claim 8, wherein:
the illustration is formed as a video.
14. A method of storytelling, the method comprising:
providing, using a display of an electronic device, an illustration configured to be viewable by a user without a corresponding narrative;
creating, by a processor of the electronic device, a hidden narrative having a plurality of words describing only viewable features of the illustration, where each of a plurality of words in the hidden narrative is configured to become viewable in response to the user speaking said word, while any of the plurality of words in the hidden narrative that do not correspond to at least one word spoken by the user does not become viewable;
wherein the hidden narrative is stored in a memory before the user's input of said word;
receiving, by a microphone of the electronic device, an audio signal representing a word spoken by the user;
using a speech recognition module of the electronic device to convert the audio signal to digital data representative of the word spoken by the user;
causing, by the processor, a word in the hidden narrative corresponding to the digital data to become viewable; and
maintaining the remaining words of the hidden narrative in a hidden state.
15. (canceled)
16. The method in accordance with claim 14, wherein:
the illustration is formed as a single still image.
17. The method in accordance with claim 14, wherein:
the illustration is formed as a sequence of still images.
18. The method in accordance with claim 14, wherein:
the illustration is formed as a video.
19. A system for presenting an interactive story, the system comprising:
a display communicatively coupled to a memory;
a microphone configured to receive an audio signal representative of at least a portion of an utterance;
a speech recognition module configured to receive the audio signal, convert the audio signal to digital data, analyze the digital data, and identify a word corresponding to the digital data;
a processing device communicatively coupled to the display and the speech recognition module, the processing device configured to execute computer instructions, the computer instructions including instructions for:
displaying an illustration on the display without a corresponding narrative; and
displaying on the display at least one word of a hidden narrative having a plurality of words associated with the illustration in response to the speech recognition module identifying the word as corresponding to the digital data, while any of the plurality of words in the hidden narrative that do not correspond to the digital data is not displayed; and
wherein the hidden narrative is stored in the memory before receiving the audio signal.
20. The system in accordance with claim 19, wherein:
the hidden narrative describes only viewable features of the illustration.
21. The system in accordance with claim 19, wherein:
the processing device is configured to cause the illustration to be displayed for only a predetermined amount of time.
22. The system in accordance with claim 19, wherein:
the hidden narrative includes a punctuation mark, and wherein the processing device is configured to automatically display the punctuation mark when a word preceding or following the punctuation mark in the hidden narrative is displayed.
23. The system in accordance with claim 19, wherein:
the processing device is configured to present a plurality of difficulty levels associated with the hidden narrative to the user, wherein the processing device causes the hidden narrative to be displayed for a first predetermined amount of time when a first difficulty level is selected and causes the hidden narrative to be displayed for a second predetermined amount of time when a second difficulty level is selected.
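The punctuation behavior recited in claim 22 could be sketched as follows. This is a hypothetical illustration under assumed data structures (not the claimed implementation): a punctuation token is shown automatically whenever the word immediately preceding or following it has been revealed.

```python
import string

def render_with_punctuation(tokens, revealed):
    """Render a hidden narrative, auto-revealing punctuation marks.

    tokens: narrative tokens in order (words and standalone punctuation).
    revealed: set of indices of tokens the user has already revealed.
    A punctuation token becomes visible when the word immediately
    before or after it is visible; hidden tokens are masked with dashes.
    """
    def is_punct(tok):
        return all(ch in string.punctuation for ch in tok)

    shown = set(revealed)
    for i, tok in enumerate(tokens):
        if is_punct(tok):
            word_before_shown = i - 1 in shown and not is_punct(tokens[i - 1])
            word_after_shown = i + 1 < len(tokens) and i + 1 in shown
            if word_before_shown or word_after_shown:
                shown.add(i)

    return " ".join(
        tok if i in shown else "-" * len(tok) for i, tok in enumerate(tokens)
    )

# The comma follows a revealed word, so it is displayed automatically;
# the exclamation mark stays hidden with the unrevealed word.
print(render_with_punctuation(["Hello", ",", "world", "!"], {0}))
```

The timed display of claim 23 would then amount to selecting a display duration keyed to the chosen difficulty level before masking the narrative again.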
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/588,540 US20170053541A1 (en) | 2015-01-02 | 2015-01-02 | Interactive educational system and method |
| US15/981,530 US11328616B2 (en) | 2015-01-02 | 2018-05-16 | Interactive educational system and method |
| US17/714,397 US11854431B2 (en) | 2015-01-02 | 2022-04-06 | Interactive education system and method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/588,540 US20170053541A1 (en) | 2015-01-02 | 2015-01-02 | Interactive educational system and method |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/981,530 Continuation US11328616B2 (en) | 2015-01-02 | 2018-05-16 | Interactive educational system and method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20170053541A1 true US20170053541A1 (en) | 2017-02-23 |
Family
ID=58158329
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/588,540 Abandoned US20170053541A1 (en) | 2015-01-02 | 2015-01-02 | Interactive educational system and method |
| US15/981,530 Active 2036-06-30 US11328616B2 (en) | 2015-01-02 | 2018-05-16 | Interactive educational system and method |
| US17/714,397 Active US11854431B2 (en) | 2015-01-02 | 2022-04-06 | Interactive education system and method |
Family Applications After (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/981,530 Active 2036-06-30 US11328616B2 (en) | 2015-01-02 | 2018-05-16 | Interactive educational system and method |
| US17/714,397 Active US11854431B2 (en) | 2015-01-02 | 2022-04-06 | Interactive education system and method |
Country Status (1)
| Country | Link |
|---|---|
| US (3) | US20170053541A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12444062B2 (en) * | 2021-01-29 | 2025-10-14 | International Business Machines Corporation | Visual question generation with answer-awareness and region-reference |
| US12501263B2 (en) | 2022-04-29 | 2025-12-16 | T-Mobile Usa, Inc. | Authenticate a user before performing a sensitive operation associated with a UE in communication with a wireless telecommunication network |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5282651A (en) * | 1991-04-15 | 1994-02-01 | Frank Alonso | Trading cards and method of concealing and revealing information thereon |
| US5940121A (en) | 1997-02-20 | 1999-08-17 | Eastman Kodak Company | Hybrid camera system with electronic album control |
| US7001183B2 (en) * | 2002-06-10 | 2006-02-21 | Peter Dowrick | Diagnostically and audibly responsive computer learning memory game and system provided therefor |
| US7672845B2 (en) * | 2004-06-22 | 2010-03-02 | International Business Machines Corporation | Method and system for keyword detection using voice-recognition |
| US20130149689A1 (en) * | 2011-12-10 | 2013-06-13 | Lee M. DeGross | Pop up educational content and/or dictionary entry for images and complementing trivia |
| US20130236878A1 (en) * | 2012-03-12 | 2013-09-12 | Alexey Saltanov | Method for Testing and Developing Intelligence |
| US20140017652A1 (en) * | 2012-07-11 | 2014-01-16 | Maurice Laurence Henri Perdreau | Memorizing mechanism for enhancing and improving a presentation by a speaker, professional performer, or student |
- 2015-01-02: US application 14/588,540 published as US20170053541A1 (Abandoned)
- 2018-05-16: US application 15/981,530 granted as US11328616B2 (Active)
- 2022-04-06: US application 17/714,397 granted as US11854431B2 (Active)
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160225187A1 (en) * | 2014-11-18 | 2016-08-04 | Hallmark Cards, Incorporated | Immersive story creation |
| US11250630B2 (en) * | 2014-11-18 | 2022-02-15 | Hallmark Cards, Incorporated | Immersive story creation |
| US20160275967A1 (en) * | 2015-03-18 | 2016-09-22 | Kabushiki Kaisha Toshiba | Presentation support apparatus and method |
| US10423700B2 (en) | 2016-03-16 | 2019-09-24 | Kabushiki Kaisha Toshiba | Display assist apparatus, method, and program |
| US11210968B2 (en) * | 2018-09-18 | 2021-12-28 | International Business Machines Corporation | Behavior-based interactive educational sessions |
| CN109559578A (en) * | 2019-01-11 | 2019-04-02 | 张翩 | A kind of English study scene video production method and learning system and method |
| USD917516S1 (en) | 2019-02-01 | 2021-04-27 | Apple Inc. | Electronic device with animated graphical user interface |
| USD900871S1 (en) * | 2019-02-04 | 2020-11-03 | Apple Inc. | Electronic device with animated graphical user interface |
| USD1076977S1 (en) | 2019-02-04 | 2025-05-27 | Apple Inc. | Display screen or portion thereof with animated graphical user interface |
| US11394799B2 (en) * | 2020-05-07 | 2022-07-19 | Freeman Augustus Jackson | Methods, systems, apparatuses, and devices for facilitating for generation of an interactive story based on non-interactive data |
| US20220068283A1 (en) * | 2020-09-01 | 2022-03-03 | Malihe Eshghavi | Systems, methods, and apparatus for language acquisition using socio-neuorocognitive techniques |
| US11605390B2 (en) * | 2020-09-01 | 2023-03-14 | Malihe Eshghavi | Systems, methods, and apparatus for language acquisition using socio-neuorocognitive techniques |
Also Published As
| Publication number | Publication date |
|---|---|
| US20180261116A1 (en) | 2018-09-13 |
| US11328616B2 (en) | 2022-05-10 |
| US11854431B2 (en) | 2023-12-26 |
| US20220230552A1 (en) | 2022-07-21 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |