AU667347B2 - Real-time audio message system for aircraft passengers - Google Patents
Real-time audio message system for aircraft passengers
- Publication number
- AU667347B2 AU36718/93A AU3671893A
- Authority
- AU
- Australia
- Prior art keywords
- flight
- audio
- information
- data
- words
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R27/00—Public address systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Machine Translation (AREA)
Description
667347
AUSTRALIA
Patents Act 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT
Name of Applicant(s): ASINC, INC.
Actual Inventor(s): RICHARD J. SALTER, JR.; MICHAEL C. SANDERS
Address for Service: CULLEN CO., Patent Trade Mark Attorneys, 240 Queen Street, Brisbane, Qld. 4000, Australia.
Invention Title: REAL-TIME AUDIO MESSAGE SYSTEM FOR AIRCRAFT PASSENGERS
The following statement is a full description of this invention, including the best method of performing it known to us:
REAL-TIME AUDIO MESSAGE SYSTEM FOR AIRCRAFT PASSENGERS

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates generally to improvements in aircraft passenger information systems and, more particularly, pertains to a new audio information system for the passengers of an aircraft. Still more specifically, the invention provides means for generating informational messages which are initially created on a ground-based computer system and transmitted up to an aircraft in flight, to be converted from digital computer data to audio words and sentences and broadcast in multiple languages via the cabin audio system to the passengers.
2. Description of the Prior Art

A wide variety of information systems exist for providing audio messages to a listening audience. For entirely automatic systems, that is, systems which do not require an operator, audio messages have traditionally been prerecorded prior to broadcast. Such information systems are incapable of handling real-time information to produce audio messages reciting the real-time information. To remedy this, various prior art audio information systems have been developed which utilize a voice synthesizer device to convert real-time digital information into spoken words or phrases. Unfortunately, the resulting audio messages are often metallic- or artificial-sounding.
A particular application for an audio information system for automatically providing spoken messages is in the aircraft and air transportation arena. General information systems relating to aircraft abound in the prior art. Such general systems are utilized for a variety of purposes, such as tracking and analyzing information relating to air traffic control, displaying information on flights to provide for advanced planning and scheduling, and monitoring ground traffic at an airport. Other than U.S. Patent No. 4,975,696 (Salter, Jr. et al.) and a co-pending U.S. patent application (Pitts), such systems are typically used for the administration of aircraft traffic.
In U.S. Patent No. 4,975,696, an electronics package connecting the airborne electronics of a passenger aircraft to the passenger visual display system of the aircraft was disclosed. The electronics package provides passengers with a variety of real-time video displays of flight information, such as ground speed, outside air temperature, or altitude. Other information displayed by the electronics package includes a map of the area over which the aircraft flies, as well as destination information, such as a chart of the destination terminal including aircraft gates, baggage claim areas, and connecting flight information listings.
The electronics system of the above-referenced Pitts application displays flight information with the flight information automatically tailored to the phases of flight of the aircraft.
Although the electronics systems of U.S. Patent No. 4,975,696 and the above-referenced Pitts application provide much useful information in video displays, the systems do not provide the information over audio channels.
Furthermore, as noted above, existing systems which do provide information over audio channels in other applications have not successfully provided natural-sounding, automatically-generated spoken messages incorporating real-time information.
OBJECTIVES AND SUMMARY OF THE INVENTION

Accordingly, it is an object of the present invention to provide a flight information system wherein the system provides real-time flight information such as speed, altitude, and passing points of interest; destination airport terminal information such as connecting flights and gates; and other useful information, over an audio system to passengers in an aircraft.
It is another object of the present invention to provide an information system which automatically generates spoken messages in a natural-sounding voice.
In accordance with these objectives, the invention provides an information system for generating spoken audio messages incorporating real-time, i.e. "variable," input data by assembling digitized spoken words corresponding to the input data into complete messages or sentences. Each sentence to be assembled includes a framework of fixed digitized words and phrases, into which variable digitized words are inserted. The particular digitized variable words which correspond to the specific input data are retrieved from digital computer memory. All anticipated input parameters are stored as digitized spoken words such that, during operation of the system, appropriate spoken words corresponding to the input data can be retrieved and inserted into the framework of the sentence. In this manner, a complete natural-sounding spoken message which conveys the input data is automatically generated for broadcast.
More specifically, the system includes a memory means for storing digitized spoken words, a receiver for receiving input data, and a data processor. The data processor means includes a retrieval means for retrieving selected digitized words corresponding to the input data and a message assembly means for assembling the retrieved words into audio messages.
Some of the digitized spoken words are stored in a variety of different inflection forms. The data processor means includes means for selecting digitized forms of the words having the proper inflection for inclusion in the spoken sentence, such that a natural-sounding spoken sentence is achieved.
The various digitized words and phrases may be recorded in a variety of languages, such that a spoken message may be generated in any of a variety of different languages.
In accordance with a preferred embodiment, the audio information system is mounted aboard a passenger aircraft for automatically generating informative messages for broadcast to the passengers of the aircraft. The system includes a receiver for receiving flight information from the on-board navigation systems of the aircraft and from ground-based transmitters. The input flight information, such as the location of the aircraft or the travel time to destination, is automatically communicated to the passengers in the form of natural-sounding spoken sentences. The system may also generate audio messages identifying points of interest in the vicinity of the aircraft.
In one embodiment, the system generates spoken messages describing destination terminal information received from a ground-based transmitter, including connecting gates and baggage claim areas. The system assembles audio messages incorporating the destination terminal information received from the ground and broadcasts the assembled messages to the passengers. The system is alternatively configured to simultaneously provide the destination terminal information in both video and audio form.
In another embodiment, the invention provides audio messages to aircraft passengers wherein the messages are tailored to the phases of flight of the aircraft. In accordance with this embodiment, the system includes data processor means utilizing received flight information for determining a current phase of the flight plan and for inputting information corresponding to the current phase of the flight plan to the audio system for broadcast to the passengers. In this manner, a wide variety of informative spoken messages may be automatically provided to the passengers, with the content of the messages tailored to the various phases of flight of the aircraft. For example, the system may automatically generate one set of spoken messages during the takeoff phase of the flight of the aircraft, and a separate set of messages during the en route cruise phase of the aircraft. As with the previously-described embodiments, the messages are automatically generated by the system in response to input flight information which is received by the system.
BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and many of the attendant advantages of this invention will become apparent as the invention becomes better understood by reference to the following detailed description when considered in conjunction with the accompanying drawings, in which like reference numerals designate like parts throughout the figures thereof, and wherein:

Figure 1 is a flow chart representing a method in accordance with the invention for assembling sentences from digitized words;

Figure 2 is a flow chart representing a method for selecting words of proper inflection for use in assembling sentences having numbers spoken in a natural-sounding voice;

Figure 3 is a block diagram, somewhat in pictorial form, of an aircraft passenger information system in accordance with a preferred embodiment of the present invention;

Figure 4 is a block diagram of the data processor of Figure 3;

Figure 5 is a representation of a screen that may be displayed by the system of the present invention while corresponding audio messages are broadcast;

Figure 6 is another representation of a screen that may be displayed by the system of the present invention while corresponding audio messages are broadcast;

Figure 7 provides a flow chart of an alternative embodiment of the invention wherein audio messages conveying flight information such as points of interest are generated;

Figure 8 is a representation of a video display screen that may be displayed by the system of Figure 7 while corresponding audio messages are broadcast; and

Figure 9 is a representation of another video display screen that may be displayed by the system of Figure 7 while corresponding audio messages are broadcast.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description is provided to enable any person skilled in the art to make and use the invention and sets forth the best modes contemplated by the inventor of carrying out his invention. Various modifications, however, will remain readily apparent to those skilled in the art, since the generic principles of the present invention have been defined herein specifically to provide an audio information system for receiving real-time data and for generating natural-sounding spoken messages reciting the real-time data.
Referring to Figure 1, a spoken message assembler system 200 is illustrated. Message assembler 200 receives input information in the form of digital alphanumeric data and generates natural-sounding spoken sentences which recite the received data for output to a listening audience through a speaker system, perhaps a public address (PA) system. To this end, message assembler 200 includes hundreds or thousands of digitized words and phrases covering all anticipated words which may be required to create sentences reciting the input data. The words and phrases are prerecorded from a human voice in a digitized format and stored in computer ROM. Message assembler 200 assembles sentences by retrieving appropriate digitized words and phrases and assembling the words and phrases into proper syntactic sentences. Preferably, some of the words and phrases are stored in a number of digitized forms, each having a different inflection, such that the assembled sentence has proper inflection in accordance with natural speech.
In this manner, input information in the form of digital data can be communicated to a listening audience in the form of natural-sounding spoken sentences. The input data is received and the spoken sentences are generated and broadcast entirely automatically, without the need for a human operator or human speaker.
In a preferred embodiment, discussed in detail below, the spoken message assembler is employed within an audio/video information system for use in the passenger compartment of an aircraft. In that embodiment, the message assembler receives flight information such as ground speed, outside air temperature, destination terminal, connecting gate, or baggage claim area information. The message assembler then constructs natural-sounding sentences for broadcasting the flight information to the passengers in the aircraft. The spoken messages may be broadcast over a public address system of the aircraft for all passengers to hear, or may be broadcast over individual passenger headphone sets. Also, as will be described below, the spoken message assembler may be configured to generate sentences in a variety of different languages for either sequential broadcast or simultaneous broadcast over multiple channels.
The spoken message assembler of the system thus provides a wide range of useful, informative messages to the passengers, while freeing the flight crew from having to provide the information to the passengers. As will be described below, the system may additionally include a video display system for simultaneously displaying the flight information on a video screen or the like.
Although advantageously implemented within an information system for passenger aircraft, the message assembler of the invention is ideally suited for any application benefitting from the automatic communication of input data to a listening audience.
Figure 1 provides a flow chart illustrating the operation of message assembler 200. Initially, at 202, the message assembler receives an input sentence over a data line 201 in a digital alphanumeric format suitable for input and manipulation by a computer or similar data processing device. The data is received within a sentence format having specific data fields. For example, one data field of the input sentence may provide the time of day. Within that data field, an alphanumeric sequence is received which provides the time of day, e.g. "12:32PM." A separate data field may provide a destination city for an aircraft flight, e.g. "Los Angeles." Message assembler 200 may be preprogrammed to receive any of a number of suitable data formats.
Any format is suitable so long as the variable data is received within preselected fields such that the message assembler can determine the type of data contained within the received message.
For each type of data, message assembler 200 stores all possible instances of the data type in a digitized spoken form in a mass storage device 211. For the example of destination cities, the message assembler stores the names of all cities that the airline flies into or out of in digitized spoken form. Thus, the message assembler stores the words "New York," "Los Angeles," "Chicago," etc. in ROM.
For data types requiring numbers, such as the time of day, message assembler 200 stores all necessary component numbers in digitized form. To recite the time "12:10," message assembler 200 retrieves and combines the words "twelve" and "ten." To recite the time "1:57," message assembler 200 retrieves and combines the words "one," "fifty," and "seven." To handle any input time of day, message assembler 200 need only store the component numbers 0-9 and 10, 20, ..., 50 in digitized form. The numbers 1-10 are assembled either as "one" or "oh-one," etc., to allow the handling of both hour and minute values between 1 and 10. In this manner, the message assembler stores the various possible instances of the various possible data types that may be received within an incoming message. The specific data fields that are employed and the specific instances of the data stored for each data field are configurable parameters of the system. Although a digitized data base can be constructed to provide for almost any type of information, the system is preferably employed where a limited number of types of information must be conveyed to a listening audience, especially where each type of information has a fairly limited range of possible instances. In such case, the total number of digitized spoken words that must be stored in ROM is fairly limited. A system requiring a greater number of digitized words may be implemented using a computer with a greater amount of ROM.
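The component-number scheme for clock times can be sketched as follows. This is a minimal illustration and not the patented implementation; it assumes the stored inventory also includes the "teen" words (needed for a time such as "12:10"), and the function names are invented for the sketch.

```python
# Stored component-number vocabulary (each entry stands in for a digitized
# recording of the spoken word).
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = {20: "twenty", 30: "thirty", 40: "forty", 50: "fifty"}

def two_digit_words(n, leading_oh=False):
    """Component words for a value 0-59; minutes 1-9 get an 'oh' prefix."""
    if n < 10:
        return (["oh"] if leading_oh else []) + [ONES[n]]
    if n < 20:
        return [TEENS[n - 10]]
    tens, ones = divmod(n, 10)
    return [TENS[tens * 10]] + ([ONES[ones]] if ones else [])

def time_words(hhmm):
    """Assemble the spoken component words for a clock time such as '1:57'."""
    hh, mm = (int(part) for part in hhmm.split(":"))
    words = two_digit_words(hh)
    if mm:
        words += two_digit_words(mm, leading_oh=True)
    return words

# e.g. time_words("12:10") -> ["twelve", "ten"]
#      time_words("1:57")  -> ["one", "fifty", "seven"]
```

Only the small fixed vocabulary above needs to be recorded; every time of day is then a concatenation of those stored words.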
An exemplary input sentence format received by the system at step 202 is provided in Table I.
Table I

1. (Airline Name)
2. Flight
3. (Flight Number)
4. will depart
5. (City Name)
6. from
7. Terminal
8. (Terminal Name)
9. gate
10. (Gate Number)
11. at
12. (Time)
13. (AM/PM)

The exemplary sentence format of Table I provides the departure gate number and departure time for particular departing flights. Thus, for each flight departing from the destination terminal, the input sentence of Table I provides a framework for communicating the departing flight's airline name and flight number and the departing flight's gate number and departure time, along with the destination city and destination airport terminal.
An input sentence includes a framework of fixed words interlaced with variable words (shown in parentheses in Table I). In the input sentence shown in Table I, the fixed words are "Flight," "will depart," "from," "Terminal," "gate," and "at." The variable data for inclusion within the sentence include the airline name, the flight number, the city name, the terminal name, the gate number, the departure time, and either "AM" or "PM" appended to the departure time. Each unit of the sentence, comprising either a single fixed or variable word or a fixed or variable phrase, is denoted by a position number. For example, the variable "airline name" is identified as position 1. The fixed word "Flight" is identified as position 2. In this manner, each fixed or variable data unit within the input sentence is represented by a unique number.
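The position-numbered framework of Table I lends itself to a simple table-driven assembler. The sketch below uses hypothetical field names and plain strings in place of digitized audio clips; a real system would retrieve and play a recorded word at each position rather than join text.

```python
# Position-numbered sentence framework of Table I: fixed units are literal
# words; variable units name a data field of the incoming message.
# The field names and the message dict are illustrative assumptions.
TABLE_I = [
    ("var", "airline_name"),    # position 1
    ("fix", "Flight"),          # position 2
    ("var", "flight_number"),   # position 3
    ("fix", "will depart"),     # position 4
    ("var", "city_name"),       # position 5
    ("fix", "from"),            # position 6
    ("fix", "Terminal"),        # position 7
    ("var", "terminal_name"),   # position 8
    ("fix", "gate"),            # position 9
    ("var", "gate_number"),     # position 10
    ("fix", "at"),              # position 11
    ("var", "time"),            # position 12
    ("var", "am_pm"),           # position 13
]

def assemble_sentence(sentence_format, message):
    """Walk each position in turn (steps 206-218): fixed units are emitted
    as-is, variable units are looked up in the received message."""
    units = []
    for kind, value in sentence_format:
        units.append(message[value] if kind == "var" else value)
    return " ".join(units)
```

With a message carrying the already-spoken variable words, `assemble_sentence(TABLE_I, msg)` yields the example sentence of the specification: "XYZ Airlines Flight ten fifty-nine will depart Chicago from Terminal One gate twenty-three at twelve forty-seven PM".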
At 206, the system examines the first position within the input sentence, initially position 1. At 208, the system determines whether position 1 corresponds to a fixed word or a variable word. Continuing with the example of Table I, position 1 requires a variable word.
Accordingly, the system proceeds to step 210 to retrieve from the system's data base the digitized variable word which corresponds to the input airline name to be included at position 1.
The data base of variable digitized words is set up to include the names of currently operating airlines, with the names digitized from a recording of the spoken airline name. Thus, the data base may include, for example, "ABC Airlines" or "XYZ Airlines" in digitized form. To retrieve the digitized spoken name of the proper airline, the system examines the received message for an alphanumeric representation of the airline name and then, based on the alphanumeric, retrieves the corresponding digitized spoken name from the system's data base. Once retrieved, the digitized data providing the spoken airline name is immediately broadcast to the passengers. Alternatively, the digitized spoken airline name may be transferred to a temporary memory unit (not shown in Figure 1) of the system for subsequent broadcast. In Figure 1, the broadcast step is identified by reference numeral 212.
At step 214, the system determines whether the final position of the sentence format has been processed.
If not, the system increments a position pointer and returns along flow line 216 to process the next position within the sentence format. Thus, in the example of Table I, the system returns to process position 2. At step 208, the system determines that position 2 requires a fixed digitized word. Hence, the system proceeds to step 218 to retrieve the fixed word designated by the sentence format. In this case, the fixed word is "flight." The system retrieves digitized data representing the spoken word "flight" from the data base and broadcasts the retrieved word.
Again, the system returns along data flow line 216 to process a new position within the sentence format. In the example of Table I, the next position, position 3, calls for a variable word setting forth the flight number.
Accordingly, the system proceeds to step 210, wherein the system retrieves the digitized data setting forth the spoken flight number corresponding to the alphanumeric flight number designation received in the input message. Thus, if the flight number received in the input message is represented by the alphanumeric sequence "1059," the system retrieves digitized data providing the spoken words "ten," "fifty," and "nine." To this end, the system maintains a "number" data base which stores spoken numbers for use with any data type requiring numbers. Exemplary data types such as flight number, gate number, baggage claim area, departure time, etc. thereby share a common data base. Thus, the digitized spoken words "ten," "fifty," and "nine" are retrieved in any circumstance requiring that the number "1059" be spoken, such as if the departing gate number is "1059," the departure time is "10:59," or the baggage claim area is "1059." As will be described below, the numbers are preferably stored in a variety of different styles and inflections to allow natural-sounding numbers to be recited in any circumstance.
Once the digitized words "ten," "fifty," and "nine" are retrieved from memory and broadcast, the system proceeds to the next position, wherein the system retrieves the fixed digitized words "will depart." Execution continues, during which time the system processes each successive position within the sentence format. At each position, the appropriate variable or fixed digitized words are retrieved from the data base memory and immediately broadcast. Execution proceeds at a sufficient speed such that the words are broadcast one after the other in close succession to produce a natural-sounding sentence.
The assembled sentence is thereby "spoken" in the same manner in which a conventional compact disc system broadcasts words or music; that is, the digitized words are "played" in succession. Appropriate pauses may be included between words within the sentence to ensure a natural sentence flow.
Continuing with the example of Table I, the resulting "spoken" sentence might be "XYZ Airlines Flight ten fifty-nine will depart Chicago from Terminal One, gate twenty-three at twelve forty-seven PM." The sentence is broadcast by means described below to the passengers in the aircraft, who thereby hear a natural-sounding sentence as if spoken by a member of the flight crew. By assembling the sentence from digitized words and phrases, rather than by using a voice synthesizer wherein words are created by phonetically "sounding out" individual syllables or words, a more natural-sounding sentence is achieved.
At step 222, the system returns to step 202 to receive and process a new message. The new message may provide the departing flight information for a different airline flight. Typically, an incoming message will provide the departing flight information for many connecting flights, perhaps 10-20 such flights. Thus, the system will reexecute the steps shown in Figure 1 a number of times to process the input data corresponding to each of the connecting flights, to thereby generate sentences reciting all of the connecting gate information.
In an alternative embodiment, the retrieved words are stored in a temporary memory for later broadcast. Such a system might include parallel processing capability such that, while a first sentence is being broadcast from temporary memory, a second sentence is being assembled.
Once all of the information within a particular incoming message is processed to generate one or more spoken sentences, the system waits to receive a new message. The new message may set forth different types of information within a different sentence format. Typically, the system will receive numerous input sentence formats to allow the system to broadcast a wide variety of natural-sounding sentences conveying a wide variety of possible input data.
Also, although generally described with respect to an exemplary flight information system for providing flight information to passengers of an aircraft, the message assembler shown in Figure 1 is advantageously employed in any environment where variable input information must be communicated to a listening audience over an audio system.
In particular, the system is advantageously employed wherever input data to be broadcast falls within a finite number of data types, each having a range of anticipated values which may be stored in digitized spoken form in a data base.
With reference to Figure 2, a method by which the invention provides spoken numbers of proper style and inflection will now be described.
A natural-sounding sentence is composed of words of differing inflections. Automatically-generated sentences which do not use the proper inflection for component words may sound artificial or metallic. Accordingly, to assemble a natural-sounding sentence from digitized words, the proper inflection for the component words is preferably determined.
Generally, it has been found that three broad forms of inflection are necessary for use in achieving natural-sounding sentences incorporating numbers. The three forms of inflection are falling, rapidly rising, and slowly rising. A word spoken at the end of a sentence generally has a falling inflection. A word spoken in the middle of a sentence generally has a rapidly rising inflection if it is closely followed by another word. A word spoken in the middle of a sentence generally has a slowly rising inflection if it is not followed closely by another word. In accordance with the invention, at least a portion of the words used in assembling sentences are stored in three different digitized forms corresponding to the three inflection forms. Thus, a version of the word having the proper inflection can be retrieved, depending upon the location of the word within the sentence. In a possible embodiment, all words in the data base of digitized words are recorded under all three different inflections.
In a preferred embodiment, only "number" words, i.e. words used to recite numeric strings, are stored under all three inflection forms. It has been found that input sentence formats may be selected wherein all other words need be stored under only one inflection to achieve sufficiently natural-sounding sentences. For example, the word "and" need only be stored under the slowly rising inflection form, because the word "and" will always appear in mid-sentence not followed closely by another word.
Numbers are stored under all three inflections, since numbers may appear in a variety of positions within a sentence or at the end of a sentence. For example, the number string "1024" may appear in the middle of a sentence followed closely by another word: "Flight 1024A will depart from gate 15." Alternatively, the number string "1024" may appear in the middle of a sentence not followed closely by another word: "Flight 1024 will depart from gate 15." Finally, the string "1024" may appear at the end of a sentence: "Flight 15 will depart from gate 1024." Thus, all numbers are stored under all three inflection forms such that the proper inflection form can be retrieved depending upon the position of the number within the sentence.
In the example just described, the numeric string "1024" is actually composed of three component numbers: "ten," "twenty," and "four." The system processes the inflection of each of the individual component words separately. In this example, the word "ten" is followed closely by the word "twenty," and the word "twenty" is followed closely by the word "four." Accordingly, the words "ten" and "twenty" both have a rapidly rising inflection, regardless of the position of "1024" in the sentence. In this example, only the word "four" will have a slowly rising, rapidly rising, or falling inflection, depending upon the location of the number "1024" within the sentence.
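The per-word inflection rules just described can be expressed as a short selection routine. This is a sketch under the stated rules, with invented names; each form label stands in for one of the three stored digitized recordings of a number word.

```python
def number_inflections(component_words, followed_closely, end_of_sentence):
    """Choose an inflection form for each component word of a spoken number.

    All but the last component word are closely followed by another word,
    so they take the rapidly rising form. The last word's form depends on
    where the number string falls: rapidly rising if closely followed by
    another word, falling at the end of the sentence, slowly rising when
    mid-sentence but not closely followed."""
    forms = ["rapidly rising"] * (len(component_words) - 1)
    if followed_closely:
        forms.append("rapidly rising")
    elif end_of_sentence:
        forms.append("falling")
    else:
        forms.append("slowly rising")
    return list(zip(component_words, forms))
```

For "gate 1024." at the end of a sentence, only the final word "four" takes the falling form; "ten" and "twenty" are rapidly rising regardless of position.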
The system also selects a proper style for reciting numbers. The system characterizes numbers according to one of two general numeric styles. In the first, "short" style, the words "hundreds" or "thousands" are not spoken. For example, in the short style, the number "1024" is spoken as "ten twenty-four." In a "long" numeric style, the words "hundreds" or "thousands" are inserted.
For example, the number "1024" is recited as "one thousand twenty-four." When embodied within an information system for a passenger aircraft, the short style is used for reciting gate numbers, flight numbers, baggage claim areas, and the like. The long style is used for reciting altitudes, distances, temperatures, and the like. Thus, "flight 1024" is recited as "flight ten twenty-four," whereas "1024 feet" is recited as "one thousand twenty-four feet." During assembly of sentences incorporating numbers, the message assembler determines the proper numeric style and retrieves the digitized words appropriate to the selected numeric style. Thus, in the example, to recite "flight 1024," the system retrieves the individual words "flight," "ten," "twenty," and "four" from the digitized word data base for playback in succession. To recite "1024 feet," the system retrieves the individual digitized words "one," "thousand," "twenty," "four," and "feet."

A method by which the invention accounts for numeric style and numeric inflection to generate natural-sounding spoken numbers is shown in Figure 2. The steps of Figure 2 are executed as a part of the execution of step 210 of Figure 1. However, the steps of Figure 2 are executed only for processing alphanumeric strings which include numbers. Thus, other variable words, such as destination cities, e.g. "Los Angeles," are not processed using the procedure of Figure 2.

For alphanumeric strings with numbers, the system, at step 250, initially extracts all numeric strings from the input alphanumeric character string. Thus, for input string "1024A," the system extracts "1024." Also as an example, for the string "10B24," the system extracts the number strings "10" and "24." Thus, an input character string may contain one or more numeric strings. For each extracted numeric string, the system, at step 252, determines the proper numeric style for the numeric string. Thus, if the numeric string is "1024," the system determines whether it should be recited in the long style or the short style.
This determination is made from an examination of the data type of the input character string. For each numeric data type, the system stores an indicator of the corresponding style. For example, if the data type is a "flight number," then the short style is used. If the data type for the input character string is an altitude, then the long style is selected. The proper data type may be determined from the location of the character string within the input data block. Alternatively, the data block may include headers immediately prior to each data type, designating the data type.
Once the proper numeric style is determined, the system, at step 254, parses the numeric string into its component numbers according to the selected numeric style.
Thus, "1024" is parsed as "1000" and "24" for the long numeric style, and "10" and "24" for the short numeric style.
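The style-dependent parse of step 254 might be sketched as follows. The decomposition rules here are a guess at one workable scheme, not the patent's exact algorithm:

```python
def parse_by_style(num, style):
    """Split a numeric string into the component numbers recited aloud (step 254).

    Short style reads digit pairs ("10", "24"); long style reads place
    values ("1000", "24"). Assumed rules for illustration only.
    """
    if style == "short":
        # Recite a four-digit number as two digit pairs: "1024" -> "10", "24".
        if len(num) == 4:
            return [num[:2], num[2:]]
        return [num]
    # Long style: decompose into thousands, hundreds, and remainder.
    n = int(num)
    parts = []
    for unit in (1000, 100):
        if n >= unit:
            parts.append(str((n // unit) * unit))
            n %= unit
    if n or not parts:
        parts.append(str(n))
    return parts
```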
Next, at step 256, the system assembles a word equivalent of the alphanumeric string which includes any parsed numeric strings, as well as any letters or other characters. Once a word equivalent of the alphanumeric string is assembled in sequential order, the system, at step 258, determines the inflection of all component numbers included within the word equivalent of the alphanumeric string. To this end, the system examines each "number" word within the string to determine whether the word is positioned in the middle of the string or at the end of the string. If in the middle, then the rapidly rising inflection form is chosen. If the "number" word occurs at the end of the string, then the system must determine what words, if any, follow the alphanumeric string. If the alphanumeric string constitutes the final portion of a sentence, a "number" word at the end of the string therefore falls at the end of the sentence. Hence, the falling inflection is chosen. If, on the other hand, the alphanumeric string is positioned in the middle of a sentence, then a "number" word falling at the end of the string will be assigned the slowly rising inflection.
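The inflection rules of step 258 reduce to a position test. A minimal sketch, with the three inflection names taken from the text:

```python
def inflection_form(word_index, num_words_in_string, string_ends_sentence):
    """Pick the inflection for a number word by its position (step 258)."""
    if word_index < num_words_in_string - 1:
        return "rapidly rising"      # number falls in the middle of the string
    if string_ends_sentence:
        return "falling"             # string, and hence sentence, ends here
    return "slowly rising"           # string ends but the sentence continues
```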
Once the proper inflection form for each component number is determined at step 258, the system is ready to retrieve the digitized spoken words corresponding to all components of the word equivalent of the alphanumeric string. This retrieval is accomplished at step 260.
Processing continues at step 212 of Figure 1, which operates to broadcast the retrieved words. As the sentence is broadcast to the passengers, numbers recited within the sentence are thereby spoken in the proper style and with the proper inflection.
The system shown in Figures 1 and 2 may be configured to assemble sentences in any of a variety of languages. To handle various languages, the data base of digitized words must include the necessary foreign words and phrases. Also, each different language has different sentence formats. For example, for a German sentence, the sentence format may have the fixed verb of the sentence at the end of the sentence format, rather than near the beginning of the sentence format as commonly found in English sentences.
Each alternative language may be handled by a separate microprocessor device. Alternatively, a single microprocessor device may sequentially process all languages.
In accordance with a preferred embodiment shown in the remaining figures, the spoken message assembler described above is implemented within an on-board flight information system for providing flight information to airline passengers. In a first embodiment, the system provides connecting gate and baggage claim area information.
In a second embodiment, the system provides flight information such as air speed, altitude, and information regarding points of interest over which the aircraft travels. This information may be tailored to the various phases of flight of the aircraft.
The heart of the system, a data processor 13, receives messages containing flight information over a data bus 59 from various systems of the aircraft. Examples of such systems include an ACARS receiver 19, a navigation system 15, an aircraft air data system 17, and a maintenance computer 21. Each of these systems, from which information is received, is entirely conventional and will not be described in detail. Data processor 13 may be connected to any one or a multiple of these systems depending on the type of information desired to be displayed to the passengers of the aircraft. Data processor 13 may be controlled by a control unit 22, which includes various means for allowing for manual activation of the data processor and control over the functions of the data processor.
Data processor 13 generates audio messages using the message assembler described above and transmits the audio messages in the form of audio signals over an audio link line 91 to an audio selector unit 92 that routes the audio signal to a plurality of conventional audio systems.
For example, the audio signals may be transmitted over a link line 93 to a public address speaker 95 in the passenger compartment of the aircraft or over link line 97 to a plurality of individual passenger headphone sets 96 via individual multichannel selectors 94.
The data processor may also generate video display screens which set forth the data incorporated in the audio messages. The video display screens are output as a video signal and transmitted over a video link line 31 to a conventional video selector unit 29 that routes the video signal to a plurality of conventional video display systems.
For example, the video signal may be transmitted over link lines 39 to a preview monitor 33, or over link lines 43 to a video monitor 37, or over link lines 41 to a video projector 35, which projects the sequences of video screens received onto a video screen. Message assembler 200 and its data base of digitized words and phrases are components of data processor 13 and, hence, are not shown separately in Figure 3.
It should be understood that this particular illustration of an aircraft audio/video display system is only set forth as an example of one of many such systems that may be utilized and, therefore, should not be considered as limiting the present invention.
The first embodiment, wherein connecting gate and baggage claim area information is processed, will now be described with particular reference to Figures 3-6. In Figure 3, a conventional ACARS/AIRCOM/SITA receiver 19 is shown. This receiver receives connecting gate and baggage claim area information from an airline central computer 47 via a transmitting antenna 51 over carrier waves 53. A link line 49 connects airline computer 47 to transmitting antenna 51. However, any transmitter receiver system could be used, including a satellite communication system, and this invention is not limited to the ACARS system referred to herein.
Destination airport information may also be entered into the system via an optional data entry terminal (not shown).
Assuming that the ground base station and the aircraft are communicating over an ACARS/AIRCOM/SITA communication system, information transmitted from ground base computer 47 is received by the ACARS/AIRCOM/SITA receiver 19. The data is output from the ACARS/AIRCOM/SITA receiver 19 to the data processor 13 in a format such as described in ARINC characteristic 597, 724, or 724B.
In order for the data processor 13 to promptly process the information received, the data is assumed to be in a specific fixed format when it is received from ACARS receiver 19. The format illustrated in Table II is an example of a possible format for up-linked data:

Table II

AM01091712015TE /NCONNECTING GATES/.
LUFTHANSA/,FLIGHT/,966/-AO/,ARRIVING/,IN/, FRANKFURT/,AT/,11:45/-TO/,TERMINAL/,A/, GATE/,NUMBER/,17/,BAGGAGE CLAIM AREA/,C/.
/80301/-PO/,FLIGHT/,/-AO/, WILL BE/,DEPARTING/,FOR/,
/-PO/,FROM/,TERMINAL/,/-AO/,GATE/,-NO/,AT/,/-TO/./:
/903AIR FRANCE/,841/,PARIS/,A/,10/,12:15/, LUFTHANSA/,502/HAMBURG/,B/,5/,12:30/, SWISSAIR/,65/,ZURICH/,B/,2/,12:35/:

The data format contains strings of characters which are utilized by data processor 13 to generate audio messages and optional video displays. Exemplary strings are the flight number string "966," the destination airport string "Frankfurt," the arrival gate string "17," and the baggage claim area string "C." For audio messages, relevant data is extracted from the strings and incorporated into audio messages via message assembler 200. For video displays, these strings are used both to retrieve an airport chart representing the destination airport, and for direct inclusion in video displays.
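The fixed up-linked format lends itself to simple field splitting. A sketch, assuming "/," separates fields and a trailing "/." or "/:" closes a record — conventions read off Table II, not the ARINC 597/724 characteristics themselves:

```python
def split_fields(record):
    """Split one record of the up-linked data block into its word strings.

    Assumes '/,' delimits fields and '/.' or '/:' terminates the record
    (conventions inferred from Table II for illustration).
    """
    record = record.rstrip("/.:")           # drop the record terminator
    return [f for f in record.split("/,") if f]

split_fields("LUFTHANSA/,FLIGHT/,966/.")    # -> ['LUFTHANSA', 'FLIGHT', '966']
```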
From information contained within the exemplary data block of Table II, the following spoken audio messages may automatically be generated:

"Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M, terminal A, gate number seventeen, baggage claim area C."
"Air France flight eight forty one will be departing for Paris from terminal A gate ten at twelve fifteen." "Lufthansa flight five oh two will be departing for Hamburg from terminal B gate five at twelve thirty." "Swissair flight sixty five will be departing for Zurich from terminal B gate two at twelve thirty five." To generate these spoken word audio messages, the data processor utilizes the message assembler, described above, to extract relevant data and to assemble messages reciting the data.
To generate the message "Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M, terminal A, gate number seventeen, baggage claim area C," the message assembler extracts the variable data "Lufthansa," "966," "Frankfurt," "11:45," "A," "17," and "C" for incorporation into a sentence having fixed words "flight," "arriving in," "terminal," "gate number," and "baggage claim area." The message processor retrieves spoken word equivalents of the alphanumeric data extracted from the message in the manner described above. The numbers "966," "11:45," and "17" contained within the flight number, arrival time, and arrival gate may be processed according to the inflection and style manipulation procedure described above with reference to Figure 2.
To generate the connecting flight information messages, the message assembler extracts the various fixed and variable words from the input message, retrieves spoken word equivalents for these alphanumeric values, and broadcasts the spoken word equivalents in succession to produce complete sentences.
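The fixed/variable assembly described above can be sketched as a template fill. The format list and slot names below are hypothetical stand-ins for the patent's preset sentence formats:

```python
# Hypothetical fixed sentence format; braces mark variable-word slots.
ARRIVAL_FORMAT = ["{airline}", "flight", "{number}", "arriving", "in",
                  "{city}", "at", "{time}"]

def assemble_message(fmt, variables):
    """Merge fixed words and retrieved variable words into playback order."""
    return [variables[w[1:-1]] if w.startswith("{") else w for w in fmt]

words = assemble_message(ARRIVAL_FORMAT, {
    "airline": "Lufthansa", "number": "nine six six",
    "city": "Frankfurt", "time": "eleven forty five A M"})
# words, joined, reads: "Lufthansa flight nine six six arriving in Frankfurt
# at eleven forty five A M"
```

In the real system each element of the resulting list would index a digitized spoken word or phrase for playback, rather than printable text.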
A total of four different audio messages are thereby generated from the data contained within the data block of Table II. The four messages are generated by executing the steps of Figure 1 a total of four times. Once completed, the system waits until a new input message is received.
An extremely wide range of spoken messages can be generated providing a wide variety of useful information.
For example, input messages may provide flight information such as altitude, ground speed, outside air temperature, time or distance to destination, time or distance from destination, etc. Also, weather-related messages may be received and processed, such as messages describing the temperature and weather conditions at the destination airport. Alternatively, weather conditions within the vicinity of the aircraft may be described, including wind speed, visibility, ceiling, etc. Messages providing marinerelated information may be provided. For example, messages specifying the surf, tide, and marine visibility may be provided.
In general, any input message can be processed so long as each of the component words for inclusion in the sentence is stored in the digitized memory of the system.
Thus, a wide variety of custom messages may be typed into a ground-based computer, then transmitted to the aircraft for conversion to a spoken audio message. The variety of possible messages is limited only by the number of digitized words stored in the digitized memory of the system. Accordingly, by providing a system with a larger vocabulary of digitized words, a wider range of audio messages can be generated.
The system may also generate an optional video display for presentation to the passengers while the audio messages are simultaneously provided over the speaker system. To this end, the system may extract the above-described flight information from the input message of Table II and format the information for a textual display.
Alternatively, rather than providing a simple textual display, the system may retrieve a map of the destination terminal and provide icons or the like identifying the locations of the various arrival and departure gates on the map.
Data processor 13 operates on the information it receives in a manner illustrated by the flowchart of Figure 4. The input to data processor 13 is from a digital data bus input port on an interrupt basis, 181. Whenever there is information to be received, the data processor interrupts whatever it is doing to read the new data. At 183, processor 13 reads the input message containing the connecting gate data from the bus until a completed message, 185, is received. The processor keeps returning to the interrupt, 187, until an end of message is received.
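The interrupt-driven read loop can be sketched as accumulating characters until an end-of-message mark appears. The terminator below is an assumption taken from the Table II record conventions:

```python
END_OF_MESSAGE = "/."   # assumed terminator, per the Table II conventions

def read_message(bus_chars):
    """Accumulate bus characters (one per interrupt) until end of message."""
    buf = ""
    for ch in bus_chars:        # each iteration models one interrupt
        buf += ch
        if buf.endswith(END_OF_MESSAGE):
            return buf
    return None                 # incomplete; keep returning to the interrupt
```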
After receiving an end of message, the alphanumeric strings providing the fixed and variable words are extracted, at 189, from the input message. At 190, the extracted alphanumeric strings are output to message assembler 200 for generation of audio messages based on data contained within the fixed and variable alphanumeric strings. The thus-generated audio message is output to the passenger audio system, at 194, via a link line 101 to an audio broadcast system 103. The audio messages may be broadcast over a public address speaker system within the passenger cabin or may be broadcast over a conventional multichannel individual headphone system to the passengers.
Alternatively, the message assembler may provide the audio messages in a variety of languages, each language either being provided over a separate audio channel or broadcast sequentially over a single channel. Background music may be provided to accompany the audio messages.
For the optional video display, the extracted connecting gate information is arranged into its predetermined page format, at 191, for display. A terminal chart signifying the destination airport specified in the input message is retrieved, at 193, from a data storage unit. An aircraft symbol is positioned at the arrival gate on the terminal chart and the arrival gate and baggage claim area information is written on the terminal chart for display.
The terminal chart, along with its information, is output as a video signal to the video display according to a specified sequence, at 195. The terminal chart is displayed, at 197, for a period of typically 10 to 60 seconds. When that display time has elapsed, portions of the alphanumeric text containing the connecting gate information are displayed in a suitable format, at 199, for a specified period of time.
Preferably, the duration of the video displays is synchronized with the duration of the audio message which is simultaneously broadcast.
If multiple pages of terminal charts or connecting gate information are to be displayed, the pages are cycled onto the display. The entire process is continually repeated.
Upon the aircraft approaching its destination, a display, such as the exemplary display illustrated in Figure 5, may be presented to the passengers while audio messages reciting the displayed information are simultaneously broadcast.
In order to familiarize the passengers with the layout of the terminal and all the gates of the terminal, as well as the baggage claim areas, a display shown in Figure 6
may be provided to the passengers while an audio message reciting the baggage claim area is simultaneously broadcast.
As can be seen, the terminal chart of Figure 6 illustrates all the gates and terminal buildings for a particular airport, along with baggage claim areas. In addition, the aircraft symbol is located next to the arrival gate.
The connecting gate information may be processed to produce audio messages and video displays immediately after the information is received over the ACARS system, or the information may be stored until the aircraft begins its approach to its destination.
The audio portion may be provided as a stand-alone system with no video display generation hardware or software required. In such case, only the audio messages are generated and broadcast. All of the information provided in a combined audio/video system is provided in a stand-alone audio system, with the exception that graphic displays such as flight plan maps and destination airport charts are not provided.
The stand-alone audio system is ideally suited for aircraft not possessing passenger video display systems. In such aircraft, the stand-alone audio system merely interfaces with a conventional multichannel passenger audio broadcast system, and provides flight information, as described above, through the passenger audio system.
Referring to Figures 7-9, an alternative system for providing flight information to the passengers in the aircraft passenger compartment is illustrated. The alternative system may tailor the information to various phases of the flight.
An alternative data processor 13' utilizes the received flight information and determines a current phase of the flight of the aircraft, i.e., the system determines whether the aircraft is in "en route cruise," "descent," etc. Once the current phase of the flight has been determined, data processor 13' generates audio messages and optional sequences of video display screens tailored to the current phase of the flight for presentation to the passengers of the aircraft. For example, if the aircraft is in an "en route cruise" phase, data processor 13' may generate an audio message reciting the ground speed and outside air temperature and simultaneously generate a video display screen for displaying the same information. If the aircraft is in a "descent" phase, data processor 13' may generate a sequence of audio messages reciting the time to destination and the distance to destination and simultaneously generate a video display screen presenting the same information.
Each audio message provides useful information appropriate to the current phase of the flight plan. For example, during power on, preflight, engine start, and taxi out, various digitized audio messages may be provided which welcome passengers aboard the aircraft, describe the aircraft and, in particular, provide safety instructions to the passengers.
During flight phases such as takeoff, climb, and en route cruise, various audio messages may be generated which indicate points of interest over which the aircraft is flying or recite flight information received via message handler 63'. For example, if an input message is received providing ground speed, outside air temperature, time to destination, and altitude, an audio message may be generated by message assembler 200 reciting the information. A video display screen such as shown in Figure 8 may be simultaneously provided. If the aircraft has approached a point of interest, an audio message may be assembled and broadcast to the passengers indicating the proximity of the aircraft to the point of interest. A video display screen such as the one shown in Figure 9 may be simultaneously provided.
... miles per hour. The current outside air temperature is minus 67 degrees Fahrenheit." The audio message is then broadcast to the passengers.
Data processor 13' includes: a message handler 63' for receiving flight information messages; a flight information processor 65' for determining the current flight phase and for generating audio messages and video display sequences corresponding to the current flight phase or point of interest; and a data storage unit 69' for maintaining flight information and digitized data.
Message handler 63' receives flight phase information as encoded messages over data bus 59'. As each new flight information message is received, message handler 63' generates a software interrupt. Flight information processor 65' responds to the software interrupt to retrieve the latest flight information from message handler 63'.
Once retrieved, flight information processor 65' stores the flight information in a flight information block 104' in data storage unit 69'.
In addition to maintaining digitized words and phrases for use in assembling audio messages, storage unit 69' also maintains specific sequences of graphic displays 120'. Storage unit 69' also maintains "range" tables 114', which allow flight information processor 65' to determine the current phase of the flight plan. For example, for the "en route cruise" phase, range table 114' may define an altitude range of at least 25,000 feet such that, if the received flight information includes the current altitude of the aircraft, and the current altitude is greater than 25,000 feet, data processor 65' can thereby determine that the current phase of the flight plan is the "en route cruise" phase and generate audio messages and
optional video displays appropriate to the "en route cruise" phase of the flight plan.
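The range-table lookup can be sketched as a search for the phase whose altitude band contains the current reading. The phase names and bands below are hypothetical; only the 25,000-foot "en route cruise" threshold comes from the text:

```python
# Hypothetical range table: phase -> inclusive altitude band in feet.
RANGE_TABLE = {
    "taxi":            (0, 50),
    "climb":           (51, 24999),
    "en route cruise": (25000, 60000),   # threshold per the text
}

def current_phase(altitude_ft):
    """Return the flight phase whose altitude range contains the reading."""
    for phase, (lo, hi) in RANGE_TABLE.items():
        if lo <= altitude_ft <= hi:
            return phase
    return None

current_phase(31000)   # -> 'en route cruise'
```

A fuller table would key on several flight parameters (ground speed, gear position, etc.), but the containment test is the same.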
Range tables 114' also include points of interest along the flight route of the aircraft. For each point of interest, range tables 114' provide the location of the point of interest and a "minimum range distance" for the point of interest. If the received flight information includes the location of the aircraft, flight information processor 65' determines whether the aircraft is located within the minimum range associated with any of the points of interest. Thus, once the aircraft has reached the vicinity of a point of interest, the system automatically generates audio messages and optional video display screens informing the passengers of the approaching point of interest.
The audio message may recite the name of the point of interest and the distance and travel time to the point of interest and the relative location of the point of interest to the aircraft, e.g., "left" or "right." The audio messages may be provided in a variety of languages, with each language broadcast on a different audio channel.
Alternatively, digitized monologues describing the points of interest may be accessed from a mass storage device for playback while the aircraft is in the vicinity of the point of interest. In such an embodiment, the message assembler need not be used to assemble audio messages.
Rather, fixed digitized monologues are simply broadcast.
These may be accompanied by background music.
The optional video screens may provide, for example, the name of the point of interest, the distance and travel time to the point of interest, and a map including the point of interest, with the flight route of the aircraft superimposed thereon.
Considering points of interest in greater detail, periodically, flight information processor 65' compares the current location of the aircraft with the location of points of interest in the data base tables and determines whether the aircraft has reached the vicinity of a point of interest. As can be seen from an exemplary range table 114' provided in Table III, range table 114' can include points of interest such as cities and, for each point of interest, include the location in latitude and longitude and a minimum range distance.
Table III

POINTS OF INTEREST

Item      Latitude      Longitude      Minimum Range
City A    45 degrees    112 degrees    100 miles
City B    47 degrees    114 degrees    10 miles
City C    35 degrees    110 degrees    5 miles

Thus, for example, city A is represented as having a particular location and a minimum range distance of 100 miles, whereas city B has a different location and a minimum range distance of 10 miles. Flight information processor 65' includes an algorithm for comparing the current location of the aircraft to the location of each city and for calculating the distance between the aircraft and the city. Once the distance to the city is calculated, flight information processor 65' determines whether the distance is greater than or less than the minimum range specified for that city.
Taking as an example City A, if the aircraft is 200 miles from city A, flight information processor 65' will determine that the aircraft has not yet reached the vicinity of city A. Whereas, if the distance between the aircraft and city A is 90 miles, flight information processor 65' can determine that the aircraft has reached the vicinity of city A and initiate a sequence of displays, previously described, informing the passengers. The algorithm for calculating the distance between the aircraft and each point of interest, based on the latitudes and longitudes, is conventional in nature and will not be described further.
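The patent leaves the distance algorithm unspecified ("conventional"); the haversine great-circle formula is one such conventional choice, sketched here with Table III's City A as the test point:

```python
import math

EARTH_RADIUS_MI = 3959.0  # mean Earth radius in statute miles

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def in_vicinity(aircraft, poi, min_range_mi):
    """True once the aircraft is within the point's minimum range distance."""
    return distance_miles(*aircraft, *poi) <= min_range_mi

# City A from Table III: 45 degrees, 112 degrees, minimum range 100 miles.
in_vicinity((45.9, 112.0), (45.0, 112.0), 100)   # True: roughly 62 miles out
```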
The algorithm may take considerable processing time and, hence, is only executed periodically. For example, the point-of-interest table is only accessed after a certain number of miles of flight or after a certain amount of time has passed.
Range table 114' may include the location of a wide variety of points of interest, including cities, landforms, the equator, the International Date Line, and the North and South Poles.
What has been described is a spoken message assembler for generating natural-sounding spoken sentences conveying input data. As a specific application, the message assembler has been described in combination with a flight information system for aircraft passengers that provides useful information to the passengers en route to their destination. The system connects into a conventional passenger audio broadcast system. In one embodiment, the system provides destination terminal information such as connecting gates and baggage claim areas and flight information. In another embodiment, the flight information is tailored to the current phase of the flight plan of the aircraft. For example, messages describing points of interest are generated as the aircraft reaches the vicinity of the points of interest. The systems can be combined to provide both types of information. In such a combined system, the destination terminal information may be automatically presented once the aircraft reaches the "approach" phase of the flight. The system may also provide the information in video form over a video system.
Various modifications are contemplated, and they obviously will be resorted to by those skilled in the art without departing from the spirit and scope of the invention as hereinafter defined by the appended claims, as only a preferred embodiment of the invention has been disclosed.
Claims (12)
- 2. The audio information system of Claim 1, wherein said input data includes connecting flight information data including one or more of flight numbers, destination terminals, gate numbers, baggage claim area numbers, and arrival and departure times, and wherein said memory means stores digitized spoken words corresponding to said connecting flight information, such that said complete audio messages provide a recitation of the flight information in a natural-sounding sentence.
- 3. The audio information system of Claim 1, wherein at least some of said digitized spoken words are stored in a plurality of inflection forms, each form having a different vocal inflection, and wherein said data processor further includes: means for determining a proper vocal inflection form for said words, said proper inflection being determined by the relative placement of said words in said audio message; and means for selecting said proper inflection form of said selected digitized words for inclusion in said complete audio message.
- 4. The audio information system of Claim 1, wherein at least some of said digitized words are stored in a plurality of forms, each form being a different language version of said word, and wherein said data processor further includes means for retrieving and assembling words of matching languages.
- 5. The audio information system of Claim 4, wherein said data processor assembles a plurality of messages conveying the same input data, said messages being in different languages.
- 6. The audio information system of Claim 5, wherein said system includes means for outputting said plurality of messages of different languages in sequential order through a single output channel.
- 7. The audio information system of Claim 5, wherein said system includes means for outputting said plurality of messages of different languages simultaneously over a plurality of separate output channels.
- 8. The audio information system of Claim 1, wherein the digitized spoken words are maintained in digital form on a mass storage device.
- 9. The audio information system of Claim 1, wherein said system is mounted aboard a passenger aircraft and includes means for broadcasting said complete audio messages to passengers within said aircraft.
- 10. The audio information system of Claim 9, further including a receiver for receiving flight information identifying the location of the aircraft, and wherein said memory means also stores the names and locations of a plurality of points of interest in digital form; said data processor means further including means for determining a current point of interest by: comparing the location of the aircraft with the locations of points of interest stored by the memory means to identify, out of the plurality of points of interest, a point of interest in the vicinity of the current location of the aircraft; retrieving digitized words identifying the name and relative location of the point of interest in the vicinity of the aircraft; and assembling a complete audio message providing the name and relative location of the point of interest such that, as points of interest are reached during the flight of the aircraft, the system automatically broadcasts an audio message identifying the point of interest to the passengers.
- 11. The audio information system of Claim 9, wherein the aircraft follows a flight plan having a plurality of phases, and wherein said data processor means further includes: means for determining a current phase of the flight plan; means for selectively retrieving flight information from said input data, said selected flight information being selected according to the determined current phase of flight, said selected flight information being used by said message assembly means for generating said audio message such that, as each phase of the flight plan is reached, the system assembles and broadcasts an audio message reciting useful flight information tailored to the current phase of the flight plan to the passengers.
- 12. The audio information system of Claim 11, wherein said data processor means also retrieves a sequence of video display information corresponding to the determined current phase of flight and inputs the retrieved sequence of video display information to a video display system for display to the passengers, such that, as each phase of the flight plan is reached, the system displays a sequence of video displays tailored to the current phase of the flight plan to the passengers along with the audio messages.
- 13. The audio information system of Claim 11, wherein the memory means further includes a table means for storing a range of flight information corresponding to each phase of the flight plan and wherein the data processor determines the current phase of the flight plan by determining a phase having a range corresponding to the received flight information.
- 14. An audio information system for automatically generating audio messages for a listening audience, said audio messages having preselected sentence formats, said system comprising: receiving means for receiving input data including one or more fixed units of data and one or more variable units of data; memory means for storing digitized spoken words including fixed words corresponding to portions of said preselected sentence formats and variable words corresponding to said variable units of data, with each variable word being a digitized spoken equivalent of a corresponding unit of data; data processor means for generating complete audio messages based on the input data, said data processor means including: means for determining a sentence format corresponding to the input data; means for retrieving digitized fixed words corresponding to the sentence format; and means for retrieving digitized variable words corresponding to the variable units of data within said input data; and message assembly means for assembling said retrieved fixed and variable words into complete audio messages, such that audio messages are generated which convey the input data in natural-sounding sentences.
- 15. A terminal and gate information system for passengers in an aircraft comprising: an audio broadcast system; a receiver for receiving destination airport terminal information regarding one or more of connecting flight numbers, departure times, departure gates and destinations, and baggage claim areas from a ground-based transmitter; memory means for storing a plurality of digitized words corresponding to said destination terminal information; audio message assembly means for creating audio messages incorporating said destination airport terminal information by selectively retrieving and assembling said digitized words; and means for inputting said audio messages to said audio system for broadcast to the passengers.
- 16. The audio information system of Claim 15, wherein said memory means further stores data for a plurality of airport charts representative of destination airport terminals; with said receiver receiving information regarding flight numbers and destination airports from a ground-based transmitter; and data processor means utilizing the received flight numbers and airport information to retrieve the data for the airport chart of the destination airport terminal from said memory means and inputting the data to a video display system for display.

DATED this 6th day of April 1993. ASINC, INC. By their Patent Attorneys, CULLEN & CO.

AUDIO/VIDEO INFORMATION SYSTEM

ABSTRACT OF THE DISCLOSURE

An audio message assembler is provided for generating natural-sounding spoken messages for communicating real-time data to a listening audience. The system maintains a spoken-word data base having hundreds or thousands of digitally-stored words and phrases. The system receives digital information in the form of alphanumeric character strings, retrieves a preset sentence format appropriate to the particular input data received, and retrieves fixed or variable digitized words and phrases for inclusion in the preset format. For each specific input alphanumeric string, the system retrieves a digitized spoken-word equivalent of the alphanumeric string. The system then assembles the retrieved digitized words into complete sentences for broadcast to the listening audience. In a particular embodiment, the system is implemented within an aircraft flight information system for providing flight information to the passengers of an aircraft. In this application, the system receives flight information such as connecting flights and en-route aircraft data, and generates audio messages for broadcast to the passengers which recite the flight information in natural-sounding sentences.
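As a rough illustration of the abstract's retrieval of a "digitized spoken-word equivalent" for an alphanumeric string, the sketch below breaks an input such as a gate or flight identifier into per-character spoken-word keys. This per-character treatment is a simplifying assumption; the patent's word bank also stores whole words and phrases:

```python
# Hypothetical sketch: map each character of an alphanumeric input string
# to the key of its pre-recorded spoken equivalent, so a gate identifier
# like "C7" plays back as "charlie" then "seven".

SPOKEN = {
    "A": "alpha", "B": "bravo", "C": "charlie",
    "0": "zero", "1": "one", "2": "two", "3": "three",
    "4": "four", "5": "five", "6": "six", "7": "seven",
    "8": "eight", "9": "nine",
}

def spoken_equivalent(text):
    """Return the ordered spoken-word keys for an alphanumeric string."""
    return [SPOKEN[ch] for ch in text.upper() if ch in SPOKEN]
```

Each returned key would index a digitized clip in the spoken-word data base; characters without a recorded equivalent are simply skipped in this sketch.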
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU36718/93A AU667347B2 (en) | 1993-04-06 | 1993-04-06 | Real-time audio message system for aircraft passangers |
EP93302701A EP0620697A1 (en) | 1993-04-06 | 1993-04-06 | Audio/video information system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU36718/93A AU667347B2 (en) | 1993-04-06 | 1993-04-06 | Real-time audio message system for aircraft passangers |
Publications (2)
Publication Number | Publication Date |
---|---|
AU3671893A AU3671893A (en) | 1994-10-13 |
AU667347B2 true AU667347B2 (en) | 1996-03-21 |
Family
ID=3723905
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU36718/93A Ceased AU667347B2 (en) | 1993-04-06 | 1993-04-06 | Real-time audio message system for aircraft passangers |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU667347B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7986249B2 (en) * | 2008-11-24 | 2011-07-26 | Honeywell International Inc. | System and method for displaying graphical departure procedures |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5177800A (en) * | 1990-06-07 | 1993-01-05 | Aisi, Inc. | Bar code activated speech synthesizer teaching device |
US5181250A (en) * | 1991-11-27 | 1993-01-19 | Motorola, Inc. | Natural language generation system for producing natural language instructions |
US5208590A (en) * | 1991-09-20 | 1993-05-04 | Asinc, Inc. | Flight phase information display system for aircraft passengers |
1993
- 1993-04-06 AU AU36718/93A patent/AU667347B2/en not_active Ceased
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5177800A (en) * | 1990-06-07 | 1993-01-05 | Aisi, Inc. | Bar code activated speech synthesizer teaching device |
US5208590A (en) * | 1991-09-20 | 1993-05-04 | Asinc, Inc. | Flight phase information display system for aircraft passengers |
US5181250A (en) * | 1991-11-27 | 1993-01-19 | Motorola, Inc. | Natural language generation system for producing natural language instructions |
Also Published As
Publication number | Publication date |
---|---|
AU3671893A (en) | 1994-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5208590A (en) | Flight phase information display system for aircraft passengers | |
EP2858067B1 (en) | System and method for correcting accent induced speech in an aircraft cockpit utilizing a dynamic speech database | |
US6160497A (en) | Visual display of aircraft data link information | |
US6175314B1 (en) | Voice annunciation of data link ATC messages | |
US8306675B2 (en) | Graphic display system for assisting vehicle operators | |
US7580377B2 (en) | Systems and method of datalink auditory communications for air traffic control | |
WO2002036427A3 (en) | Weather information network including graphical display | |
US9704407B2 (en) | Aircraft systems and methods with enhanced NOTAMs | |
US20230060442A1 (en) | Portable Flight Navigation Tool Adapted to Assist Pilots in Compliance with International Flight Procedures and Navigation | |
US20140122070A1 (en) | Graphic display system for assisting vehicle operators | |
US20100332122A1 (en) | Advance automatic flight planning using receiver autonomous integrity monitoring (raim) outage prediction | |
US6639522B2 (en) | System and method of automatically triggering events shown on aircraft displays | |
WO2005038748A3 (en) | Integrated flight management and textual air traffic control display system and method | |
US9002544B1 (en) | System, device, and method for presenting instrument approach procedure advisory information to a pilot on an aircraft | |
EP0620697A1 (en) | Audio/video information system | |
AU667347B2 (en) | Real-time audio message system for aircraft passangers | |
Lamel et al. | Generation and synthesis of broadcast messages | |
US20020123830A1 (en) | Instrument reference flight display system for horizon representation of direction to next waypoint | |
JP2012238305A (en) | Announcement information presentation system, announcement information presentation apparatus, and announcement information presentation method | |
Prinzo et al. | US airline transport pilot international flight language experiences, Report 1: Background information and general/pre-flight preparation | |
Kratchounova | Pilot Reports (PIREPs) end-user (Pilots and Controllers) focus groups | |
Baxley et al. | The development of cockpit display and alerting concepts for Interval Management (IM) in a near-term environment | |
Cartwright et al. | A history of aeronautical meteorology: personal perspectives, 1903–1995 | |
Prinzo et al. | US airline transport pilot international flight language experiences, report 5: Language experiences in native English-speaking airspace/airports | |
Cox et al. | Automatic Barometric Updates from Ground-Based Navigational Aids |