
WO2018128245A1 - Artificial intelligence server for prioritized air traveling itinerary - Google Patents


Info

Publication number
WO2018128245A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
data
mobile terminal
available
prioritized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/009921
Other languages
French (fr)
Inventor
Theodore CHANG
Jongsung Bae
Insuk CHOI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of WO2018128245A1 publication Critical patent/WO2018128245A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/14Travel agencies
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/02Reservations, e.g. for tickets, services or events
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/343Calculating itineraries
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047Optimisation of routes or paths, e.g. travelling salesman problem
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Electronic shopping [e-shopping] by investigating goods or services
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Recommending goods or services

Definitions

  • An embodiment of the present invention relates to an artificial intelligence server, networked with mobile terminals, that provides optimized travel, and to a method of connecting to users' mobile terminals in an interactive and real-time manner.
  • An Artificial Intelligence ("AI") server of an example of the present invention comprises a memory unit configured to store current location data of a mobile terminal and air flight data near the current location, each received from API providers.
  • the memory unit may additionally store a wish-visit-list received from a mobile terminal.
  • the AI server of an example of the present invention comprises a memory unit configured to store current location data of a mobile terminal, where the current location data include the GPS coordinates of the mobile terminal and map data at least comprising a route from the mobile terminal to the boarding gate of an available airliner.
  • the current location data may further include current traffic information for the route from the mobile terminal to the boarding gate of an available airliner.
  • the AI server of an example of the present invention further comprises a processing unit configured to calculate a prioritized itinerary list by applying a genetic algorithm or a neural network algorithm to the data.
  • the prioritized itinerary may comprise an air travel itinerary and/or available stand-by ticket information.
  • the AI server of an example of the present invention may comprise a memory unit configured to store current location data of a mobile terminal and available ticket data, each received from API providers.
  • the AI server of an example of the present invention may further comprise a processing unit configured to calculate and push to the mobile terminal a prioritized list of tickets whose gates the user can reach in time.
  • Users on mobile terminals may find available stand-by tickets with the help of the AI server, along with the most efficient way to travel around a plurality of places.
  • FIG. 1 is a schematic diagram of an AI server networked with mobile terminals and an API provider.
  • FIG. 2 is a hardware configuration of the AI server comprising GPU, CPU, RAM, ROM, and auxiliary memory.
  • FIG. 3 is a sequence diagram showing an example of data communication among the AI server, mobile terminals, and an API provider to serve air flight tickets based on an optimized air travel itinerary.
  • FIG. 4 is a flowchart showing a genetic algorithm for finding a prioritized list of air travel itineraries.
  • FIG. 5 is a sequence diagram showing an example of data communication among the AI server, mobile terminals, and an API provider to provide a stand-by ticket alert service.
  • FIG. 6 is a schematic diagram of a basic one-cell neural network.
  • FIG. 7 is a diagram of an example of a Feed-Forward Neural Network (FFNN) including two hidden layers.
  • FIG. 8 is a diagram of an example of a Recurrent Neural Network (RNN).
  • FIG. 9 is a flowchart of an example process used to create a deep learning artificial neural network for obtaining an optimized air travel itinerary in accordance with aspects of the present invention.
  • An AI server 100 networked with mobile terminals 1000 according to an example embodiment of the present invention will be described with reference to FIGS. 1 to 4.
  • the same elements will be denoted by the same reference signs, without redundant description.
  • FIG. 1 is a block diagram showing a configuration of the AI server 100 networked with mobile terminals 1000 according to an embodiment of the present invention.
  • the AI server 100 is a system that communicates with users having mobile terminals 1000 and may comprise a solution engine 10 and a chatbot 20.
  • the chatbot 20 may comprise a data receiver 21 retrieving data from several resources such as Application Program Interface (API) providers 200, a database 300 prepared inside the AI server, and users on their mobile terminals 1000.
  • the chatbot 20 may also comprise a data transmitter 22, which transmits outcomes from the solution engine 10 to the users' mobile terminals 1000.
  • the data received or transmitted may be auditory, visual, or textual.
  • the chatbot 20 comprises a computer program and a user interface such as a chatbot character and a dialog system. Such programs are designed to convincingly simulate how a human would behave as a conversational partner and may comprise sophisticated natural language processing systems.
  • the solution engine 10 may generate optimal outcomes and messages based on the data received from the data receiver 21, on account of factors such as cost, gain, and time, and provide the optimal outcomes and messages to the chatbot 20.
  • the chatbot 20 may communicate about plans, reservations, and confirmations with users in an interactive and real-time manner.
  • the AI server 100 may refer to computing hardware comprising at least one Central Processing Unit (CPU) 1, a Graphics Processing Unit (GPU) 2, Random Access Memory (RAM) 3, and Read Only Memory (ROM) 4 as main memory, and further comprising a server program operated in the system that provides services to other computer programs and their users on the same or other computers.
  • The schematic diagram of the physical structure of the AI server 100 is shown in FIG. 2.
  • An embodiment of the present invention comprises GPU-accelerated computing, which represents the use of a GPU 2 together with a plurality of CPUs 1 to accelerate deep learning, genetic algorithms, and other complex analytics.
  • a GPU 2 has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously, while a CPU 1 consists of a few cores optimized for sequential serial processing.
  • the functions implemented by the AI server 100 are realized by loading predetermined programs onto the hardware such as the GPU 2 and RAM shown in FIG. 2, thereby placing the solution engine 10, data receiver 21, and data transmitter 22 under the control of the GPU 2, while data are read from and written to the RAM 3, the main memory ROM 4, and the auxiliary memory 5.
  • An exemplary processing module for implementing the inventive methodology as described above may be hard-wired or stored in a separate memory that is read into a main memory of a processor or a plurality of processors from a computer-readable medium such as a ROM or other type of hard magnetic drive, optical storage, tape or flash memory.
  • when a program stored in a memory medium is executed, the sequences of instructions in the module cause the processor to perform the process steps described herein.
  • the exemplary embodiments of aspects of the present disclosure are not limited to any specific combination of hardware and software and the computer program code required to implement the foregoing can be developed by a person of ordinary skill in the art.
  • a computer-readable medium refers to any tangible machine-encoded medium that provides or participates in providing instructions to one or more processors.
  • a computer-readable medium may be one or more optical or magnetic memory disks, flash drives and cards, a read-only memory or a random access memory such as a DRAM, which typically constitutes the main memory.
  • Such media excludes propagated signals, which are not tangible. Cached information is considered to be stored on a computer-readable medium.
  • Common expedients of computer-readable media are well-known in the art and need not be described in detail here.
  • the individual personnel using the methodology of aspects of the present invention may input information to the AI server via mobile terminals 1000 or a separate WAN (not shown).
  • the above-described method may be implemented by program modules that are executed by a computer, as described above.
  • program modules include routines, objects, components, data structures and the like that perform tasks or implement particular abstract data types.
  • the term "program” as used herein may connote a single program module or multiple program modules acting in concert.
  • the disclosure may be implemented on a variety of types of computers, including personal computers (PCs), hand-held devices, multi-processor systems, microprocessor-based programmable consumer electronics, network PCs, mini-computers, mainframe computers, and the like.
  • the disclosure may also be employed in distributed computing environments, where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, modules may be located in both local and remote memory storage devices.
  • the AI server 100 may be established independently in a server computer in a house or at an office, which may be independently dedicated or used for other purpose as well.
  • a server of the embodiments may comprise other types of physical presence such as co-location, hosting, and clouding.
  • Colocation is the practice of housing privately owned servers and networking equipment in a third-party data center instead of keeping servers in-house, in offices, or at a private data center. Companies may choose to 'co-locate' their equipment by renting space in a colocation center. A colocation provider will rent out space in a data center in which customers may install their equipment, and will also provide the power, bandwidth, IP addresses, and cooling systems that the customer will require in order to successfully deploy their server. Space is rented out in terms of racks and cabinets.
  • a rack is a standardized frame in which equipment and hardware are mounted, usually horizontally. A full-size rack is often called a cabinet.
  • Hosting is a way of renting cyberspace to operate a server program on a commercial provider's server.
  • the cyber space may be shared between the hosting provider's clients or may be dedicated to one client, with no one else sharing it.
  • a client may utilize multiple servers which are all dedicated to their use.
  • Cloud hosting is one way of hosting a server. However, it is differentiated in that a computer program may physically move among multiple servers available to each client depending on demand. When more demand is placed on the servers, capacity can be automatically increased to match it without needing to keep a large capacity on a permanent basis. Resources can be scaled up or down accordingly, making it more flexible.
  • the mobile terminals 1000 may comprise any type of hand-held computing device utilizing wireless data protocols such as PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (Wide Band Code Division Multiple Access), and WIBRO (Wireless Broadband Internet).
  • the mobile terminals 1000 may comprise smart phone, smart note, tablet PC, smart camera, smart watch, and any type of wearable computer.
  • the data receiver 21 in the chatbot 20 may gather a wish-visit-list 1001 of places to travel that users provide through mobile terminals 1000 in response to queries 11 generated by the solution engine 10 and pushed by the data transmitter 22.
  • Users on mobile terminals 1000 may communicate with the chatbot 20, transferring data in auditory, visual, or textual form in natural language.
  • the chatbot 20 may send queries to a user's mobile terminal such as "List top 10 moments you may want to take a picture of during your journey." Users may input auditory or textual information in their natural language about what they expect to do, or where they expect to go, on vacation.
  • the solution engine 10 in the AI server 100 may search and determine a recommendable place 12 for each item on the wish-visit-list 1001.
  • the chatbot 20 may provide additional queries 11 to a user to clarify description.
  • the feedback from the user may be used to narrow down and find a recommendable place 12 for each description.
  • the chatbot 20 may repeat providing queries and receiving feedback from the user until the solution engine 10 can conclude the right place with a predetermined level of probability.
  • the chatbot 20 may identify auditory information such as words and phrases in spoken language and convert them to a machine-readable format.
  • the chatbot 20 may use algorithms through acoustic and language modeling.
  • Acoustic modeling represents the relationship between linguistic units of speech and audio signals.
  • language modeling matches sounds with word sequences to help distinguish between words that sound similar.
  • A hidden Markov model may be used as well to recognize temporal patterns in speech to improve accuracy within the system.
  • the solution engine 10 may analyze textual information by decomposing the input written phrases or sentences into syntactic pieces and extracting the parts of words having semantic meaning. Generally, noun phrases and verb phrases have semantic meaning, whereas articles, prepositions, pronouns, conjunctions, and adverb phrases do not carry significant semantic meaning. Using the semantic parts of words, the solution engine 10 may extract the recommendable place 12 using travel information 400 either from the API provider 200 or from a database 300 prepared by the AI server 100 in its own capacity.
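As an illustration of this decomposition, a minimal sketch (not the patent's implementation) might keep only the content words of a wish-visit item by filtering out articles, prepositions, pronouns, conjunctions, and adverbs with a small stopword list; the tokenizer and the stopword list below are simplifying assumptions:

```python
# Illustrative sketch: keep the semantically meaningful parts of a free-text
# wish-visit description by dropping function words. The stopword list and
# regex tokenizer are assumptions for demonstration, not the patent's parser.
import re

STOPWORDS = {
    "a", "an", "the",                              # articles
    "in", "on", "at", "to", "of", "for",           # prepositions
    "i", "you", "he", "she", "it", "we", "they",   # pronouns
    "and", "or", "but",                            # conjunctions
    "very", "really", "there",                     # adverbs / fillers
    "want", "would", "like",                       # common filler verbs
}

def semantic_parts(text: str) -> list[str]:
    """Return the content words of a wish-visit description."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

print(semantic_parts("I would like to see the northern lights in Iceland"))
# → ['see', 'northern', 'lights', 'iceland']
```

The surviving content words could then be used as search terms against the travel information 400, as described above.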
  • the API provider may be an independent web search service corporation.
  • the solution engine 10 may extract the recommendable place 12 by detecting the most frequently found geographical proper nouns from the top 5 relevant articles comprising the parts of words having semantic meaning.
  • the wish-visit-list input information may comprise visual images such as drawings, photos, or video clips.
  • the solution engine 10 analyzes the images received and matches them with similar images on web sites that have semantically the same categories of features. For example, if a user inputs a drawing of a man standing in a river scooping up fish with a net in the wish-visit-list, then the solution engine 10 may retrieve distinctive features from the image, such as the river, the fish net, and the man standing in the river scooping up fish, identify similar images on web sites, and find "Russian river at Alaska" as one of the recommendable places.
  • the solution engine 10 may find an appropriate airport 13 for each recommendable place 12 when it has been confirmed by user on mobile terminals 1000 and may find the most efficient way of connecting all airports for all recommendable places 12 to travel around.
  • the solution engine 10 may retrieve air travel information 401, from API provider 200, such as air flight schedules, airlines, airliners, departure/arrival time, flight distance, and flight fares.
  • the solution engine 10 may present a list of available sets of air travel itinerary 14 considering the air travel information 401.
  • the solution engine 10 may first extract the airport 13 for each recommendable place 12 that is geographically nearest, possibly within the national boundary of the country where each recommendable place 12 is located.
  • Next, the solution engine 10 may construct an air travel itinerary 14 connecting the places to travel, based on the airline routes and flight schedules available at the designated airports.
  • the airline routes and their flight schedule for each airport may be retrieved from either a database that an API provider 200 presents or a database 300 prepared by the AI server 100 in its capacity.
  • the API provider 200 may be an independent airline corporation, a travel agency, or airline associations thereof.
  • the recommendable air travel itinerary 14 may be calculated in consideration of information such as endurable budget scope, allowable numbers of stop-by, and time periods for entire traveling as the users provide.
  • the solution engine 10 may generate an air travel itinerary 14 by making reservations and confirmations for stand-by tickets 402 that become available due to unsold seats, no-show passengers, and impending cancellations.
  • Stand-by tickets 402 are tickets put on sale for vacancies and may become available, for example, from one day to one hour before the boarding gate closes, depending on where a user is literally standing by at the time the stand-by ticket becomes available. Users may purchase stand-by tickets 402 at lower cost in cases where the airliner would otherwise depart with vacancies. However, users run the risk of having to show up urgently and may have to endure an unexpected change of their air travel itinerary 14 depending on the availability of stand-by tickets 402.
  • a genetic algorithm may be applied to find best sequential match through a huge combination of parameters.
  • the basic step of finding an optimal air travel itinerary is similar to the generally known "traveling salesman problem."
  • the traveling salesman problem [Potvin, J.V. 1996, Genetic Algorithms for the travelling salesman problem, Annals of Operations Research, 63, 339-370]
  • the salesman has to visit each one of the cities starting from a certain one (e.g. hometown) and returning to the same city.
  • the challenge of the problem is that the traveling salesman wants to minimize the total length of the trip.
  • the solution engine 10 may apply the genetic algorithm 2000 to find a prioritized list of air travel itineraries.
  • FIG. 4 shows the process of the genetic algorithm according to an example embodiment of the present invention.
  • First, the recommendable places 12, called individuals 2001, extracted from the wish-visit-list 1001 may be numbered as 1, 2, 3, ..., 10, if users input 10 items, for example, in the wish-visit-list 1001.
  • Second, an initial set of sequences comprising the 10 places and a total number of sets to calculate, called the "population" 2002, are determined.
  • a limitation may also be set up.
  • two recommendable places 12 located in the same country or on the same continent may preferably be placed adjacent to each other, compared to two places in different countries or continents. All data on each recommendable place 12 may be tagged with an index indicating the country or continent where it is located.
  • Third, each member of the population 2002 may be evaluated by calculating a "fitness" for that individual.
  • the fitness is calculated according to how well the individual fits the desired requirements.
  • several different target values may be adopted for the fitness calculation such as the lowest air flight fare connecting all recommendable places 12, the lowest air flight distance, and the lowest time consumed including waiting time for transits.
  • Fourth, selection may be implemented to improve each population's overall fitness, by discarding the bad designs and keeping only the best individuals in the population. There are a few different selection methods, but the basic idea is the same: make it more likely that fitter individuals will be selected for the next generation.
  • Fifth, "crossover" may be implemented to create new individuals by combining aspects of the selected individuals. This mimics how sexual reproduction works in nature. By combining certain traits from two or more individuals 2001, an even 'fitter' offspring may be generated which inherits the best traits from each of its parents.
  • Sixth, "mutation", a little randomness, may be added into the population's genetics; otherwise every combination of solutions would have to be present in the initial population. Mutation typically works by making very small changes at random to an individual 2001. Seventh, the previous steps from the third to the sixth may be repeated until a termination condition is reached, i.e. the result converges stably to a certain level.
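The seven steps above can be sketched as a short program. This is an illustrative toy, not the patent's engine: the flight-distance matrix, the population size, the mutation rate, and the fixed iteration count standing in for the termination condition are all assumptions made for demonstration.

```python
# Toy genetic algorithm over permutation-encoded itineraries: fitness is the
# total distance of the closed tour, with tournament selection, order
# crossover, and swap mutation. All numeric parameters are illustrative.
import random

random.seed(7)

N = 6                                   # number of recommendable places
# symmetric synthetic "flight distance" matrix (an assumption for the demo)
D = [[0 if i == j else abs(i - j) * 100 + 50 for j in range(N)] for i in range(N)]

def fitness(tour):
    """Lower total distance of the closed tour = fitter individual."""
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

def select(pop):
    """Tournament selection: the fitter of two random individuals survives."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b

def order_crossover(p1, p2):
    """Keep a slice of parent 1, fill the remaining positions in parent 2's order."""
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(tour, rate=0.2):
    """Swap mutation: exchange two random positions with small probability."""
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

population = [random.sample(range(N), N) for _ in range(30)]
for _ in range(100):                     # stands in for the termination condition
    population = [mutate(order_crossover(select(population), select(population)))
                  for _ in population]

best = min(population, key=fitness)
print(best, fitness(best))
```

Order crossover is used instead of plain one-point crossover so that every child remains a valid permutation of the places, which the itinerary encoding requires.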
  • crossover operation may comprise one-point crossover, multi-point crossover, uniform crossover, whole arithmetic recombination, and Davis' Order crossover.
  • In one-point crossover, a random crossover point is selected and the tails of the two parents are swapped to obtain new offspring.
  • Multi-point crossover is a generalization of the one-point crossover wherein alternating segments are swapped to obtain new offspring.
  • In uniform crossover, each part of the sequence is treated separately. Like flipping a coin, it may be decided for each part of the sequence whether or not it will be included in the offspring. The coin may be biased toward one parent, to have more genetic material in the child from that parent.
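For permutation-encoded itineraries, Davis' Order crossover from the list above is often the practical choice, because unlike plain one-point crossover every child remains a valid ordering of places. A small sketch follows, with fixed cut points for a reproducible illustration (in practice they are chosen at random):

```python
# Davis' Order crossover on a permutation: copy a slice from parent 1, then
# fill the remaining positions with the missing genes in parent 2's order.
def davis_order_crossover(parent1, parent2, cut1, cut2):
    n = len(parent1)
    child = [None] * n
    child[cut1:cut2] = parent1[cut1:cut2]           # slice kept from parent 1
    fill = [g for g in parent2 if g not in child]   # missing genes, parent-2 order
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [8, 6, 4, 2, 7, 5, 3, 1]
print(davis_order_crossover(p1, p2, 2, 5))  # → [8, 6, 3, 4, 5, 2, 7, 1]
```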
  • mutation operation may comprise Bit Flip Mutation, Random Resetting, Swap Mutation, Scramble Mutation, and Inversion Mutation. Like the crossover operations, this is not an exhaustive list and the genetic algorithm designer may find a combination of these approaches or a problem-specific mutation operator more useful.
  • In bit flip mutation, one or more random bits are selected and flipped.
  • Random resetting is an extension of the bit flip for the integer representation. A random value from the set of permissible values may be assigned to a randomly chosen part.
  • In swap mutation, two positions in the sequence may be selected at random and their values interchanged. This is common in permutation-based encodings. Scramble mutation is also popular with permutation representations.
  • In scramble mutation, a subset of parts is chosen and their values may be scrambled or shuffled randomly.
  • In inversion mutation, a subset of parts of the sequence is selected as in scramble mutation. However, instead of shuffling the subset, the entire string in the subset may be inverted.
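The swap and inversion operators above can be sketched on a permutation-encoded tour as follows; the positions are passed explicitly here for reproducibility, whereas a genetic algorithm would draw them at random:

```python
# Swap mutation exchanges two positions; inversion mutation reverses a
# contiguous subset. Both preserve the permutation property of the tour.
def swap_mutation(seq, i, j):
    out = list(seq)
    out[i], out[j] = out[j], out[i]
    return out

def inversion_mutation(seq, i, j):
    out = list(seq)
    out[i:j] = reversed(out[i:j])     # invert the chosen subset
    return out

tour = [1, 2, 3, 4, 5, 6]
print(swap_mutation(tour, 1, 4))       # → [1, 5, 3, 4, 2, 6]
print(inversion_mutation(tour, 1, 5))  # → [1, 5, 4, 3, 2, 6]
```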
  • the data receiver 21 gathers information on available stand-by tickets 402, and the data transmitter 22 may push an alert to the mobile terminals 1000. Reservation or confirmation data may be generated on the mobile terminals 1000 and fed back to the data receiver 21.
  • the data transmitter 22 may send reservation or confirmation data to the API provider 200 for stand-by tickets 402 and the data receiver 21 may receive ticket issue data 403 from the API provider 200.
  • the data transmitter 22 may send the ticket issue data 403 to the mobile terminals 1000.
  • a push service of the AI server 100 for offering stand-by tickets 402 is shown in FIG. 5.
  • Boarding gate terminals 502 at the airport terminal 500 may input data on available stand-by tickets 402, created from unsold seats or no-show passengers, into a system of the API provider 200 that airline companies may operate.
  • the solution engine 10 may regularly check the available stand-by tickets 402, for example, 7 days, 1 day, 3 hours, 60 minutes, and 30 minutes before the closing time of the boarding gate 501 at the airport terminal 500.
  • the solution engine 10 may search for and retrieve information on users who are potentially interested in, or who have already made reservations on, potential stand-by tickets 402. Then the solution engine 10 may generate location queries 15 and transmit the queries 15 to the mobile terminals 1000 via the data transmitter 22.
  • the solution engine 10 may retrieve user geographic data 201 from the API provider 200 such as GPS coordinates, map data, and time to boarding gate 501 from their location based on distance and traffic condition.
  • the AI server 100 may contain such user geographic data 201 in its database 300.
  • the solution engine 10 may confirm target users by its own logic and may transmit an offering query 16 and the user geographic data 201, via the data transmitter 22, to those users on the mobile terminals 1000 who may possibly reach the boarding gate 501 within the time before the gate is closed. For example, a 3-hour offer will be sent to the mobile terminals 1000 of users whose air travel wish-visit-list 1001 in the database 300 of the AI server 100 matches the schedule of the specific air flight ticket.
  • Whether a user can arrive at the airport in time may be calculated based on the distance between the GPS coordinates of the departing airport terminal 500 and the position of the user's mobile terminal 1000 detected from its GPS signal.
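One hedged way to sketch this check is to take the great-circle (haversine) distance between the two GPS coordinates and divide it by an assumed average ground speed; the coordinates and the 40 km/h speed below are hypothetical values for illustration, not figures from the patent:

```python
# Feasibility check: can the user reach the departing airport before the
# gate closes, given straight-line distance and a nominal ground speed?
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS coordinates."""
    r = 6371.0                          # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_reach_gate(user_pos, airport_pos, hours_left, avg_speed_kmh=40.0):
    dist = haversine_km(*user_pos, *airport_pos)
    return dist / avg_speed_kmh <= hours_left

# hypothetical user in central Seoul; Incheon International Airport terminal
user = (37.5665, 126.9780)
icn = (37.4602, 126.4407)
print(round(haversine_km(*user, *icn), 1), "km")
print(can_reach_gate(user, icn, hours_left=3.0))
```

A production system would of course use the route distance and live traffic from the map data 201 rather than the straight-line distance.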
  • the solution engine 10 may receive confirmation for ticketing from users on the mobile terminals 1000 via the data receiver 21 and transmit ticket issue data 403 of the users to the database 300 of its own AI server 100 or to the API provider 200.
  • the solution engine 10 may generate an e-ticket and return it to the users on the mobile terminals 1000 via the data transmitter 22.
  • the push service 1100 can be waived by users on the mobile terminals 1000.
  • the users may receive any piece of information on stand-by tickets 402 at their request, regardless of the GPS coordinates of their mobile terminals 1000.
  • the users may request stand-by ticket 402 information by selecting an airport and a time window in which they expect to depart.
  • available transportation, traffic conditions, and an estimated time consumed at the airport terminal 500 may be taken into account and provided by the AI server 100.
  • the estimated time consumed at the airport terminal 500 may be calculated from several factors such as the distance from customs inspection 503 to the boarding gate 501 and an estimated time to check in and go through immigration 504.
  • the time consumed to pass immigration 504 may be calculated from time data gathered from the mobile terminals 1000 of users who passed through the same route at the airport terminal 500 a short time ahead of the user who must show up at the boarding gate 501.
  • the time consumed to pass immigration 504 may also be statistically calculated from the number of passengers for flights in a designated time frame and the average capacity of immigration. Depending on the season and local circumstances, the time consumed varies considerably, which is critical to users who have purchased a stand-by ticket departing within an hour or 30 minutes at the airport.
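The statistical estimate above reduces to a simple throughput calculation: scheduled passengers in the time frame divided by the processing capacity of the open counters. All figures in the sketch are hypothetical:

```python
# Rough immigration wait estimate: passengers in the frame divided by
# (counters open × passengers each counter processes per minute).
def immigration_wait_minutes(passengers_in_frame, counters_open,
                             passengers_per_counter_per_min=2.0):
    throughput = counters_open * passengers_per_counter_per_min
    return passengers_in_frame / throughput

# e.g. 600 passengers scheduled in the next hour, 10 counters open
print(immigration_wait_minutes(600, 10))  # → 30.0
```

A 30-minute estimate would, for instance, rule out offering a stand-by ticket whose gate closes in 20 minutes.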
  • the AI server of an example of the present invention may also serve potential customers who want to buy, or have reserved, tickets for any wish-visit-list comprising sports games, concerts, theaters, theme parks, restaurants, exhibitions, and so on.
  • the AI server may comprise a memory unit configured to store a current location data of a mobile terminal and available ticket data, respectively received from API providers.
  • the AI server may further store a wish-visit-list received from a mobile terminal.
  • the current location data of the mobile terminal comprise the GPS coordinates of the mobile terminal and map data at least comprising a route from the mobile terminal to a boarding gate for an available airliner.
  • the current location data may further comprise current traffic information for the route from the mobile terminal to a boarding gate for an available airliner.
  • the AI server of an example of the present invention may further comprise a processing unit configured to calculate, and push to the mobile terminal, a prioritized list of tickets whose gates the user can reach in time.
  • the prioritized itinerary list may comprise available ticket information or an available itinerary to visit multiple events with corresponding ticket information.
  • the solution engine 10 may apply deep learning with artificial neural network methodologies to find a prioritized list of air traveling itineraries.
  • FIG. 6 shows a basic artificial neural network 3010 that includes a neuron cell 3012.
  • the set of weighted inputs is then summed and subjected to a defined activation function 3016.
  • the result from the activation function is then provided as the output 3018 from neuron cell 3012.
  • Output 3018 may then be transmitted and applied as an input to other neuron cells, or provided as the output value of the artificial neural network itself.
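The behavior of the single neuron cell 3012 described above, a weighted sum of inputs passed through an activation function, can be sketched in a few lines of Python. The sigmoid is used here only as one common choice of activation function, and the weight values are arbitrary:

```python
import math

def sigmoid(z):
    """A common choice of activation function: f(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One neuron cell: the weighted inputs are summed (plus a bias) and the
    result is subjected to the activation function to produce the output."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

out = neuron([1.0, 2.0], [0.5, -0.25], 0.0)  # z = 0.5 - 0.5 = 0
print(out)  # sigmoid(0) = 0.5
```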
  • FIG. 7 illustrates an exemplary artificial neural network 3020 that includes a first hidden layer 3022 and a second hidden layer 3024 positioned in the network between an input layer 3026 and an output layer 3028.
  • neural network 3020 is referred to as a "deep feedforward network with two hidden layers" (or a "deep learning" neural network).
  • it is also a feedforward neural network (FFNN): the signals move in only one direction (i.e., they "feed in the forward direction") from input layer 3026, through hidden layers 3022 and 3024, ultimately exiting at output layer 3028.
  • Input layer 3026 consists of input neuron cells, shown as nodes 3030, 3032, and 3034 in this network.
  • a bias node 3036 (designated as "+1") is also included within input layer 3026.
  • First hidden layer 3022 is shown as including a set of three neuron cells 3038, 3040 and 3042, each processing the collected set of weighted inputs by the defined activation function.
  • a bias node 3044 also provides an input at hidden layer 3022.
  • the created set of output signals is then applied as inputs to second hidden layer 3024.
  • Second hidden layer 3024 itself is shown as including a pair of neuron cells 3046, 3048 (as well as a bias node 3050), where, as explained above, each neuron cell applies the activation function to the weighted signals arriving as inputs.
  • the outputs created by these neuron cells are shown as being applied as input signals to neuron cells 3052 and 3054 of output layer 3028.
  • the activation function is associated with each neuron cell 3052 and 3054 and is applied to the weighted sum of the signals received from first hidden layer 3022.
  • the output signals produced by cells 3052 and 3054 are defined as the output signals of artificial neural network 3020. In this case, the provision of two separate outputs defines this particular network configuration as providing a "two-step-ahead" forecast.
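The layer-by-layer forward pass described above can be sketched as follows. This is a minimal Python illustration; the patent does not specify weight values, so the weights below are arbitrary assumptions:

```python
import math

def sigmoid(z):
    """A common activation function: f(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """Each row of weights feeds one neuron cell; the bias node adds biases[i]."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feedforward(x, params):
    """Signals move in one direction only: input -> hidden 1 -> hidden 2 -> output."""
    a = x
    for weights, biases in params:
        a = layer(a, weights, biases)
    return a

# Layer shapes mirror FIG. 7: 3 inputs, a 3-cell and a 2-cell hidden layer,
# and 2 outputs (a "two-step-ahead" forecast). Weight values are arbitrary.
params = [
    ([[0.1, 0.2, 0.3], [0.0, -0.1, 0.1], [0.2, 0.2, 0.2]], [0.1, 0.1, 0.1]),
    ([[0.3, -0.2, 0.1], [0.1, 0.1, 0.1]], [0.0, 0.0]),
    ([[0.5, 0.5], [-0.5, 0.5]], [0.0, 0.0]),
]
y = feedforward([1.0, 0.5, -0.5], params)
print(len(y))  # two output signals
```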
  • the number of hidden layers in a given deep learning feedforward network can be different for different datasets.
  • it is clear from a review of FIG. 7 that the inclusion of additional hidden layers introduces more parameters, which may lead to overfitting problems for some predictive analytics applications.
  • the use of a larger number of hidden layers also increases the computational complexity of the network. In accordance with aspects of the present invention, it has been found that only one or two hidden layers are necessary to provide accurate time series predictions of power plant operations.
  • FIG. 8 illustrates a first type of recurrent neural network, referred to in the art as an "Elman recurrent network" and shown as network 3060.
  • recurrent neural network 3060 consists of a single hidden layer 3062 positioned between an input layer 3064 and an output layer 3066. Also included in recurrent network 3060 is a context layer 3068, which in this case includes a first context node 3070 and a second context node 3072. In this configuration of a recurrent network, the outputs from the hidden layer are fed back to context layer 3068 and used as additional inputs, in combination with the newly-arriving data at input layer 3064. As shown, the output from a first neuron cell 3074 of hidden layer 3062 is stored in first context node 3070 (as well as being transmitted to a neuron cell 3076 of output layer 3066).
  • a feedback arrow 3078 shows the return path of signal flow from the output of neuron cell 3074 to first context node 3070.
  • the output signal created by a second neuron cell 3080 of hidden layer 3062 is stored in second context node 3072 of context layer 3068 (and also forwarded as an input to a neuron cell 3082 in output layer 3066).
  • a feedback arrow 3084 shows the return path of signal flow from the output of neuron cell 3080 to second context node 3072.
  • the previous output signals held in context nodes 3070 and 3072 (hereinafter referred to as "context values") are then, together with the current training data values appearing as inputs x1, x2 and x3 (as appropriately weighted) at the current time step, applied as inputs to neuron cells 3074 and 3080 of hidden layer 3062.
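One time step of the Elman-style feedback described above can be sketched as follows. The weights are arbitrary illustrative values and the function name is hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def elman_step(x, context, w_in, w_ctx, b):
    """One time step of the hidden layer of an Elman recurrent network.

    Each hidden cell sums its weighted current inputs x and the weighted
    context values (the hidden outputs stored from the previous time step).
    The new hidden outputs also become the context for the next step."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w_in[j], x))
                      + sum(wc * c for wc, c in zip(w_ctx[j], context))
                      + b[j])
              for j in range(len(b))]
    return hidden, hidden  # (hidden-layer output, new context values)

# Two hidden cells and two context nodes with three inputs, as in FIG. 8.
w_in = [[0.2, 0.1, -0.1], [0.0, 0.3, 0.1]]   # input-to-hidden weights
w_ctx = [[0.5, -0.5], [0.25, 0.25]]          # context-to-hidden weights
b = [0.0, 0.0]

context = [0.0, 0.0]  # the context nodes start empty
for x in ([1.0, 0.0, 0.5], [0.5, 1.0, 0.0]):
    hidden, context = elman_step(x, context, w_in, w_ctx, b)
print(context)
```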
  • the neuron cells are described as applying an "activation function" (denoted as f in the drawings) to the collected group of weighted inputs in order to create the output signal.
  • one common activation function is the well-known sigmoid function: f(z) = 1 / (1 + e^(-z)).
  • other functions may also be used as activation functions.
  • the output from a node (neuron) is defined as the "activation" of the node.
  • the value of "z" in the above equations is defined as the weighted sum of the inputs in the previous layer.
  • the inputs to the artificial neural network are typically the past values of the time series (for example, past values of energy demand for performing demand forecasting) and the output is the predicted future energy demand value(s).
  • the predicted future energy demand is then used by power plant personnel in scheduling equipment and supplies for the following time period.
  • in general terms, the neural network performs the following function mapping: y_{t+1} = f(y_t, y_{t-1}, ..., y_{t-m+1}), where y_t is the observation at time t and m is an independent variable defining the number of past values utilized in the mapping function to create the predicted value.
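Building training examples for such a mapping amounts to sliding a window of m past values over the time series. A minimal sketch (the function name and the demand values are illustrative, not data from the disclosure):

```python
def make_training_pairs(series, m):
    """Build (input vector, target) training pairs for the mapping
    y_{t+1} = f(y_t, ..., y_{t-m+1}): m past values predict the next one."""
    return [(series[t - m:t], series[t]) for t in range(m, len(series))]

# Illustrative past demand values.
demand = [10, 12, 11, 13, 14, 13]
print(make_training_pairs(demand, 3))
# first pair: inputs [10, 12, 11] -> target 13
```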
  • before an artificial neural network can be used to perform electric load demand forecasting (or any other type of power plant-related forecasting), it must be "trained" to do so. As mentioned above, training is the process of determining the proper weights Wi (sometimes referred to as arc weights) and bias values bi that are applied to the various inputs at activation nodes in the network. These weights are a key element in defining a proper network, since the knowledge learned by a network is stored in the arcs and nodes in terms of arc weights and node biases. It is through these linking arcs that an artificial neural network can carry out complex nonlinear mappings from its input nodes to its output nodes.
  • the training mode in this type of time series forecasting is considered as a "supervised" process, since the desired response of the network (testing set) for each input pattern (training set) is always available for use in evaluating how well the predicted output fits to the actual values.
  • the training input data is in the form of vectors of training patterns (thus, the number of input nodes is equal to the dimension of the input vector).
  • the total available data (referred to at times hereinafter as the "training information”) is divided into a training set and a testing set.
  • the training set is used for estimating the arc weights and bias values, with the testing set then used for measuring the "cost" of a network including the weights determined by the training set.
  • the learning process continues until a set of weights and bias node values is found that minimizes the cost value.
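The division of the training information and the cost measurement can be sketched as follows. The 80/20 split fraction and the mean-squared-error cost are common choices assumed here, not values specified by the disclosure:

```python
def split_training_information(data, train_fraction=0.8):
    """Divide the total available data into a training set (used to estimate
    the arc weights and bias values) and a testing set (used to measure cost)."""
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

def cost(predictions, actuals):
    """Mean squared error: how well the predicted output fits the actual values."""
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)

train_set, test_set = split_training_information(list(range(10)))
print(len(train_set), len(test_set))  # 8 2
print(cost([1.0, 2.0], [1.5, 2.5]))   # 0.25
```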
  • the methodology utilized in accordance with aspects of the present invention to obtain a "deep learning" neural network model useful in generating an optimized air travel itinerary follows the flowchart as outlined in FIG. 9.
  • the process begins at step 3500 by selecting a particular neural network model to be used (e.g., FFNN, RNN, or other suitable network configuration), as well as the number of hidden layers to be included in the model and the number of nodes to be included in each layer.
  • An activation function is also selected to characterize the operation to be performed on the weighted sum of inputs at each node.
  • an initial set of weights and bias values is used to initiate the process.
  • the training process continues at step 3520 by computing the gradients associated with both the determined weights and bias values for this model.
  • one approach to computing these gradients is to use a "backpropagation" method, which starts at the output of the network model and works backwards to determine an error term that may be attributed to each layer (calculating for each individual node in each layer), working from the output layer, through the hidden layers, and back to the input layer.
  • the next step in the process (shown as step 3530) is to perform an optimization on all of the gradients generated in step 3520, selecting an optimum set of weights and bias values that is defined as an "acceptable" set of parameters for the neural network model that best fits the time series being studied. As will be discussed below, it is possible to use more than one historical time series in this training process. With that in mind, the following step in the process is a decision point 3540, which asks if there is another "training information" set that is to be used in training the model. If the answer is "yes”, the process moves to step 3550, which defines the next "training information” set to be used, returning the process to step 3520 to compute the gradients associated with this next set of training information.
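The gradient computation and weight update of steps 3520 and 3530 can be illustrated on the smallest possible network, a single sigmoid neuron, where the backpropagated error term has a closed form. This sketch, training on the logical OR function, is for illustration only and is not the model configuration of the disclosure:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, steps=5000, lr=1.0):
    """Gradient descent on a single sigmoid neuron with squared-error cost.

    For cost C = (a - y)^2 / 2 with a = sigmoid(w.x + b), backpropagation
    gives the error term delta = (a - y) * a * (1 - a), so the gradients
    are dC/dw_i = delta * x_i and dC/db = delta."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(steps):
        for x, y in samples:
            a = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            delta = (a - y) * a * (1 - a)  # backpropagated error term
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]  # gradient step
            b -= lr * delta
    return w, b

# Learn the logical OR function as a tiny, linearly separable example.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_neuron(data)
for x, y in data:
    a = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    print(x, round(a))  # each output rounds to its target
```

A full implementation would backpropagate such error terms layer by layer through the hidden layers, exactly as the text describes.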
  • once all sets of training information have been used, the process moves to a decision point that inquires if there are multiple sets of optimized {W, b}. If so, these values are first averaged (step 3570) before continuing.
  • step 3580 is to determine if there is a set of validation data that is to be used to perform one final "check" of the fit of the current neural network model, with the optimized set {W, b}, to a following set of time series values (i.e., the validation set).
  • if there is no validation data, this final set of optimized {W, b} values is defined as the output from the training process and, going forward, is used in the developed neural network to perform the time series forecasting task (step 3590).
  • if there is a set of validation data present, a final cost measurement is performed (step 3600). If the predicted values from the model sufficiently match the validation set values (at step 3610), the use of this set of {W, b} values is confirmed, and again the process moves to step 3590. Otherwise, if the validation test fails, it is possible to re-start the entire process by selecting a different neural network model (step 3620) and returning to step 3500 to try again to find a model that accurately predicts the time series under review.
  • the elements of the deep learning neural network methodology as described above may be implemented in a computer system comprising a single unit, or a plurality of units linked by a network or a bus.


Abstract

An embodiment of the present invention relates to an artificial intelligence server, networked with mobile terminals, that provides an optimized air travel itinerary built from a user's wish-visit-list, and a method of serving vacant seats, in an interactive and real-time manner, to the mobile terminals of users who are capable of reaching the boarding gates in time.

Description

ARTIFICIAL INTELLIGENCE SERVER FOR PRIORITIZED AIR TRAVELING ITINERARY
An embodiment of the present invention relates to an artificial intelligence server networked with mobile terminals that provides optimized traveling, and a method of connecting to the mobile terminals of users in an interactive and real-time manner.
In recent years, a variety of network systems have been disclosed that provide a platform to plan activities or transact items using mobile terminals. Meanwhile, some publications have disclosed technologies applying location information collected from the mobile terminal carried by the user.
Users have difficulty in finding available stand-by tickets and the most efficient way to travel around a plurality of places.
An Artificial Intelligence ("AI") server of an example of the present invention comprises a memory unit configured to store current location data of a mobile terminal and air flight data near the current location, respectively received from API providers. The memory unit may additionally store a wish-visit-list received from a mobile terminal.
The AI server of an example of the present invention comprises a memory unit configured to store current location data of a mobile terminal, where the current location data include the GPS coordinates of the mobile terminal and map data at least comprising a route from the mobile terminal to a boarding gate for an available airliner. The current location data further include current traffic information for the route from the mobile terminal to a boarding gate for an available airliner.
The AI server of an example of the present invention further comprises a processing unit configured to calculate a prioritized itinerary list by applying a genetic algorithm or a neural network algorithm to the data. The prioritized itinerary may comprise an air travel itinerary and/or available stand-by ticket information.
The AI server of an example of the present invention may comprise a memory unit configured to store current location data of a mobile terminal and available ticket data, respectively received from API providers. The AI server of an example of the present invention may further comprise a processing unit configured to calculate, and push to the mobile terminal, a prioritized list of tickets whose gates the user can reach in time.
Users on mobile terminals may find available stand-by tickets with the help of the AI server, as well as the most efficient way to travel around a plurality of places.
FIG. 1 is a schematic diagram of an AI server networked with mobile terminals and an API provider.
FIG. 2 is a hardware configuration of the AI server comprising GPU, CPU, RAM, ROM, and auxiliary memory.
FIG. 3 is a sequence diagram showing an example of data communication among the AI server, mobile terminals, and an API provider to serve air flight tickets based on an optimized air travel itinerary.
FIG. 4 is a flowchart showing a genetic algorithm to find a prioritized list of air traveling itineraries.
FIG. 5 is a sequence diagram showing an example of data communication among the AI server, mobile terminals, and an API provider to provide a stand-by ticket alarming service.
FIG. 6 is a schematic diagram of a basic one-cell neural network.
FIG. 7 is a diagram of an example of a Feed-Forward Neural Network (FFNN) including two hidden layers.
FIG. 8 is a diagram of an example of a Recurrent Neural Network (RNN).
FIG. 9 is a flowchart of an example process used to create a deep learning artificial neural network for obtaining an optimized air travel itinerary in accordance with aspects of the present invention.
An AI server 100 networked with mobile terminals 1000 according to an example embodiment of the present invention will be described in FIGS. 1 to 4. In the description of the drawings, the same elements will be denoted by the same reference signs, without redundant description.
FIG. 1 is a block diagram showing a configuration of the AI server 100 networked with mobile terminals 1000 according to an embodiment of the present invention. The AI server 100 is a system that communicates with users having mobile terminals 1000 and may comprise a solution engine 10 and a chatbot 20.
The chatbot 20 may comprise data receiver 21 retrieving data from several resources such as Application Program Interface (API) providers 200, database 300 prepared inside the AI server, and users on their mobile terminals 1000. The chatbot 20 may also comprise data transmitter 22 which transmits outcome from the solution engine 10 to the user's mobile terminals 1000. The data received or transmitted may be auditory, visual, or textual. The chatbot 20 comprises a computer program and a user interface such as a chatbot character and a dialog system. Such programs are designed to convincingly simulate how a human would behave as a conversational partner and may comprise sophisticated natural language processing systems.
The solution engine 10 may generate optimal outcomes and messages based on the received data from data receiver 21 on accounts of factors such as costs, gains, and time and provide the optimal outcomes and messages to the chatbot 20. The chatbot 20 may communicate about plans, reservation, and confirmation with users in an interactive and real-time manner.
The AI server 100 may refer to computing-system hardware comprising at least one Central Processing Unit (CPU) 1, Graphic Processing Unit (GPU) 2, Random Access Memory (RAM) 3, and Read Only Memory (ROM) 4 as main memory, and further comprising a server program operated in the system that provides services to other computer programs and their users on the same or other computers. The schematic diagram regarding a physical structure of the AI server 100 is shown in FIG. 2.
An embodiment of the present invention comprises GPU-accelerated computing, which represents the use of a GPU 2 together with a plurality of CPUs 1 to accelerate deep learning, genetic algorithms, and other complex analytics. A GPU 2 has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously, while a CPU 1 consists of a few cores optimized for sequential serial processing. The functions implemented by the AI server 100 are realized by loading predetermined programs onto the hardware, such as the GPU 2 and RAM shown in FIG. 2, so that the solution engine 10, data receiver 21, and data transmitter 22 operate under control of the GPU 2 while data is read from and written into the RAM 3, the main memory ROM 4, and the auxiliary memory 5.
An exemplary processing module for implementing the inventive methodology as described above may be hard-wired or stored in a separate memory that is read into a main memory of a processor or a plurality of processors from a computer-readable medium such as a ROM or other type of hard magnetic drive, optical storage, tape or flash memory. In the case of a program stored in a memory media, execution of sequences of instructions in the module causes the processor to perform the process steps described herein. The exemplary embodiments of aspects of the present disclosure are not limited to any specific combination of hardware and software and the computer program code required to implement the foregoing can be developed by a person of ordinary skill in the art.
The term "computer readable medium" as employed herein refers to any tangible machine-encoded medium that provides or participates in providing instructions to one or more processors. For example, a computer-readable medium may be one or more optical or magnetic memory disks, flash drives and cards, a read-only memory or a random access memory such as a DRAM, which typically constitutes the main memory. Such media excludes propagated signals, which are not tangible. Cached information is considered to be stored on a computer-readable medium. Common expedients of computer-readable media are well-known in the art and need not be described in detail here.
The individual personnel using the methodology of aspects of the present invention may input information to AI server via mobile terminals 1000 or a separate WAN (not shown). The above-described method may be implemented by program modules that are executed by a computer, as described above. Generally, program modules include routines, objects, components, data structures and the like that perform tasks or implement particular abstract data types. The term "program" as used herein may connote a single program module or multiple program modules acting in concert. The disclosure may be implemented on a variety of types of computers, including personal computers (PCs), hand-held devices, multi-processor systems, microprocessor-based programmable consumer electronics, network PCs, mini-computers, mainframe computers, and the like. The disclosure may also be employed in distributed computing environments, where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, modules may be located in both local and remote memory storage devices.
The AI server 100 may be established independently on a server computer in a house or at an office, which may be exclusively dedicated or used for other purposes as well. However, a server of the embodiments may take other types of physical presence such as co-location, hosting, and clouding.
Colocation is the practice of housing privately-owned servers and networking equipment in a third-party data center instead of keeping servers in-house, in offices, or at a private data center. Companies may choose to 'co-locate' their equipment by renting space in a colocation center. A colocation provider will rent out space in a data center in which customers may install their equipment, and will also provide the power, bandwidth, IP address, and cooling systems that the customer requires in order to successfully deploy their server. Space is rented out in terms of racks and cabinets. A rack is a standardized frame for mounting equipment and hardware, usually horizontally. A full-size rack is often called a cabinet.
Hosting is a way of renting cyber space to operate a server program in a server of a commercial provider. The cyber space may be shared between the hosting provider's clients or may be dedicated to one client, with no one else sharing it. In some instances, a client may utilize multiple servers which are all dedicated to their use.
Clouding is one way of hosting a server. However, it is differentiated in that a computer program may physically move among multiple servers available to each client depending on demand. When more demand is placed on the servers, capacity can be automatically increased to match demand without needing to keep a large capacity on a permanent basis. Resources can be scaled up or down accordingly, making it more flexible.
The mobile terminals 1000 may comprise any type of hand-held computing device utilizing wireless data protocols such as PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (Wide Band Code Division Multiple Access), and WIBRO (Wireless Broadband Internet). The mobile terminals 1000 may comprise smart phone, smart note, tablet PC, smart camera, smart watch, and any type of wearable computer.
According to an example embodiment of the invention shown in FIG. 3, the data receiver 21 in the chatbot 20 may gather a wish-visit-list 1001 that users provided through mobile terminals 1000 in response to given queries 11 generated by the solution engine 10 and pushed by the data transmitter 22. Users on mobile terminals 1000 may interactively communicate with the chatbot 20, transferring auditory, visual, or textual data in natural language. For example, the chatbot 20 may send queries to a user's mobile terminal such as "List top 10 moments you may want to take a picture of during your journey." Users may input auditory or textual information in their natural language about what they expect to do, and where they expect to go, on vacation.
The solution engine 10 in the AI server 100 may search for and determine a recommendable place 12 for each item on the wish-visit-list 1001. The chatbot 20 may provide additional queries 11 to a user to clarify a description. The feedback from the user may be used to narrow down and find a recommendable place 12 for each description. The chatbot 20 may repeat providing queries and receiving feedback from the user until the solution engine 10 can conclude the right place with a predetermined level of probability.
The chatbot 20 may identify auditory information such as words and phrases in spoken language and convert them to a machine-readable format. The chatbot 20 may use algorithms based on acoustic and language modeling. Acoustic modeling represents the relationship between linguistic units of speech and audio signals. Language modeling matches sounds with word sequences to help distinguish between words that sound similar. A Hidden Markov model may be used as well to recognize temporal patterns in speech to improve accuracy within the system.
The solution engine 10 may analyze textual information by decomposing its input written phrases or sentences into syntactic pieces and extracting the parts of words having semantic meaning. Generally, noun phrases and verb phrases have semantic meaning, whereas articles, prepositions, pronouns, conjunctions, and adverb phrases do not have significant semantic meaning. Using the semantic parts of words, the solution engine 10 may extract the recommendable place 12 using travel information 400 either from the API provider 200 or from a database 300 prepared by the AI server 100 in its own capacity. The API provider may be an independent web search service corporation. For example, the solution engine 10 may extract the recommendable place 12 by detecting the most frequently found geographical proper nouns from the top 5 relevant articles comprising the parts of words having semantic meaning.
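The frequency heuristic described above, detecting the most frequently found proper nouns across relevant articles, might be approximated as follows. The regular-expression approach is a stand-in assumption; a production system would use a named-entity recognizer and a gazetteer of geographical names:

```python
from collections import Counter
import re

def most_frequent_proper_nouns(articles, top_n=1):
    """Approximate the frequency heuristic: count capitalized words that do
    not begin the text or a sentence, as a rough stand-in for proper nouns."""
    counts = Counter()
    for text in articles:
        # A capitalized word not at the start of the text or of a sentence.
        for match in re.finditer(r"(?<!^)(?<![.!?]\s)\b[A-Z][a-z]+", text):
            counts[match.group(0)] += 1
    return [word for word, _ in counts.most_common(top_n)]

articles = [
    "Fishing on the Russian River in Alaska is popular.",
    "Many anglers visit the Russian River each summer.",
]
print(most_frequent_proper_nouns(articles, top_n=2))
```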
The wish-visit-list input information may comprise visual images such as drawings, photos, or video clips. In this case, the solution engine 10 analyzes the received images and matches them with similar images on web sites that have semantically the same categories of features. For example, if a user inputs a drawing of a man standing in a river scooping up fish with a net, then the solution engine 10 may retrieve distinctive features from the image, such as the river, the fish net, and the man standing in the river scooping, and may identify similar images on web sites, finding "Russian river at Alaska" as one of the recommendable places.
According to an example embodiment of the invention shown in FIG. 3, the solution engine 10 may find an appropriate airport 13 for each recommendable place 12 once it has been confirmed by the user on the mobile terminals 1000, and may find the most efficient way of connecting all airports for all recommendable places 12 to travel around. The solution engine 10 may retrieve air travel information 401 from the API provider 200, such as air flight schedules, airlines, airliners, departure/arrival times, flight distances, and flight fares. The solution engine 10 may present a list of available sets of air travel itinerary 14 considering the air travel information 401. To be specific, the solution engine 10 may first extract the airport 13 geographically nearest to each recommendable place 12, possibly within the national boundary of the country where each recommendable place 12 is located. Next, the solution engine 10 may construct an air travel itinerary 14 connecting the places to travel based on the airline routes and flight schedules available at the designated airports. The airline routes and flight schedules for each airport may be retrieved from either a database that an API provider 200 presents or a database 300 prepared by the AI server 100 in its own capacity. The API provider 200 may be an independent airline corporation, a travel agency, or an airline association thereof. The recommendable air travel itinerary 14 may be calculated in consideration of information the users provide, such as an endurable budget scope, an allowable number of stop-bys, and the time period for the entire trip.
According to an example embodiment of the invention shown in FIG. 3, the solution engine 10 may generate an air travel itinerary 14 by making reservations and confirmations for stand-by tickets 402 becoming available due to unsold vacancies, no-show passengers, and impending cancellations. Stand-by tickets 402 are on-sale tickets for vacancies and may become available to be obtained, for example, from one day to one hour before the boarding gate 501 closes, depending on where a user is literally standing by at the time the stand-by ticket becomes available. Users may purchase stand-by tickets 402 at lower cost, since the airliner would otherwise depart with vacancies. However, users may fall into jeopardy in that they must show up urgently and may endure an unexpected change of their air travel itinerary 14 depending on the availability of stand-by tickets 402.
A genetic algorithm may be applied to find the best sequential match through a huge combination of parameters. The basic step of finding an optimal air travel itinerary is similar to the generally known "traveling salesman problem." [Potvin, J.V. 1996, Genetic Algorithms for the travelling salesman problem, Annals of Operations Research, 63, 339-370] The salesman has to visit each one of the cities, starting from a certain one (e.g. his hometown) and returning to the same city. The challenge of the problem is that the traveling salesman wants to minimize the total length of the trip.
According to an example embodiment of the present invention, the solution engine 10 may apply the genetic algorithm 2000 to find a prioritized list of air traveling itineraries.
FIG. 4 shows the process of the genetic algorithm according to an example embodiment of the present invention.
First, the recommendable places 12, called individuals 2001, extracted from the wish-visit-list 1001 may be numbered 1, 2, 3, ..., 10 if, for example, users input 10 items in the wish-visit-list 1001. Second, an initial set of sequences comprising the 10 places, and a total number of sets to calculate, called the "population" 2002, are determined. To reduce calculation time, a limitation may be set up. For example, two recommendable places 12 located in the same country or on the same continent may preferably be neighbored, compared to two places in different countries or continents. Every data record on a recommendable place 12 may be tagged with an index indicating the country or continent where it is located.
Third, each member of the population 2002 may be evaluated by calculating a "fitness" for that individual. The fitness measures how well the individual meets the desired requirements. According to the example embodiment of the present invention, several different target values may be adopted for the fitness calculation, such as the lowest air fare connecting all recommendable places 12, the shortest flight distance, and the least total time consumed, including waiting time for transits.
Fourth, "selection" may be implemented to improve the population's overall fitness. Selection discards the worst designs and keeps only the best individuals in the population. There are a few different selection methods, but the basic idea is the same: make it more likely that fitter individuals will be selected for the next generation. Fifth, "crossover" may be implemented to create new individuals by combining aspects of the selected individuals. This mimics how sexual reproduction works in nature. By combining certain traits from two or more individuals 2001, an even fitter offspring may be generated which inherits the best traits of each of its parents. Sixth, "mutation" adds a little randomness to the population's genetics; otherwise, every combination of solutions would have to be present in the initial population. Mutation typically works by making very small random changes to an individual 2001. Seventh, the third through sixth steps may be repeated until a termination condition is reached, that is, until the fitness converges stably to a certain level.
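Under the illustrative assumption of ten numbered places and a synthetic cost matrix (neither taken from the embodiment itself), the seven steps above can be sketched as a minimal genetic-algorithm loop:

```python
import random

# Toy symmetric "travel cost" matrix for 10 recommendable places (steps 1-2).
# The matrix values, population size, and rates below are illustrative.
random.seed(0)
N = 10
cost = [[0 if i == j else random.randint(100, 999) for j in range(N)] for i in range(N)]
for i in range(N):
    for j in range(i):
        cost[i][j] = cost[j][i]

def fitness(tour):
    # Lower total round-trip cost -> higher fitness (step 3).
    total = sum(cost[tour[k]][tour[(k + 1) % N]] for k in range(N))
    return -total

def select(pop):
    # Tournament selection: fitter individuals are more likely kept (step 4).
    return max(random.sample(pop, 3), key=fitness)

def crossover(p1, p2):
    # Keep a slice of parent 1, fill the rest in parent 2's order (step 5).
    a, b = sorted(random.sample(range(N), 2))
    child = p1[a:b]
    child += [g for g in p2 if g not in child]
    return child

def mutate(tour, rate=0.2):
    # Swap two places with small probability (step 6).
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

pop = [random.sample(range(N), N) for _ in range(50)]
for _ in range(200):  # step 7: repeat until (here: for a fixed budget)
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(50)]

best = max(pop, key=fitness)
print(best, -fitness(best))
```

Any permutation-preserving crossover and mutation operator can be swapped into this loop, which is why the operator catalogues below matter in practice.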
According to an example embodiment of the present invention, the crossover operation may comprise one-point crossover, multi-point crossover, uniform crossover, whole arithmetic recombination, and Davis' Order crossover. In one-point crossover, a random crossover point is selected and the tails of the two parents are swapped to produce new offspring. Multi-point crossover is a generalization of one-point crossover in which alternating segments are swapped to produce new offspring. In uniform crossover, each position of the sequence is treated separately. Like flipping a coin, each part of the sequence is decided individually as to whether it will be included in the offspring. The coin may be biased toward one parent, so that the child receives more genetic material from that parent. Whole arithmetic recombination is commonly used for integer representations and works by taking a weighted average of the two parents. Obviously, if the weight is 0.5, both children will be identical. Davis' Order crossover is used for permutation-based representations with the intention of transmitting information about relative ordering to the offspring. First, two random crossover points are chosen in the parents, and the segment between them is copied from the first parent to the first offspring. Second, starting from the second crossover point in the second parent, the remaining unused numbers from the second parent are copied to the first child, wrapping around the list. Third, the procedure is repeated for the second child with the parents' roles reversed. Apart from the above crossover operations, other kinds of crossover may also be applied, such as Partially Mapped Crossover (PMX), Order-based crossover (OX2), Shuffle Crossover, and Ring Crossover.
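The three steps of Davis' Order crossover described above can be sketched as follows; the function name and the ten-place parents are illustrative:

```python
def davis_order_crossover(parent1, parent2, point1, point2):
    """One child of Davis' Order crossover (OX1).

    Step 1: copy the segment between the two crossover points from parent1.
    Step 2: starting after the second point, fill the remaining slots with
            parent2's unused genes, wrapping around the list.
    (Step 3, the second child, repeats this with the parents' roles reversed.)
    """
    size = len(parent1)
    child = [None] * size
    child[point1:point2] = parent1[point1:point2]
    used = set(child[point1:point2])
    # Slots still empty, scanned in wrap-around order from the second point.
    fill_positions = [i % size for i in range(point2, point2 + size)
                      if child[i % size] is None]
    # parent2's genes, scanned in the same wrap-around order, skipping used ones.
    genes = []
    for i in range(point2, point2 + size):
        g = parent2[i % size]
        if g not in used:
            genes.append(g)
    for pos, g in zip(fill_positions, genes):
        child[pos] = g
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
p2 = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
child = davis_order_crossover(p1, p2, 3, 7)
print(child)  # segment 4,5,6,7 kept from p1; the rest filled in p2's wrapped order
```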
According to an example embodiment of the present invention, the mutation operation may comprise Bit Flip Mutation, Random Resetting, Swap Mutation, Scramble Mutation, and Inversion Mutation. As with the crossover operations, this is not an exhaustive list, and the genetic algorithm designer may find a combination of these approaches or a problem-specific mutation operator more useful. In Bit Flip Mutation, one or more random bits are selected and flipped. Random Resetting is an extension of bit flip for integer representations: a random value from the set of permissible values is assigned to a randomly chosen position. In Swap Mutation, two positions in the sequence are selected at random and their values are interchanged. This is common in permutation-based encodings. Scramble Mutation is also popular with permutation representations: a contiguous subset of the sequence is chosen and its values are scrambled, or shuffled, randomly. In Inversion Mutation, a subset of the sequence is selected as in Scramble Mutation, but instead of shuffling the subset, the entire string in the subset is inverted.
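The permutation-based operators above (swap, scramble, inversion) can be sketched in a few lines; the helper names are illustrative:

```python
import random

def swap_mutation(seq):
    # Pick two positions at random and interchange their values.
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def inversion_mutation(seq):
    # Pick a random contiguous segment and reverse it.
    s = list(seq)
    i, j = sorted(random.sample(range(len(s) + 1), 2))
    s[i:j] = reversed(s[i:j])
    return s

def scramble_mutation(seq):
    # Same segment choice as inversion, but shuffle instead of reverse.
    s = list(seq)
    i, j = sorted(random.sample(range(len(s) + 1), 2))
    segment = s[i:j]
    random.shuffle(segment)
    s[i:j] = segment
    return s

tour = [1, 2, 3, 4, 5, 6, 7, 8]
print(swap_mutation(tour), inversion_mutation(tour), scramble_mutation(tour))
```

Each operator returns a new sequence containing exactly the same places, so it can be applied safely to itinerary permutations.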
The data receiver 21 gathers information on available stand-by tickets 402, and the data transmitter 22 may send an alarm about it to the mobile terminals 1000. Reservation or confirmation data may be generated on the mobile terminals 1000 and fed back to the data receiver 21. The data transmitter 22 may send the reservation or confirmation data to the API provider 200 for stand-by tickets 402, and the data receiver 21 may receive ticket issue data 403 from the API provider 200. The data transmitter 22 may then send the ticket issue data 403 to the mobile terminals 1000.
According to an example embodiment of the invention, a push service of the AI server 100 for offering stand-by tickets 402 is shown in FIG. 5. Boarding gate terminals 502 at the airport terminal 500 may enter data on available stand-by tickets 402, created from unsold seats or no-show passengers, into a system of the API provider 200 that airline companies may operate. The solution engine 10 may regularly check for available stand-by tickets 402, for example, 7 days, 1 day, 3 hours, 60 minutes, and 30 minutes before the closing time of the boarding gate 501 at the airport terminal 500. When the solution engine 10 detects available stand-by tickets 402, it may search for and retrieve information on users who are potentially interested in, or who have already made reservations on, potential stand-by tickets 402. Then the solution engine 10 may generate location queries 15 and transmit the queries 15 to the mobile terminals 1000 via the data transmitter 202.
Provided that users on the mobile terminals 1000 return consent for use of their location to the API provider 200, the solution engine 10 may retrieve user geographic data 201 from the API provider 200, such as GPS coordinates, map data, and the time to the boarding gate 501 from their location based on distance and traffic conditions. The AI server 100 may store such user geographic data 201 in its database 300. The solution engine 10 may confirm targeted users by its own logic and may transmit an offering query 16 and the user geographic data 201, via the data transmitter 202, to users on the mobile terminals 1000 who could reach the boarding gate 501 before the gate is closed. For example, a 3-hour offer will be sent to the mobile terminals 1000 of users whose air travel wish-visit-list 1001 in the database 300 of the AI server 100 matches the schedule of the specific air flight ticket. Whether a user can arrive at the airport in time may be calculated based on the distance between the GPS coordinates of the departing airport terminal 500 and the position of the user's mobile terminal 1000 detected from its GPS signal. The solution engine 10 may receive confirmation for ticketing from users on the mobile terminals 1000 via the data receiver 21 and transmit the users' ticket issue data 403 to the database 300 of its own AI server 100 or to the API provider 200. The solution engine 10 may generate an e-ticket and return it to the users on the mobile terminals 1000 via the data transmitter 202.
The push service 1100 can be waived by users on the mobile terminals 1000. Such users may instead receive stand-by ticket 402 information on request, regardless of the GPS coordinates of their mobile terminals 1000. The users may request stand-by ticket 402 information by selecting an airport and a time window in which they expect to depart.
According to an example embodiment of the invention, available transportation, traffic conditions, and an estimated time consumed at the airport terminal 500 may be taken into account and provided by the AI server 100. The estimated time consumed at the airport terminal 500 may be calculated from several factors, such as the distance from customs inspection 503 to the boarding gate 501 and an estimated time to check in and pass through immigration 504. The time consumed to pass immigration 504 may be calculated from time data gathered from the mobile terminals 1000 of users who passed the same way through the airport terminal 500 shortly ahead of the user who must show up at the boarding gate 501. The time consumed to pass immigration 504 may also be calculated statistically from the number of passengers for flights in a designated time frame and the average capacity of immigration. Depending on the season and local circumstances, the time consumed varies greatly, which is critical to users who purchased a stand-by ticket departing within an hour or 30 minutes at the airport.
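One plausible way to estimate whether a user can reach the boarding gate 501 in time, as described above, is to combine the great-circle distance between GPS coordinates with assumed travel speed and in-terminal delays. The speeds, delay values, and coordinates below are illustrative assumptions, not parameters disclosed by the embodiment:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS coordinates, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def minutes_to_gate(user, airport, speed_kmh=40.0,
                    checkin_min=20, immigration_min=25, walk_min=10):
    # Travel time from the user's GPS fix plus assumed in-terminal delays
    # (check-in, immigration, walk to the boarding gate).
    travel = haversine_km(*user, *airport) / speed_kmh * 60
    return travel + checkin_min + immigration_min + walk_min

# Illustrative coordinates: central Seoul to Incheon airport (approximate).
eta = minutes_to_gate((37.5665, 126.9780), (37.4602, 126.4407))
print(round(eta), "minutes")
```

The server would offer the stand-by ticket only when this estimate is smaller than the time remaining before the gate closes; in practice, a map/traffic API would replace the constant speed.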
The AI server of an example of the present invention may also serve potential customers who want to buy, or have reserved, tickets for any wish-visit-list comprising sports games, concerts, theaters, theme parks, restaurants, exhibitions, and so on. The AI server may comprise a memory unit configured to store current location data of a mobile terminal and available ticket data, respectively received from API providers.
The AI server may further store a wish-visit-list received from a mobile terminal. The current location data of the mobile terminal comprise the GPS coordinates of the mobile terminal and map data at least comprising a route from the mobile terminal to a boarding gate for an available airliner. The current location data may further comprise current traffic information for the route from the mobile terminal to the boarding gate for an available airliner.
The AI server of an example of the present invention may further comprise a processing unit configured to calculate, and push to the mobile terminal, a prioritized list of tickets with which the user can enter the gate in time. The prioritized itinerary list may comprise available ticket information or an available itinerary to visit multiple events with corresponding ticket information.
According to an example embodiment of the present invention, the solution engine 10 may apply deep learning with artificial neural network methodologies to find a prioritized list of air travel itineraries.
Artificial neural networks are known as abstract computational models inspired by the way a biological central nervous system (such as the human brain) processes received information. Artificial neural networks are generally composed of systems of interconnected "neurons" that function to process information received as inputs. FIG. 6 shows a basic artificial neural network 3010 that includes a neuron cell 3012. Neuron cell 3012 functions similarly to a cell body in a neuron of a human brain and sums up a plurality of inputs 3014 (here, shown as x1, x2, ..., x5) with possibly different weights wi (i = 1, 2, ..., 5) applied to each input (also defined as "arc weights"), as shown along the arcs directed toward neuron cell 3012. The set of weighted inputs is then summed and subjected to a defined activation function 3016. The result of the activation function is then provided as the output 3018 of neuron cell 3012. Output 3018 may then be transmitted and applied as an input to other neuron cells, or provided as the output value of the artificial neural network itself.
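The weighted-sum-plus-activation behavior of neuron cell 3012 can be sketched in a few lines; the weight and input values here are made up for illustration:

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    # Weighted sum of inputs x_i * w_i (the "arc weights"), then a
    # sigmoid activation function applied to the sum.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -1.0, 2.0, 0.0, 1.5]   # inputs x1..x5
w = [0.4, 0.3, -0.2, 0.1, 0.25]  # arc weights w1..w5
print(neuron_output(x, w))
```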
Artificial neural networks may be configured to include additional layers between the input and output, where these intermediate layers are referred to as "hidden layers" and the deep learning methodology relates to the particular ways that these hidden layers are coupled to each other (as well as the number of nodes used in each hidden layer) in forming a given artificial neural network. FIG. 7 illustrates an exemplary artificial neural network 3020 that includes a first hidden layer 3022 and a second hidden layer 3024 positioned in the network between an input layer 3026 and an output layer 3028.
In this particular configuration, neural network 3020 is referred to as a "deep feedforward network with two hidden layers" (or a "deep learning" neural network). In this feedforward neural network (FFNN), the signals move in only one direction (i.e. "feed in the forward direction") from input layer 3026, through hidden layers 3022 and 3024, and ultimately exiting at output layer 3028. In each layer, only selected nodes function as "neurons" in the manner described above in association with FIG. 6. Input layer 3026 consists of input neuron cells, shown as nodes 3030, 3032, and 3034 in this network. A bias node 3036 (designated as "+1") is also included within input layer 3026. First hidden layer 3022 is shown as including a set of three neuron cells 3038, 3040 and 3042, each processing the collected set of weighted inputs by the defined activation function. A bias node 3044 also provides an input at hidden layer 3022. The created set of output signals is then applied as inputs to second hidden layer 3024.
Second hidden layer 3024 itself is shown as including a pair of neuron cells 3046, 3048 (as well as a bias node 3050), where, as explained above, each neuron cell applies the activation function to the weighted signals arriving as inputs. The outputs created by these neuron cells are shown as being applied as input signals to neuron cells 3052 and 3054 of output layer 3028. Again, the activation function is associated with each neuron cell 3052 and 3054 and is applied to the weighted sum of the signals received from first hidden layer 3022. The output signals produced by cells 3052 and 3054 are defined as the output signals of artificial neural network 3020. In this case, the provision of two separate outputs defines this particular network configuration as providing a "two-step-ahead" forecast.
The number of hidden layers in a given deep learning feedforward network can be different for different datasets. However, it is clear from a review of FIG. 7 that the inclusion of additional hidden layers introduces more parameters, which may lead to overfitting problems for some predictive analytics applications. In addition, the use of a larger number of hidden layers also increases the computational complexity of the network. In accordance with aspects of the present invention, it has been found that only one or two hidden layers are necessary to provide accurate time series predictions of power plant operations.
In contrast to the "feedforward" neural network shown in FIG. 7, it is possible to create networks that include "feedback" paths, where this type of artificial neural network is referred to as a "recurrent neural network" (RNN). A recurrent neural network is able to take into account the past values of the inputs in generating an output. Introducing a greater history of the inputs into the process necessarily increases the input dimension of the network, which may be problematic in some cases. However, the ability to include this information tends to improve the accuracy of the predictions. FIG. 8 illustrates a first type of recurrent neural network, referred to in the art as an "Elman recurrent network" and shown as network 3060.
As shown, recurrent neural network 3060 consists of a single hidden layer 3062 positioned between an input layer 3064 and an output layer 3066. Also included in recurrent network 3060 is a context layer 3068, which in this case includes a first context node 3070 and a second context node 3072. In this configuration of a recurrent network, the outputs from the hidden layer are fed back to context layer 3068 and used as additional inputs, in combination with the newly-arriving data at input layer 3064. As shown, the output from a first neuron cell 3074 of hidden layer 3062 is stored in first context node 3070 (as well as being transmitted to a neuron cell 3076 of output layer 3066). A feedback arrow 3078 shows the return path of signal flow from the output of neuron cell 3074 to first context node 3070. Similarly, the output signal created by a second neuron cell 3080 of hidden layer 3062 is stored in second context node 3072 of context layer 3068 (and also forwarded as an input to a neuron cell 3082 in output layer 3066). A feedback arrow 3084 shows the return path of signal flow from the output of neuron cell 3080 to second context node 3072.
The previous output signals held in context nodes 3070 and 3072 (hereinafter referred to as "context values"), are then, together with the current training data values appearing as inputs x1, x2 and x3 (as appropriately weighted) at the current time step, applied as inputs to neuron cells 3074 and 3080 of hidden layer 3062. By incorporating the previous hidden layer output values with the current input values, it is possible to better predict sequences that exhibit time-varying patterns.
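One Elman time step as described above can be sketched as follows, with the hidden outputs returned as the next context values; all dimensions and weights are illustrative:

```python
import math

def elman_step(x, context, W_in, W_ctx, W_out):
    """One time step of an Elman recurrent network.

    Hidden outputs are computed from the current inputs plus the previous
    hidden outputs stored in the context nodes, then fed both to the
    output layer and back into the context layer for the next step.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(w * xi for w, xi in zip(W_in[h], x)) +
                      sum(w * c for w, c in zip(W_ctx[h], context)))
              for h in range(len(W_in))]
    output = [sigmoid(sum(w * hv for w, hv in zip(W_out[o], hidden)))
              for o in range(len(W_out))]
    return output, hidden  # new context values = this step's hidden outputs

# 3 inputs, 2 hidden/context nodes, 2 outputs, with made-up weights.
W_in = [[0.1, 0.2, -0.1], [0.3, -0.2, 0.05]]
W_ctx = [[0.5, -0.4], [0.2, 0.6]]
W_out = [[0.7, -0.3], [0.1, 0.9]]
context = [0.0, 0.0]
for x in ([1.0, 0.0, 0.5], [0.2, 0.8, 0.1]):
    out, context = elman_step(x, context, W_in, W_ctx, W_out)
print(out, context)
```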
In each of the various artificial neural networks described above, the neuron cells are described as applying an "activation function" (denoted as f in the drawings) to the collected group of weighted inputs in order to create the output signal. One common choice of activation function is the well-known sigmoid function:
f(z) = 1 / (1 + e^(-z))
The derivative of the sigmoid function thus takes the following form:
f'(z) = f(z) * (1 - f(z))
Another activation function used at times in artificial neural networks is the hyperbolic tangent function,
tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))
which has an output range of [-1, 1] (as opposed to [0,1] for the sigmoid function). The derivative of the hyperbolic tangent function is expressed as:
d/dz tanh(z) = 1 - tanh^2(z)
Other functions, such as other trigonometric functions, may be used as activation functions. Regardless of the particular activation function used, the output from a node (neuron) is defined as the "activation" of the node. The value of "z" in the above equations is defined as the weighted sum of the inputs in the previous layer.
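The activation functions above and their closed-form derivatives can be checked numerically, for example against a central difference quotient:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # f'(z) = f(z) * (1 - f(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def tanh_prime(z):
    # d/dz tanh(z) = 1 - tanh^2(z)
    return 1.0 - math.tanh(z) ** 2

# Check the closed-form derivatives against a numerical difference quotient.
h = 1e-6
for z in (-2.0, 0.0, 1.5):
    assert abs(sigmoid_prime(z) - (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)) < 1e-6
    assert abs(tanh_prime(z) - (math.tanh(z + h) - math.tanh(z - h)) / (2 * h)) < 1e-6
print(sigmoid(0.0), sigmoid_prime(0.0), tanh_prime(0.0))  # 0.5 0.25 1.0
```

These inexpensive derivative forms are what make the backpropagation training described below practical.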
For the power plant-related forecasting applications of aspects of the present invention, the inputs to the artificial neural network are typically the past values of the time series (for example, past values of energy demand for performing demand forecasting) and the output is the predicted future energy demand value(s). The predicted future energy demand is then used by power plant personnel in scheduling equipment and supplies for the following time period. The neural network, in general terms, performs the following function mapping:
y_(t+1) = f(y_t, y_(t-1), ..., y_(t-m+1))
where y_t is the observation at time t and m is an independent variable defining the number of past values utilized in the mapping function to create the predicted value.
The following discussion of using a created artificial neural network model to predict future values of a power plant-related set of time series data values will utilize a feedforward neural network model, for the sake of clarity in explaining the details of the invention. It is to be understood, however, that the same principles apply to the utilization of a recurrent neural network in developing a forecasting model for power plant operations.
Before an artificial neural network can be used to perform electric load demand forecasting (or any other type of power plant-related forecasting), it must be "trained" to do so. As mentioned above, training is the process of determining the proper weights Wi (sometimes referred to as arc weights) and bias values bi that are applied to the various inputs at activation nodes in the network. These weights are a key element to defining a proper network, since the knowledge learned by a network is stored in the arcs and nodes in terms of arc weights and node biases. It is through these linking arcs that an artificial neural network can carry out complex nonlinear mappings from its input nodes to its output nodes.
The training mode in this type of time series forecasting is considered as a "supervised" process, since the desired response of the network (testing set) for each input pattern (training set) is always available for use in evaluating how well the predicted output fits to the actual values. The training input data is in the form of vectors of training patterns (thus, the number of input nodes is equal to the dimension of the input vector). The total available data (referred to at times hereinafter as the "training information") is divided into a training set and a testing set. The training set is used for estimating the arc weights and bias values, with the testing set then used for measuring the "cost" of a network including the weights determined by the training set. The learning process continues until a set of weights and bias node values is found that minimizes the cost value.
It is usually recommended that about 10-25% of the time series data be used as the testing set, with the remaining data used as the training set, where this division is defined as a typical "training pattern". At a high level, the methodology utilized in accordance with aspects of the present invention to obtain a "deep learning" neural network model useful in generating an optimized air travel itinerary follows the flowchart as outlined in FIG. 9. As shown, the process begins at step 3500 by selecting a particular neural network model to be used (e.g., FFNN, RNN, or other suitable network configuration), as well as the number of hidden layers to be included in the model and the number of nodes to be included in each layer. An activation function is also selected to characterize the operation to be performed on the weighted sum of inputs at each node. Lastly, an initial set of weights and bias values are used to initiate the process.
Once all of the input information is gathered and the model is initialized, the training process continues at step 3520 by computing the gradients associated with both the determined weights and bias values for this model. As will be explained in detail below, one approach to computing these gradients is to use a "backpropagation" method, which starts at the output of the network model and works backwards to determine an error term that may be attributed to each layer (calculating for each individual node in each layer), working from the output layer, through the hidden layers, and back to the input layer.
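A minimal sketch of this supervised training with backpropagation on a one-hidden-layer feedforward network follows; the toy series, learning rate, and layer sizes are illustrative assumptions, not parameters of the claimed system:

```python
import math, random

random.seed(1)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# One hidden layer of H nodes mapping the m most recent observations
# to a one-step-ahead prediction.
m, H = 3, 3
W1 = [[random.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(H)]
    return h, sum(w * hv for w, hv in zip(W2, h)) + b2

def train_step(x, target, lr=0.5):
    # Backpropagation: the output-layer error term is propagated back
    # through the hidden layer using f'(z) = f(z)(1 - f(z)).
    global b2
    h, y = forward(x)
    err = y - target
    for j in range(H):
        delta = err * W2[j] * h[j] * (1.0 - h[j])
        for i in range(m):
            W1[j][i] -= lr * delta * x[i]
        b1[j] -= lr * delta
        W2[j] -= lr * err * h[j]
    b2 -= lr * err

# Toy time series; each training pattern is (m past values, next value).
series = [0.1, 0.4, 0.2, 0.5, 0.3, 0.6, 0.4, 0.7, 0.5, 0.8]
data = [(series[i:i + m], series[i + m]) for i in range(len(series) - m)]

def mse():  # the "cost" to be minimized
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

cost_before = mse()
for _ in range(2000):  # repeat gradient steps until the cost settles
    for x, t in data:
        train_step(x, t)
print(cost_before, "->", mse())
```

In a real deployment, part of the series would be held out as the testing set described below, with the remainder used for these weight updates.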
The next step in the process (shown as step 3530) is to perform an optimization on all of the gradients generated in step 3520, selecting an optimum set of weights and bias values that is defined as an "acceptable" set of parameters for the neural network model that best fits the time series being studied. As will be discussed below, it is possible to use more than one historical time series in this training process. With that in mind, the following step in the process is a decision point 3540, which asks if there is another "training information" set that is to be used in training the model. If the answer is "yes", the process moves to step 3550, which defines the next "training information" set to be used, returning the process to step 3520 to compute the gradients associated with this next set of training information.
Ultimately, when the total number of sets of training information to be used is exhausted, the process moves from step 3540 to step 3560, which inquires if there are multiple sets of optimized {W, b}. If so, these values are first averaged (step 3570) before continuing. The next step (step 3580) is to determine if there is a set of validation data that is to be used to perform one final "check" of the fit of the current neural network model with the optimized set {W, b} to a following set of time series values (i.e., the validation set).
If there is no need to perform this additional validation process, this final set of optimized {W, b} values are defined as the output from the training process and, going forward, are used in the developed neural network to perform the time series forecasting task (step 3590).
If there is a set of validation data present, a final cost measurement is performed (step 3600). If the predicted values from the model sufficiently match the validation set values (at step 3610), the use of this set of {W, b} values is confirmed, and again the process moves to step 3590. Otherwise, if the validation test fails, it is possible to re-start the entire process by selecting a different neural network model (step 3620) and returning to step 3500 to try again to find a model that accurately predicts the time series under review.
The elements of the deep learning neural network methodology as described above may be implemented in a computer system comprising a single unit, or a plurality of units linked by a network or a bus.

Claims (7)

  1. An Artificial Intelligence ("AI") server, comprising:
    a memory unit configured to store a current location data of a mobile terminal and air flight data near the current location, respectively received from API providers;
    a processing unit configured to calculate a prioritized itinerary list by applying a genetic algorithm or a neural network algorithm on the data.
  2. The AI server of claim 1, wherein the memory unit further stores a wish-visit-list received from a mobile terminal.
  3. The AI server of claim 1, wherein the current location data comprise GPS coordinate of the mobile terminal and a map data at least comprising a route from the mobile terminal to a boarding gate for an airliner available.
  4. The AI server of claim 1, wherein the current location data further comprise current traffic information for the route from the mobile terminal to a boarding gate for an airliner available.
  5. The AI server of claim 1, wherein the prioritized itinerary list comprises air travel itinerary.
  6. The AI server of claim 1, wherein the prioritized itinerary list comprises available stand-by ticket information.
  7. An Artificial Intelligence ("AI") server, comprising:
    a memory unit configured to store a current location data of a mobile terminal and available ticket data, respectively received from API providers;
    a processing unit configured to calculate and push to the mobile terminal a prioritized ticket list available to enter the gate within time.
PCT/KR2017/009921 2017-01-06 2017-09-10 Artificial intelligence server for prioritized air traveling itinerary Ceased WO2018128245A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170002183A KR20180081225A (en) 2017-01-06 2017-01-06 Scheduling service system for trip around the world using vacant seats based on predicting flight occupancy rate
KR10-2017-0002183 2017-01-06

Publications (1)

Publication Number Publication Date
WO2018128245A1 true WO2018128245A1 (en) 2018-07-12
