CN111859169B - Destination recommendation method and system - Google Patents
Destination recommendation method and system
- Publication number: CN111859169B
- Application number: CN201910486831.7A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: G06F16/9537, Spatial or temporal dependent retrieval, e.g. spatiotemporal queries (under G06F16/00, Information retrieval; Database structures therefor)
Abstract
The embodiments of the present application disclose a destination recommendation method and system. The destination recommendation method comprises: obtaining order data of a current order, the order data including at least the current time and the current location or departure-place information; obtaining the user's historical orders within a first time period; extracting historical destinations and feature parameters related to the historical destinations from the historical orders; determining candidate points of interest based on the historical destinations and the order data of the current order; determining probability values of the candidate points of interest based on the feature parameters; and determining a recommended destination based on the probability values. The method and system recommend a destination to the user based on the user's historical orders, allowing the user to determine the order destination conveniently, quickly, and accurately, thereby improving the user experience.
Description
Technical Field
The present application relates to the field of the Internet, and in particular to a destination recommendation method and system.
Background
In recent years, with the rapid development of mobile communication technology, a large number of applications based on smart terminals have emerged, among which ride-hailing applications are some of the most popular. Before initiating a ride request, a passenger enters the order's origin and destination through the client. After the order is initiated, a driver picks the passenger up at the entered origin and completes the service request according to the entered destination. Currently, when initiating an order, the user is usually required to determine and accurately enter the destination name before the service request can be issued. If the user uses the ride-hailing service frequently and regularly travels to a certain destination at a certain time, these repeated operations greatly degrade the user experience. On the other hand, if the system always recommends a place the user does not want to go to as the order destination, the user still has to enter or modify the destination information every time, which is no better. Therefore, it is desirable to provide a method that intelligently and accurately infers the destination the user wants to reach and recommends it to the user, so as to avoid frequent, repeated determination and entry of information and improve the user experience.
Disclosure of Invention
An aspect of the present application provides a destination recommendation method. The method comprises: obtaining order data of a current order, the order data including at least the current time and the current location or departure-place information; obtaining the user's historical orders within a first time period; extracting historical destinations and feature parameters related to the historical destinations from the historical orders; determining candidate points of interest based on the historical destinations and the order data of the current order; determining probability values of the candidate points of interest based on the feature parameters; and determining a recommended destination based on the probability values.
In some embodiments, the feature parameters include one or a combination of high-dimensional sparse features, user-related features, contextual features, statistical features, other sparse features, and negative feedback features.
In some embodiments, the high-dimensional sparse features include one or more of a user name, a point of interest name, and a city name; the user-related features include one or more of age, gender, occupation, the confidence that a point of interest is the home address, the confidence that a point of interest is the work address, whether the user is at work, the average distance of the historical orders, the distance variance of the historical orders, the distance between the current location and a point of interest, and the points of interest with higher occurrence frequency; the contextual features include one or more of the time at which a point of interest occurs, the date on which a point of interest occurs, the longitude and latitude of a point of interest, and the map coordinates of a point of interest; the statistical features include one or more of the distribution probability of a point of interest by time, the distribution probability by address name, and the distribution probability by time and location; the other sparse features include the frequency with which a point of interest appears as a destination in the historical orders within a certain time; and the negative feedback features include negative feedback on a point of interest within a certain time.
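As a non-normative sketch, a few of the feature groups enumerated above could be computed along these lines (the order-record field names such as `destination`, `distance_km`, and `hour` are assumptions for illustration, not part of the claims):

```python
from statistics import mean, pvariance

def extract_features(history_orders, poi):
    """Illustrative extraction of a few feature parameters for one
    candidate point of interest from a user's historical orders."""
    dists = [o["distance_km"] for o in history_orders]
    poi_orders = [o for o in history_orders if o["destination"] == poi]
    return {
        # user-related features: average distance and distance variance of the orders
        "avg_order_distance": mean(dists),
        "order_distance_variance": pvariance(dists),
        # "other sparse" feature: how often this POI was the order destination
        "poi_destination_frequency": len(poi_orders) / len(history_orders),
        # contextual feature: hours at which this POI occurred as a destination
        "poi_hours": sorted({o["hour"] for o in poi_orders}),
    }
```

In practice, these values would be assembled into the feature matrix consumed by the recommendation model.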
In some embodiments, determining the probability values of the candidate points of interest based on the feature parameters includes determining the probability values of the candidate points of interest based on a point of interest recommendation model.
In some embodiments, the point of interest recommendation model is obtained by: obtaining the user's historical orders within a second time period; extracting historical destinations and feature parameters of the historical destinations from the historical orders; and training the point of interest recommendation model based on the feature parameters and the historical destinations.
In some embodiments, the point of interest recommendation model is a DeepFM model.
In some embodiments, the output of the DeepFM model is:
fusion = sigmoid(w1 × first_order_out + w2 × second_order_out + w3 × dnn_out);
where w1, w2, and w3 are weight values, first_order_out is the first-order output of the FM model, second_order_out is the second-order output of the FM model, and dnn_out is the high-order output of the DNN model.
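This fusion step can be sketched in a few lines; the default weights below are placeholders, since the application treats w1, w2, and w3 as learned values:

```python
import math

def sigmoid(x):
    """Logistic squashing function used by the fusion output."""
    return 1.0 / (1.0 + math.exp(-x))

def deepfm_fusion(first_order_out, second_order_out, dnn_out,
                  w1=1.0, w2=1.0, w3=1.0):
    """Weighted fusion of the FM first-order output, the FM second-order
    output, and the DNN high-order output, squashed into a probability."""
    return sigmoid(w1 * first_order_out + w2 * second_order_out + w3 * dnn_out)
```

The sigmoid guarantees the fused score lies in (0, 1), so it can be compared against a preset probability threshold.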
In some embodiments, the method for obtaining the point of interest recommendation model further comprises determining a loss function and optimizing the recommendation model based on the loss function.
In some embodiments, the loss function is:
Loss=β1×lr_loss+β2×second_order_out_loss+β3×dnn_out_loss+β4×fusion_loss
where β1, β2, β3, and β4 are weight values; lr_loss is the loss computed for the LR (logistic regression) model, whose prediction is sigmoid(θ^T X), where X is a feature matrix composed of the feature parameters related to the destination, θ is the weight vector corresponding to X, and θ^T is the transpose of θ; second_order_out_loss is the loss computed for the second-order output of the FM model; dnn_out_loss is the loss computed for the high-order output of the DNN model; and fusion_loss is the loss computed for the fusion output fusion = sigmoid(w1 × first_order_out + w2 × second_order_out + w3 × dnn_out).
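The weighted multi-task loss can be sketched directly from the formula; the equal β defaults below are placeholders, not values taken from the application:

```python
def combined_loss(lr_loss, second_order_out_loss, dnn_out_loss, fusion_loss,
                  beta1=0.25, beta2=0.25, beta3=0.25, beta4=0.25):
    """Weighted sum of the four per-branch losses (LR, FM second-order,
    DNN high-order, and fusion), as in the Loss formula above."""
    return (beta1 * lr_loss + beta2 * second_order_out_loss
            + beta3 * dnn_out_loss + beta4 * fusion_loss)
```

During training, each branch loss would be computed against the same labels (whether a candidate was the actual destination), and the weighted sum would be minimized.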
In some embodiments, determining a recommended destination based on the probability values includes obtaining the maximum of the probability values and, when the maximum probability value is greater than a preset value, determining the candidate point of interest corresponding to the maximum probability value as the recommended destination.
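The thresholded selection rule reads as a short function over the candidate probabilities; the preset value of 0.5 below is an illustrative placeholder:

```python
def pick_recommendation(probabilities, preset_value=0.5):
    """Return the candidate point of interest with the maximum probability,
    but only when that maximum exceeds the preset value; otherwise return
    None so the user falls back to entering a destination manually."""
    if not probabilities:
        return None
    best = max(probabilities, key=probabilities.get)
    return best if probabilities[best] > preset_value else None
```

Returning None when no candidate clears the threshold avoids recommending a place the user does not want, which is the failure mode the Background section describes.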
Another aspect of the present application provides a destination recommendation system. The destination recommendation system comprises an acquisition module, a probability value determination module, and a recommended destination determination module. The acquisition module is configured to acquire order data of a current order, the order data including at least the current time and the current location or departure-place information, to acquire the user's historical orders within a first time period, and to extract historical destinations and feature parameters related to the historical destinations from the historical orders. The probability value determination module is configured to determine candidate points of interest based on the historical destinations and the order data of the current order, and to determine probability values of the candidate points of interest based on the feature parameters. The recommended destination determination module is configured to determine a recommended destination based on the probability values.
In some embodiments, the feature parameters include one or a combination of high-dimensional sparse features, user-related features, contextual features, statistical features, other sparse features, and negative feedback features.
In some embodiments, the high-dimensional sparse features include one or more of a user name, a point of interest name, and a city name; the user-related features include one or more of age, gender, occupation, the confidence that a point of interest is the home address, the confidence that a point of interest is the work address, whether the user is at work, the average distance of the historical orders, the distance variance of the historical orders, the distance between the current location and a point of interest, and the points of interest with higher occurrence frequency; the contextual features include one or more of the time at which a point of interest occurs, the date on which a point of interest occurs, the longitude and latitude of a point of interest, and the map coordinates of a point of interest; the statistical features include one or more of the distribution probability of a point of interest by time, the distribution probability by address name, and the distribution probability by time and location; the other sparse features include the frequency with which a point of interest appears as a destination in the historical orders within a certain time; and the negative feedback features include negative feedback on a point of interest within a certain time.
In some embodiments, the probability value determination module is further configured to determine the probability values of the candidate points of interest based on a point of interest recommendation model.
In some embodiments, the destination recommendation system further comprises a training module configured to obtain the user's historical orders within a second time period, extract historical destinations and feature parameters of the historical destinations from the historical orders, and train the point of interest recommendation model based on the feature parameters and the historical destinations.
In some embodiments, the point of interest recommendation model is a DeepFM model.
In some embodiments, the output of the DeepFM model is fusion = sigmoid(w1 × first_order_out + w2 × second_order_out + w3 × dnn_out), where w1, w2, and w3 are weight values, first_order_out is the first-order output of the FM model, second_order_out is the second-order output of the FM model, and dnn_out is the high-order output of the DNN model.
In some embodiments, the training module is further configured to determine a loss function and optimize the recommendation model based on the loss function.
In some embodiments, the loss function is:
Loss=β1×lr_loss+β2×second_order_out_loss+β3×dnn_out_loss+β4×fusion_loss
where β1, β2, β3, and β4 are weight values; lr_loss is the loss computed for the LR (logistic regression) model, whose prediction is sigmoid(θ^T X), where X is a feature matrix composed of the feature parameters related to the destination, θ is the weight vector corresponding to X, and θ^T is the transpose of θ; second_order_out_loss is the loss computed for the second-order output of the FM model; dnn_out_loss is the loss computed for the high-order output of the DNN model; and fusion_loss is the loss computed for the fusion output fusion = sigmoid(w1 × first_order_out + w2 × second_order_out + w3 × dnn_out).
In some embodiments, the recommended destination determination module is further configured to obtain the maximum of the probability values and, when the maximum probability value is greater than a preset value, determine the candidate point of interest corresponding to the maximum probability value as the recommended destination.
Another aspect of the present application provides a destination recommendation device. The destination recommendation device includes at least one processor configured to execute computer instructions to implement the destination recommendation method.
Another aspect of the present application provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions; when a computer reads the computer instructions in the storage medium, the computer executes the destination recommendation method.
Drawings
The present application is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals denote like structures, in which:
FIG. 1 is a schematic diagram of an application scenario of a destination recommendation system, according to some embodiments of the present application;
FIG. 2 is a block diagram of a destination recommendation system according to some embodiments of the application;
FIG. 3 is an exemplary flow chart of a destination recommendation method according to some embodiments of the application;
FIG. 4 is an exemplary flow chart of a method of training a point of interest recommendation model, according to some embodiments of the application;
FIG. 5 is an exemplary block diagram of a point of interest recommendation model, according to some embodiments of the application;
FIG. 6 is an exemplary flow chart of a method for determining a recommended destination using the model, according to some embodiments of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are merely some examples or embodiments of the present application, and those of ordinary skill in the art can apply the present application to other similar situations according to these drawings without inventive effort. Unless otherwise apparent from the context or otherwise specified, like reference numerals in the figures denote like structures or operations.
It should be appreciated that "system," "apparatus," "unit," and/or "module" as used herein are merely one way of distinguishing different components, elements, parts, portions, or assemblies at different levels. These words may be replaced by other expressions that achieve the same purpose.
As used in the specification and the claims, the terms "a," "an," and/or "the" do not refer specifically to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that explicitly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Flowcharts are used in the present application to describe the operations performed by the system according to the embodiments of the present application. It should be appreciated that the preceding or following operations are not necessarily performed in exact order. Rather, the steps may be processed in reverse order or simultaneously, and other operations may be added to, or removed from, these processes.
The embodiments of the present application may be applied to different transportation systems, including but not limited to one or a combination of land, sea, aviation, and aerospace systems, for example, transportation systems that manage and/or dispatch taxis, chauffeured cars, hitch rides (carpooling), buses, designated driving, trains, bullet trains, high-speed rail, ships, airplanes, hot-air balloons, unmanned vehicles, express pick-up/delivery, and the like. The application scenarios of the different embodiments of the present application include, but are not limited to, one or a combination of web pages, browser plug-ins, clients, customized systems, in-enterprise analysis systems, artificial intelligence robots, and the like. It should be understood that the application scenarios of the system and method described herein are merely some examples or embodiments, and those skilled in the art can apply the present application to other similar scenarios without inventive effort, for example, other similar systems that guide users to parking.
The terms "passenger," "passenger side," "user terminal," "customer," "demander," "service demander," "consumer," "user demander," and the like used herein are interchangeable and refer to the party, whether a person or a tool, that needs or subscribes to a service. Likewise, "driver," "driver side," "provider," "supplier," "service provider," "server," "service party," and the like are also interchangeable herein and refer to a person, tool, or other entity that provides or assists in providing a service. In addition, a "user" as described in the present application may be a party that needs or subscribes to a service, or a party that provides or assists in providing a service.
Fig. 1 is a schematic view of an application scenario of a destination recommendation system according to some embodiments of the present application.
As shown in FIG. 1, the online service system 100 may recommend a destination and display it on a user interface so that the user can conveniently select the recommended destination. The online service system 100 may be an online service platform for Internet services. For example, the online service system 100 may be an online transport service platform for transport services. In some embodiments, the online service system 100 may be applied to network ride-hailing services such as taxi hailing, express-car hailing, chauffeured-car hailing, bus hailing, carpooling, bus service, driver hire, and pick-up services. In some embodiments, the online service system 100 may also be applied to services such as designated driving, express delivery, food delivery, and the like. The online service system 100 may be an online service platform that includes a server 110, a user terminal 120, a storage device 130, a network 140, and an information source 150. The server 110 may include a processing device 112.
In some embodiments, server 110 may be used to process information and/or data related to determining a recommended destination. The server 110 may be a stand-alone server or a group of servers. The server farm may be centralized or distributed (e.g., server 110 may be a distributed system). The server 110 may be local or remote in some embodiments. For example, server 110 may access information and/or data stored in user terminal 120, storage device 130, and/or information source 150 via network 140. For another example, the server 110 may be directly connected to the user terminal 120, the storage device 130, and/or the information source 150 to access information and/or data stored therein. In some embodiments, server 110 may execute on a cloud platform. For example, the cloud platform may include one of a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, or the like, or any combination thereof.
In some embodiments, the server 110 may include a processing device 112. The processing device 112 may process data and/or information related to the service request to perform one or more of the functions described in the present application. For example, the processing device 112 may receive a ride request signal sent by the user terminal 120 and provide a recommended destination to the user. In some embodiments, the processing device 112 may include one or more sub-processing devices (e.g., a single-core processing device or a multi-core processing device). By way of example only, the processing device 112 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof.
In some embodiments, the user may obtain the recommended destination through the user terminal 120. In some embodiments, the user terminal 120 may include one or any combination of a desktop computer 120-1, a notebook computer 120-2, a vehicle-mounted device 120-3, a mobile device 120-4, and the like. In some embodiments, the mobile device 120-4 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a control device for a smart appliance, a smart monitoring device, a smart television, a smart camera, an intercom, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart wristband, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point-of-sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eyepiece, an augmented reality helmet, augmented reality glasses, an augmented reality eyepiece, or the like, or any combination thereof. In some embodiments, the user terminal 120 may include a device with a positioning function to determine the location of the user and/or the user terminal 120.
The storage device 130 may store data and/or instructions. In some embodiments, the storage device 130 may store data acquired from the server 110 or the user terminal 120. In some embodiments, the storage device 130 may store information and/or instructions for execution or use by the server 110 to perform the exemplary methods described herein. In some embodiments, the storage device 130 may include mass storage, removable storage, volatile read-write memory (e.g., random-access memory (RAM)), read-only memory (ROM), or the like, or any combination thereof. In some embodiments, the storage device 130 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, or the like, or any combination thereof.
The network 140 may facilitate the exchange of data and/or information. In some embodiments, one or more components in the online service system 100 (e.g., the server 110, the user terminal 120, the information source 150) may send data and/or information to other components in the online service system 100 over the network 140. In some embodiments, the network 140 may be any type of wired or wireless network. For example, the network 140 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 140 may include one or more network access points. For example, the network 140 may include wired or wireless network access points, such as base stations and/or internet exchange points 140-1, 140-2, through which one or more components of the online service system 100 may connect to the network 140 to exchange data and/or information.
In some embodiments, information source 150 may be connected to network 140 to communicate with one or more components of online service system 100 (e.g., server 110, user terminal 120, etc.). One or more components of the online service system 100 may access data or instructions stored in the information source 150 via the network 140. In some embodiments, information source 150 may be directly connected to or in communication with one or more components (e.g., server 110, user terminal 120) in online service system 100. In some embodiments, information source 150 may be part of server 110.
To implement the various modules, units, and functions thereof described in this disclosure, a computing device or mobile device may serve as a hardware platform for one or more of the components described herein. The hardware elements, operating systems, and programming languages of these computers or mobile devices are conventional in nature, and one skilled in the art will be familiar with these techniques and will adapt the techniques to the on-demand service system described herein. A computer with user interface elements may be used to implement a Personal Computer (PC) or other type of workstation or terminal device, and may also act as a server if properly programmed.
FIG. 2 is a block diagram of a destination recommendation system, according to some embodiments of the present application.
As shown in FIG. 2, the destination recommendation system may include an acquisition module 210, a probability value determination module 220, a recommended destination determination module 230, and a training module 240. In some embodiments, the acquisition module 210, the probability value determination module 220, the recommended destination determination module 230, and the training module 240 may be provided in the server 110.
The acquisition module 210 may be configured to acquire order data of a current order, where the order data includes at least current time, current location, or departure location information. In some embodiments, the obtaining module 210 may also obtain a user history order over a first period of time, and extract a history destination and a characteristic parameter associated with the history destination from the history order. In some embodiments, the feature parameters related to the historical destination that the acquisition module 210 may extract from the historical order may include one or a combination of several of high-dimensional sparse features, user-related features, contextual features, statistical features, other sparse features, negative feedback features, and the like.
In some embodiments, the high-dimensional sparse features extracted by the acquisition module 210 may include one of a user name (user ID), a point of interest name (POI ID), a city name (city ID), and the like, or any combination thereof. In some embodiments, the user name feature among the high-dimensional sparse features may include one of numbers, letters (e.g., uppercase letters, lowercase letters, etc.), symbols (e.g., @, &, etc.), or any combination thereof. In some embodiments, the point of interest name feature among the high-dimensional sparse features may be a geographical area, such as the Wudaokou business district or the Nanjing Road Pedestrian Street, or an administrative division, such as a province, municipality, city, or county. In some embodiments, the acquisition module 210 may obtain the city name feature among the high-dimensional sparse features directly from the historical destinations in the historical orders, or may obtain it through a positioning device in the user terminal 120.
In some embodiments, the user-related features extracted by the acquisition module 210 may include one of the user's gender, age, occupation, the confidence that a point of interest is the home address, the confidence that a point of interest is the work address, whether the user is at work, the average distance of the historical orders, the distance variance of the historical orders, the distance between the current location and a point of interest, the points of interest with higher occurrence frequency, and the like, or any combination thereof. In some embodiments, the acquisition module 210 may compute the average distance feature of the historical orders based on the straight-line distance between the origin and destination of each historical order, the route distance (e.g., walking distance, riding distance, etc.), and the like. In some embodiments, the acquisition module 210 may obtain the distance variance feature of the historical orders based on the average distance of the historical orders.
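The application does not fix a distance formula, but one plausible sketch of the straight-line order distance and the derived average and variance features is:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def order_distance_stats(orders):
    """Average distance and population variance over a user's historical
    orders, where each order is an ((lat, lon), (lat, lon)) origin/destination pair."""
    dists = [haversine_km(*o, *d) for o, d in orders]
    n = len(dists)
    avg = sum(dists) / n
    var = sum((x - avg) ** 2 for x in dists) / n
    return avg, var
```

A routing API would be needed instead for walking or riding distances; the haversine formula only covers the straight-line case.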
In some embodiments, the contextual features extracted by the acquisition module 210 may include one of the time at which a point of interest occurs, the date on which a point of interest occurs, the longitude and latitude of a point of interest, the map coordinates of a point of interest, and the like, or any combination thereof. In some embodiments, the acquisition module 210 may record the time at which a point of interest occurs as a point in time or as a time period.
In some embodiments, the statistical features extracted by the acquisition module 210 may include one or a combination of the distribution probability of the points of interest by time, the distribution probability by address name, the distribution probability by longitude and latitude, and the distribution probability by time and location. In some embodiments, the acquisition module 210 may obtain the distribution probability of the points of interest by address name based on a geographical area (such as the Wudaokou business district, the Nanjing Road Pedestrian Street, etc.) or based on administrative divisions (such as a province, municipality, city, or county). In some embodiments, the acquisition module 210 may obtain the distribution probability of the points of interest by longitude and latitude based on one or any combination of the region, the region's popularity, the point of interest category, and the like.
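One concrete reading of the "distribution probability by time" statistic is an empirical conditional frequency over the historical orders (the `hour` and `destination` field names are assumptions for illustration):

```python
from collections import Counter

def time_distribution(history_orders):
    """Estimate P(destination = poi | hour) from historical orders given as
    dicts with 'hour' and 'destination' fields."""
    by_hour_poi = Counter((o["hour"], o["destination"]) for o in history_orders)
    hour_totals = Counter(o["hour"] for o in history_orders)
    return {(h, poi): c / hour_totals[h] for (h, poi), c in by_hour_poi.items()}
```

The same pattern, keyed on address name or on a latitude/longitude grid cell instead of the hour, would yield the other distribution probabilities listed above.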
In some embodiments, the other sparse features extracted by the obtaining module 210 may include one or a combination of the frequency with which a point of interest appears as a destination and the frequency with which a point of interest appears as a destination in the historical orders over a certain period of time.
In some embodiments, the negative feedback feature extracted by the obtaining module 210 may include the probability that the user does not adopt a point of interest as a destination at a certain time. In some embodiments, negative feedback may occur when the system recommends a point of interest to the user multiple times at a particular time, but the user never clicks it. For example, if the system recommends the destination "Digital Valley" to a user at 15:00 eight times but the user never clicks it, then "Digital Valley & 15:00" is a piece of negative feedback information.
The probability value determination module 220 may be configured to determine candidate points of interest based on the historical destinations and the order data of the current order, and determine probability values for the candidate points of interest based on the characteristic parameters. In some embodiments, probability value determination module 220 may directly determine all historical destinations in the historical orders as candidate points of interest. In some embodiments, the probability value determination module 220 may determine destinations whose frequency is greater than a preset threshold as candidate points of interest, based on the frequency of occurrence of the historical destinations in the historical orders. In some embodiments, the probability value determination module 220 may determine destinations whose frequency is greater than a threshold as candidate points of interest, based on the frequency of occurrence of the historical destinations in the historical orders over a certain period of time. In some embodiments, the probability value determination module 220 may determine destinations within a particular time (e.g., the start or end of the workday) or a particular period (e.g., a week, a weekend, a holiday, etc.) as candidate points of interest. In some embodiments, the probability value determination module 220 may determine destinations within a particular region (e.g., a business district, a work district, etc.) as candidate points of interest.
In some embodiments, the probability value determination module 220 may determine the probability value of a candidate point of interest based on a statistical calculation over the feature parameters (e.g., histogram statistics, line-chart statistics, etc.). In some embodiments, the probability value determination module 220 may determine the probability value of a candidate point of interest by way of modeling. In some embodiments, the probability value determination module 220 may determine the probability value of a candidate point of interest directly using a trained point of interest recommendation model. In some embodiments, the probability value determination module 220 may determine the probability value of a candidate point of interest by building a probability value calculation model.
The recommendation destination determining module 230 may be configured to determine a recommendation destination based on the probability values. In some embodiments, the recommended destination may include at least one of the candidate points of interest. In some embodiments, the recommendation destination determining module 230 may determine the candidate points of interest ranked in the top N by probability value as recommendation destinations, where N is an integer greater than or equal to 1 (e.g., 1, 2, 3, etc.). In some embodiments, the recommendation destination determining module 230 may directly determine the candidate point of interest with the greatest probability value as the recommendation destination. In some embodiments, the recommendation destination determining module 230 may set a threshold and determine, as the recommendation destination, the candidate point of interest whose probability value is both the maximum among the candidates and greater than the preset threshold.
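The selection strategies above can be sketched as follows. This is a minimal illustration with hypothetical function names and candidate data, not the patented implementation:

```python
# Hypothetical sketch of recommendation-destination selection from candidate
# probability values; names and thresholds are illustrative only.

def top_n_destinations(candidates, n=3):
    """Return the N candidate points of interest with the highest probability."""
    return sorted(candidates, key=lambda c: c["prob"], reverse=True)[:n]

def best_destination(candidates, threshold=0.6):
    """Return the highest-probability candidate only if it exceeds the threshold."""
    best = max(candidates, key=lambda c: c["prob"])
    return best if best["prob"] > threshold else None

candidates = [
    {"poi": "Zhongguancun",  "prob": 0.72},
    {"poi": "Shangdi",       "prob": 0.18},
    {"poi": "Digital Valley", "prob": 0.10},
]
```

When no candidate clears the threshold, `best_destination` returns `None`, which corresponds to performing no recommendation at all.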
The training module 240 may be used to train a point of interest recommendation model. Specifically, the training module 240 may obtain a user history order within a second time period, extract the feature parameter and the destination from the history order, and train the point of interest recommendation model based on the feature parameter and the destination. In some embodiments, the point of interest recommendation model includes, but is not limited to, a DeepFM model.
It should be understood that the system shown in fig. 2 and its modules may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Where the hardware portions may be implemented using dedicated logic and the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or dedicated design hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system of the present application and its modules may be implemented not only with hardware circuitry such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also with software executed by various types of processors, for example, and with a combination of the above hardware circuitry and software (e.g., firmware).
It should be noted that the above description of the destination recommendation system and its modules is for descriptive convenience only and is not intended to limit the application to the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily, or a subsystem may be constructed to connect with other modules, without departing from such principles. For example, in some embodiments, the obtaining module 210, the probability value determination module 220, the recommendation destination determination module 230, and the training module 240 disclosed in fig. 2 may be different modules in one system, or a single module may implement the functions of two or more of the modules described above. For another example, the obtaining module 210 and the probability value determination module 220 may be two separate modules, or a single module may provide both the obtaining and probability value determination functions. As a further example, the modules may share one memory module, or each module may have its own memory module. Such variations are within the scope of the application.
FIG. 3 is an exemplary flow chart of a method of point of interest recommendation, according to some embodiments of the application. As shown in fig. 3, the method 300 for point of interest recommendation may include:
In step 310, order data of the current order may be obtained, where the order data includes at least the current time, the current location, or departure place information. In some embodiments, step 310 may be performed by the obtaining module 210. In some embodiments, the current order may be the user's current ride request. In some embodiments, the order data of the current order may include the current time and current location of the user's ride request, or the departure place information entered by the user. For example, when a user opens the online ride-hailing platform to request a ride, the system can obtain the current time and the user's current location at the moment the platform is opened, or the departure place information the user enters on the platform.
In step 320, a user's historical orders within a first time period may be obtained, and a historical destination and characteristic parameters associated with the historical destination may be extracted from the historical orders. In some embodiments, step 320 may be performed by the obtaining module 210. In some embodiments, the historical orders within the first time period may refer to historical orders within the last three days, the last week, the last month, or the last half year. In some embodiments, the first time period and the second time period, from which the training data is selected during model training, do not overlap. For example, orders within the first time period do not include historical orders within the second time period.
In some embodiments, the feature parameters may include one or a combination of high-dimensional sparse features, user-related features, contextual features, statistical features, other sparse features, and negative feedback features.
In some embodiments, the high-dimensional sparse features may include one of a user name (user ID), a point of interest name (POI ID), a city name (city ID), and the like, or any combination thereof. In some embodiments, the user name may be an account registered by the user with the service platform. In some embodiments, the user name may include one of numbers, letters (e.g., uppercase, lowercase, etc.), symbols (e.g., @, &, etc.), or any combination thereof. In some embodiments, the user name may be an account set by the user, or may be an account generated by the service platform system according to the registration information of the user. For example, the user name set by the user may be the pinyin of the user's name, the user's birth date, a symbol, or a combination thereof (such as xiaoli@890723). For another example, the user name generated by the service platform system according to the user's registration information may be the user's mobile phone number, mailbox, registration time, and the like.
In some embodiments, the point of interest name may be a geographic region, such as the Wudaokou business district, the Nanjing Road pedestrian street, etc., or an administrative division, such as a province, a municipality, a city, a county, a district, etc. By way of example only, the point of interest name may be a city, such as Beijing, or an area of a city, such as Haidian District, Beijing, or a more specific address, such as the south gate of Peking University, Haidian District, Beijing, and so forth.
In some embodiments, the city name may be the name of the city to which a destination in the historical orders belongs. In some embodiments, the city name may be obtained directly from the destination by the obtaining module 210. For example, the obtaining module 210 may obtain the city name "Beijing" from the destination "south gate of Peking University, Haidian District, Beijing". In some embodiments, the city name may be obtained by the obtaining module 210 via a positioning device of the user terminal 120.
In some embodiments, the user-related features may include one of the user's gender, age, a confidence that a point of interest is the home address, a confidence that a point of interest is the work address, whether the user is at work, an average distance of the historical orders, a distance variance of the historical orders, a distance between the current location and a point of interest, frequently occurring points of interest, and the like, or any combination thereof.
In some embodiments, the confidence that a point of interest is a home address may be the probability that the point of interest is the user's home address, and the confidence that a point of interest is a work address may be the probability that the point of interest is the user's work address. In some embodiments, the average distance of the historical orders may be the average distance between the origin and the destination across the historical orders. In some embodiments, the distance may be the straight-line distance between two places, a route distance (e.g., a walking distance, a riding distance, etc.), and so on. In some embodiments, the distance variance of the historical orders may be obtained based on the average distance of the historical orders.
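The average-distance and distance-variance features described above can be computed with the standard library; a minimal sketch with hypothetical distances in kilometers:

```python
from statistics import mean, pvariance

def order_distance_stats(distances_km):
    """Average distance and distance variance over a user's historical orders."""
    avg = mean(distances_km)
    # Variance is computed about the average distance, as the text describes.
    var = pvariance(distances_km, mu=avg)
    return avg, var

# Hypothetical origin-destination distances of four historical orders.
avg, var = order_distance_stats([4.0, 6.0, 5.0, 9.0])
```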
In some embodiments, the distance between the current location and the point of interest may be a distance between a location where the user opens the service platform through the user terminal 120 and the point of interest. In some embodiments, the distance between the current location and the point of interest may be the distance between the location and the point of interest at which the user entered the order start point via the user terminal 120. In some embodiments, the user's current location may be obtained by the user terminal 120 through a device having a positioning function.
In some embodiments, the points of interest with a higher frequency of occurrence may be points of interest the user frequently visits, such as a mall, a movie theater, or an entertainment venue the user often goes to.
In some embodiments, the contextual features may include one of the time at which the point of interest occurs, the date on which the point of interest occurs, the longitude and latitude of the point of interest, the map coordinates of the point of interest, and the like, or any combination thereof. In some embodiments, the time at which the point of interest occurs may include, but is not limited to, a point in time, a time period, and the like. For example, the point of interest may occur at 5 p.m., or between 5 p.m. and 6 p.m.
In some embodiments, the date on which the point of interest appears may take the form of a year-month-day date, a day of the week, a workday/rest day, or the like, or any combination thereof. For example, the date on which the point of interest appears may be a specific date such as January 7, 2018, a day of the week such as Monday or Sunday, or a workday (Monday through Friday)/rest day (Saturday and Sunday).
In some embodiments, the statistical features may include one or a combination of a distribution probability of the points of interest over time, a distribution probability over address names, a distribution probability over longitude and latitude, and a distribution probability over time and location.
In some embodiments, the distribution probability of the points of interest over time may be a time distribution probability divided by workday, holiday, and the like. In some embodiments, it may be a time distribution probability divided according to the commute periods of the day. For example, the time intervals may be 7:00-9:30 for the morning rush hour, 9:30-17:00 for normal hours, 17:00-19:00 for the evening rush hour, and 19:00-0:00 for other times. In some embodiments, it may be a time distribution probability divided according to the user's travel habits (e.g., daytime, evening). In some embodiments, the distribution probability of the points of interest over time may be computed according to the behavior habits of users in different cities. For example, since typical working hours in Beijing differ from those in another city, the time intervals may be divided differently.
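The commute-interval bucketing and the resulting time distribution probability can be sketched as follows; the bucket names and sample order times are hypothetical, and the interval boundaries follow the example in the text:

```python
from collections import Counter

def time_bucket(hour, minute=0):
    """Map an order time to a commute interval: morning peak 7:00-9:30,
    normal 9:30-17:00, evening peak 17:00-19:00, everything else 'other'."""
    t = hour + minute / 60
    if 7 <= t < 9.5:
        return "morning_peak"
    if 9.5 <= t < 17:
        return "normal"
    if 17 <= t < 19:
        return "evening_peak"
    return "other"

def time_distribution(order_times):
    """Probability that a point of interest's orders fall in each interval."""
    counts = Counter(time_bucket(h, m) for h, m in order_times)
    total = sum(counts.values())
    return {bucket: n / total for bucket, n in counts.items()}

# Hypothetical order times (hour, minute) for one point of interest.
dist = time_distribution([(8, 0), (18, 30), (18, 0), (12, 0)])
```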
In some embodiments, the division of regions in the distribution probability of the points of interest over address names may be a geographical division, such as the Wudaokou business district, the Nanjing Road pedestrian street, etc., or an administrative division, such as a province, a municipality, a city, a county, a district, etc.
In some embodiments, the division of intervals in the distribution probability of the points of interest over longitude and latitude may be based on one or any combination of the region, the region's popularity, the point-of-interest category, and the like. For example, longitude-latitude ranges corresponding to the same point-of-interest category (such as a business district, a work district, a residential district, etc.) may be assigned to the same interval.
In some embodiments, the partitioning of the points of interest over time and location may be based on a joint division of time intervals and address-name regions. For example, Haidian District and Chaoyang District of Beijing have different typical working hours, so the division may differ by region, such as Haidian District 9:00, Chaoyang District 8:30, Haidian District 17:30, Chaoyang District 17:00, and the like.
In some embodiments, the other sparse features may include one or a combination of the frequency with which a point of interest appears as a destination and the frequency with which a point of interest appears as a destination in the historical orders over a certain period of time. In some embodiments, the frequency with which a point of interest appears as a destination may be the number of times the point of interest appears as a destination in the historical orders. For example, the frequency of the destination "Zhongguancun" may be 38 times. In some embodiments, the frequency over a certain period of time may be the number of times the point of interest appears as a destination in the historical orders within that period. For example, "Shangdi" may appear as a destination 18 times between 17:00 and 19:00.
In some embodiments, the negative feedback feature may include the probability that the user does not adopt a point of interest as a destination during a certain period of time. In some embodiments, negative feedback may occur when the system recommends a point of interest to the user multiple times at a particular time, but the user never clicks it. For example, if the system recommends the destination "Digital Valley" to a user at 15:00 eight times but the user never clicks it, then "Digital Valley + 15:00" is a piece of negative feedback information. In some embodiments, the negative feedback feature may be the probability of negative feedback for the point of interest as a destination within the past week or the past month.
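The "point of interest + time" negative feedback probability described above can be sketched as follows; the recommendation log format and function name are hypothetical:

```python
def negative_feedback_prob(recommendations):
    """For each (poi, hour) key, the fraction of recommendations the user ignored."""
    shown, ignored = {}, {}
    for poi, hour, clicked in recommendations:
        key = (poi, hour)
        shown[key] = shown.get(key, 0) + 1
        if not clicked:
            ignored[key] = ignored.get(key, 0) + 1
    return {k: ignored.get(k, 0) / n for k, n in shown.items()}

# Hypothetical log: "Digital Valley" recommended 8 times at 15:00, never clicked;
# "Zhongguancun" recommended twice at 18:00, clicked once.
log = [("Digital Valley", 15, False)] * 8 + [("Zhongguancun", 18, True),
                                             ("Zhongguancun", 18, False)]
probs = negative_feedback_prob(log)
```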
At step 330, candidate points of interest may be determined based on the historical destinations and the order data for the current order. In some embodiments, step 330 may be performed by the probability value determination module 220.
In some embodiments, probability value determination module 220 may determine all destinations in the historical orders as candidate points of interest. In some embodiments, the probability value determination module 220 may determine a destination associated with the order data of the current order as a candidate point of interest. In some embodiments, the probability value determination module 220 may determine a historical destination of an order whose origin is the current location as a candidate point of interest. In some embodiments, the probability value determination module 220 may determine the destination of a historical order whose departure point is within a certain range of the current location as a candidate point of interest. For example, a historical destination of an order with a departure point within 500 m of the current location may be determined as a candidate point of interest. In some embodiments, the probability value determination module 220 may determine historical destinations at the same point in time or within the same time period as candidate points of interest. For example, if the current time is 5 p.m., the destinations of historical orders whose start time is 17:00, or falls between 16:00 and 18:00, may be determined as candidate points of interest. In some embodiments, the probability value determination module 220 may set destinations whose frequency is greater than a certain threshold as candidate points of interest. In some embodiments, the frequency may be the frequency with which a destination appears in the historical orders. For example, the probability value determination module 220 may determine destinations appearing more than 10 times in the historical orders as candidate points of interest. In some embodiments, the frequency may be the frequency of a destination in the historical orders over a certain period of time.
For example, the probability value determination module 220 may determine destinations appearing more than 30 times within the period 17:00-19:00 as candidate points of interest. In some embodiments, the probability value determination module 220 may set destinations within a particular time (e.g., the end of the workday) or a particular period (e.g., a week, a weekend, a holiday, etc.) as candidate points of interest. For example, destinations on workdays (Monday through Friday) may be set as candidate points of interest. In some embodiments, the probability value determination module 220 may determine a destination within a particular region as a candidate point of interest. For example, the probability value determination module 220 may determine destinations within popular business districts as candidate points of interest.
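The candidate-filtering criteria above can be sketched as a single filter over the historical orders. The order-record keys and thresholds below are hypothetical, and the three criteria (origin near the current location, same order hour, frequent destination) are combined with a logical OR purely for illustration:

```python
def candidate_pois(history, current_hour, max_origin_dist_km=0.5, min_frequency=10):
    """Filter historical destinations into candidate points of interest."""
    # Count how often each destination appears in the historical orders.
    freq = {}
    for order in history:
        freq[order["dest"]] = freq.get(order["dest"], 0) + 1
    candidates = set()
    for order in history:
        if (order["origin_dist_km"] <= max_origin_dist_km   # departs near current location
                or order["hour"] == current_hour            # same time of day
                or freq[order["dest"]] >= min_frequency):   # frequent destination
            candidates.add(order["dest"])
    return candidates

# Hypothetical historical orders: destination, distance of the order's origin
# from the current location, and the order's start hour.
history = [
    {"dest": "A", "origin_dist_km": 0.2, "hour": 17},
    {"dest": "B", "origin_dist_km": 3.0, "hour": 9},
    {"dest": "B", "origin_dist_km": 2.0, "hour": 10},
]
```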
In some embodiments, the candidate points of interest may include at least one related point of interest. Specifically, a candidate point of interest may include information such as its name, category (e.g., a point-type or area-type region), longitude and latitude, and the like.
In step 340, a probability value for the candidate point of interest may be determined based on the feature parameter. In some embodiments, step 340 may be performed by the probability value determination module 220.
In some embodiments, the probability value of a candidate point of interest may be the probability, inferred from the feature parameters of the point of interest, that the candidate point of interest is the destination. In some embodiments, the probability values of the candidate points of interest may be determined by statistical calculation methods (e.g., histogram statistics, line-chart statistics, etc.). For example, the probability value of a candidate point of interest may be determined by counting how often the candidate point of interest appears in the historical orders, or how often it appears in the historical orders over a certain period of time. For another example, the probability value of a candidate point of interest may be determined by counting the ratio of the number of times the candidate point of interest was accepted by the user over a period of time to the total number of times it was recommended.
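The histogram-statistic approach can be sketched as a simple frequency share; destination labels are hypothetical:

```python
from collections import Counter

def frequency_probabilities(destinations):
    """Histogram statistic: each candidate's probability is its share of
    all historical destinations."""
    counts = Counter(destinations)
    total = sum(counts.values())
    return {dest: n / total for dest, n in counts.items()}

# Hypothetical historical destinations: "A" appears 3 times out of 4.
dist_probs = frequency_probabilities(["A", "A", "B", "A"])
```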
In some embodiments, the probability values of the candidate points of interest may be determined by modeling. In some embodiments, the point of interest recommendation model may be trained by the training module 240 based on the characteristic parameters of the user's historical orders and the destination, and the probability values for the candidate points of interest may be determined using the trained recommendation model. For more details on training the point of interest recommendation model, see FIG. 4 and its associated description. In some alternative embodiments, the probability values of candidate points of interest may also be predicted by other existing means, as the application is not limited in this regard.
In step 350, a recommendation destination may be determined based on the probability value. In some embodiments, step 350 may be performed by recommendation destination determination module 230. In some embodiments, the recommended destination may include at least one candidate point of interest.
In some embodiments, the recommendation destination determining module 230 may determine the candidate points of interest ranked in the top N by probability value (e.g., those most likely to be selected) as recommendation destinations, where N is an integer greater than or equal to 1 (e.g., 1, 2, 3, etc.).
In some embodiments, the recommendation destination determining module 230 may obtain a maximum probability value of the probability values, and determine a candidate interest point corresponding to the maximum probability value as the recommendation destination.
In some embodiments, the recommendation destination determining module 230 may determine the recommendation destination by setting a threshold. In some embodiments, the threshold may be determined manually or automatically. In some embodiments, the threshold may also be determined experimentally. In some embodiments, the recommendation destination determining module 230 may obtain the maximum of the probability values, and determine the candidate point of interest corresponding to the maximum probability value as the recommendation destination when that maximum is greater than a preset threshold.
In some embodiments, if the maximum probability value is less than the preset threshold, no point-of-interest recommendation is performed. For example, if the preset threshold is 0.6 and the maximum probability value among the candidate points of interest is 0.3, all candidate points of interest fall below the preset threshold, and no recommendation is performed. This ensures that the recommended points of interest closely match the destination the user intends to reach, and avoids disturbing the user with irrelevant recommendations.
It should be noted that the above description of the process 300 is for purposes of illustration and description only and is not intended to limit the scope of the present application. Various modifications and changes to flow 300 will be apparent to those skilled in the art in light of the teachings of this application. However, such modifications and variations are still within the scope of the present application. For example, in some embodiments, steps 310 and 320 may be combined into one step.
FIG. 4 is an exemplary flow chart of a method of training a point of interest recommendation model, according to some embodiments of the application. As shown in fig. 4, a process 400 of the training method of the point of interest recommendation model may include:
Step 410, obtaining a user history order in a second time period, and extracting the characteristic parameters and the destination from the history order. In some embodiments, this step 410 may be performed by training module 240.
In some embodiments, training module 240 may obtain a user's historical orders over a second time period and extract a historical destination and characteristic parameters associated with the historical destination from the historical orders. In some embodiments, the second time period is different from the first time period: the data extracted from the user's historical orders in the second time period is the training data, while the data extracted from the user's historical orders in the first time period is the calculation data for determining the recommendation destination and may serve as the input for determining the candidate point of interest probability values. In some embodiments, the second time period may be a period of one month, one quarter, or one year outside the first time period. In some embodiments, the second time period may be updated as time passes. For example, if last month's model was trained on data from March through May, this month's model update may use training data from April through June (the June data is no longer within the first time period).
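The rolling relationship between the two periods can be sketched with the standard library. The window lengths (a 30-day first period and a 90-day training window) are hypothetical values chosen only to mirror the example above:

```python
from datetime import date, timedelta

def training_window(today, first_period_days=30, window_days=90):
    """Second time period: a training window that ends where the
    (more recent) first time period begins, so the two never overlap."""
    end = today - timedelta(days=first_period_days)  # excludes the first period
    start = end - timedelta(days=window_days)
    return start, end

start, end = training_window(date(2019, 7, 1))
```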
In some embodiments, the destination-related feature parameters may include one or a combination of several of high-dimensional sparse features, user-related features, contextual features, statistical features, other sparse features, negative feedback features, and the like. In some embodiments, the high-dimensional sparse feature may include one or a combination of a user name (user ID), a point of interest name (POI ID), and a city name (city ID). In some embodiments, the characteristics related to the user may include one or a combination of several of age, gender, occupation, confidence that the point of interest is a home address, confidence that the point of interest is a work address, whether the user is working, average distance of historical orders, distance variance of historical orders, distance of current location from the point of interest, and points of interest with higher frequency of occurrence. In some embodiments, the contextual characteristics may include one or a combination of several of the time of occurrence of the point of interest, the date of occurrence of the point of interest, the latitude and longitude of the point of interest, and the map coordinates of the point of interest. In some embodiments, the statistical features may include one or a combination of distribution probabilities of the points of interest according to time, distribution probabilities according to address names, distribution probabilities according to longitude and latitude, and distribution probabilities according to time and location. In some embodiments, the other sparse features may include one or a combination of several of a frequency of destination points of interest, a frequency of destination points of interest in historical orders over time. In some embodiments, the negative feedback feature may include a probability that the user has not adopted negative feedback for the point of interest as a destination for a period of time. 
Further details regarding the feature parameters are similar to those described in step 320 and are not repeated here.
And step 420, training a point of interest recommendation model based on the characteristic parameters and the historical destination. Specifically, this step 420 may be performed by training module 240.
In some embodiments, the training module 240 may input the feature parameters and the historical destinations as sample data into a machine learning model for training, obtaining a trained point of interest recommendation model. In some embodiments, the machine learning model may include, but is not limited to, an FM model, a DNN model, a DeepFM model that fuses FM and DNN, and the like. Preferably, the machine learning model may be a DeepFM model that fuses FM and DNN. The DeepFM model contains two parts, a factorization machine part (the FM model) and a neural network part (the DNN model), which share the same input feature data. Training with the DeepFM model allows both wide (breadth) and deep (depth) learning to be taken into account during training, which improves the learning effect of the model, so that the resulting point of interest recommendation model predicts points of interest more accurately.
In some embodiments, the DeepFM model may be structured as shown in fig. 5. As shown in fig. 5, an exemplary structure of the DeepFM model may include a feature input layer (i.e., Sparse Features in fig. 5), an embedding layer (i.e., Dense Embeddings in fig. 5), an FM layer (i.e., FM Layer in fig. 5), a hidden layer (i.e., Hidden Layer in fig. 5), and an output layer (i.e., Output Units in fig. 5). In some embodiments, after the feature vectors are input into the DeepFM model, the embedding layer may compress the input feature vectors into low-dimensional dense vectors. For example, the high-dimensional sparse features may be compressed into five-dimensional or seven-dimensional dense features. The compressed feature vectors are then input into both the FM layer and the DNN part, i.e., the DeepFM model shares the same input data from the input layer and the embedding layer. Finally, the DeepFM model fuses the outputs of the FM and the DNN in a certain manner and takes the fused result as its final output. In some embodiments, the outputs of the DeepFM model may include the first-order and second-order outputs of the FM part, the high-order output of the DNN part, and the fused output.
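As a rough illustration of this shared-input architecture, the following NumPy sketch computes a DeepFM-style score for a few active sparse features. All dimensions and the randomly initialized weights are hypothetical; this is not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, embed_dim, hidden = 10, 4, 8  # hypothetical sizes

V = rng.normal(0, 0.1, (n_features, embed_dim))   # shared embedding table
w = rng.normal(0, 0.1, n_features)                # first-order weights
W1 = rng.normal(0, 0.1, (3 * embed_dim, hidden))  # DNN weights for 3 active fields
W2 = rng.normal(0, 0.1, hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deepfm_forward(active_ids, w1=1.0, w2=1.0, w3=1.0):
    """FM and DNN parts share the same embedded inputs; outputs are fused."""
    emb = V[active_ids]                       # (fields, embed_dim), shared input
    first_order = float(w[active_ids].sum())
    # FM second-order interaction: 0.5 * sum((sum v)^2 - sum v^2)
    s = emb.sum(axis=0)
    second_order = 0.5 * float((s**2 - (emb**2).sum(axis=0)).sum())
    # DNN part: one ReLU hidden layer over the concatenated embeddings.
    dnn_out = float(W2 @ np.maximum(emb.reshape(-1) @ W1, 0.0))
    return sigmoid(w1 * first_order + w2 * second_order + w3 * dnn_out)

p = deepfm_forward(np.array([1, 4, 7]))  # three active feature IDs
```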
In some embodiments, the fused output of the DeepFM model may be computed as:

fusion=sigmoid(w1×first_order_out+w2×second_order_out+w3×dnn_out);

wherein w1, w2, w3 are weight values, first_order_out is the first-order output of the FM model, second_order_out is the second-order output of the FM model, and dnn_out is the high-order output of the DNN model.
In some embodiments, w1, w2, and w3 may take different values to balance the output results of the model. In some embodiments, the value of w2 may be greater than the values of w1 and w3 to increase the weight of the second-order output in the DeepFM model.
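The fusion formula above amounts to a weighted sum squashed by a sigmoid. A direct sketch follows; the default weight values are illustrative (the text only requires that w2 exceed w1 and w3):

```python
import math

def fuse(first_order_out, second_order_out, dnn_out,
         w1=0.25, w2=0.5, w3=0.25):
    """fusion = sigmoid(w1*first + w2*second + w3*dnn).
    Default weights are assumptions; w2 is largest per the text."""
    z = w1 * first_order_out + w2 * second_order_out + w3 * dnn_out
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)
```

Because the sigmoid is monotonic, increasing any branch output (with positive weight) increases the fused probability.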
Step 430, determining a loss function, and optimizing the point of interest recommendation model based on the loss function. In some embodiments, step 430 may be performed by the training module 240.
In some embodiments, the training module 240 may determine the loss function based on the output of the DeepFM model. For example, the loss function may be fusion_loss, wherein the final output result of the model may be:

fusion=sigmoid(w1×first_order_out+w2×second_order_out+w3×dnn_out).
In some embodiments, the training module 240 may determine the loss function based on the first-order, second-order, and high-order outputs of the DeepFM model and the fused final output. For example, the loss function may be:

Loss=β1×lr_loss+β2×second_order_out_loss+β3×dnn_out_loss+β4×fusion_loss;
wherein β1, β2, β3, β4 are weight values. In some embodiments, lr_loss is the loss of a logistic regression (lr) model whose output may be sigmoid(θ^T×X), wherein X is a feature matrix composed of the feature parameters related to the destination, θ is the weight value corresponding to X, and θ^T is the transpose of θ. In some embodiments, X may be a matrix composed of all the feature parameters. In some embodiments, θ may be a weight value corresponding to each feature dimension, which may be continuously adjusted during model training. lr_loss is the result of the loss function calculation of the lr model. In some embodiments, second_order_out_loss is the loss function calculation for the second-order output of the FM model, dnn_out_loss is the loss function calculation for the high-order output of the DNN model, and fusion_loss is the loss function calculation for the fused output. In some embodiments, the weight values β1, β2, β3, β4 may be determined manually or automatically, e.g., β1=0.2, β2=0.8, β3=0.8, β4=0.2. In some embodiments, the weight values may also be determined experimentally. Applying a loss term to each of the three outputs of the model (first-order, second-order, and high-order) can improve the accuracy of the model output. In some embodiments, the loss function weight value of the second-order output of the FM model may be set to be the largest to promote the stability of the model.
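The weighted multi-branch loss described above can be sketched as follows. This is an illustrative reading, assuming each per-branch loss is a binary log loss on that branch's predicted probability; the function names and the per-branch probabilities are assumptions, while the β defaults follow the example values in the text:

```python
import numpy as np

def log_loss(y, p, eps=1e-12):
    """Binary cross-entropy, clipped for numerical safety."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def combined_loss(y, p_first, p_second, p_dnn, p_fusion,
                  b1=0.2, b2=0.8, b3=0.8, b4=0.2):
    """Loss = β1*lr_loss + β2*second_order_out_loss
            + β3*dnn_out_loss + β4*fusion_loss."""
    return (b1 * log_loss(y, p_first)
            + b2 * log_loss(y, p_second)
            + b3 * log_loss(y, p_dnn)
            + b4 * log_loss(y, p_fusion))
```

With this weighting, an improvement in the second-order or high-order branch (β2, β3 = 0.8) moves the total loss more than an equal improvement in the first-order or fused branch (β1, β4 = 0.2).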
It should be noted that the above description of the process 400 is for purposes of illustration and description only and is not intended to limit the scope of the present application. Various modifications and changes to flow 400 may be made by those skilled in the art in light of the teachings of the present application. However, such modifications and variations are still within the scope of the present application. For example, in some embodiments, steps 420 and 430 may be combined into one step.
FIG. 6 is an exemplary flow chart of a method for determining recommended destinations according to an application model shown in some embodiments of the present application. As shown in fig. 6, the method for determining a recommended destination by using the application model may include:
Step 610, obtaining order data of a current order, wherein the order data at least comprises the current time and the current position or departure place information; obtaining the user's historical orders in a certain time period; and extracting historical destinations and feature parameters related to the historical destinations from the historical orders. In some embodiments, step 610 may be performed by the acquisition module 210. Step 610 is similar to steps 310 and 320 and will not be described again here.
Step 620, determining candidate points of interest based on the historical destinations and the order data of the current order. In some embodiments, step 620 may be performed by the probability value determination module 220.
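The patent does not fix a concrete procedure for forming candidates from the historical destinations and the current order data; one plausible sketch is to deduplicate the user's historical destinations, keep those matching the current order's city, and rank them by visit frequency. The field names and the city filter here are assumptions for illustration only:

```python
from collections import Counter

def candidate_pois(historical_destinations, current_city):
    """Distinct historical destination POIs in the current order's city,
    most frequently visited first (city filter is an assumed heuristic)."""
    counts = Counter(d["poi"] for d in historical_destinations
                     if d.get("city") == current_city)
    return [poi for poi, _ in counts.most_common()]
```

Each returned POI would then be featurized and scored by the point of interest recommendation model in the following steps.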
Step 630, obtaining a point of interest recommendation model. In some embodiments, step 630 may be performed by the probability value determination module 220. In some embodiments, the probability value determination module 220 may obtain the point of interest recommendation model directly from the server 110, the network 120, or the storage device 130. In some embodiments, the point of interest recommendation model may be generated through training by the training module 240. In some embodiments, the training module 240 may generate the point of interest recommendation model by obtaining the user's historical orders over a second time period, extracting the feature parameters and the destinations from the historical orders, and training based on the feature parameters and the destinations. In some embodiments, the point of interest recommendation model may be a DeepFM model. For more details on training the point of interest recommendation model, see fig. 4 and the description thereof, which are not repeated here.
Step 640, determining a probability value for each candidate point of interest based on the recommendation model. In some embodiments, step 640 may be performed by the probability value determination module 220. In some embodiments, the candidate points of interest may be determined based on the historical destinations; the feature parameters of each candidate point of interest may be extracted and input into the point of interest recommendation model, and the probability value of each candidate point of interest may be output by the model.
Step 650, determining a recommended destination based on the probability values of the candidate points of interest and sending it to the user. In some embodiments, step 650 may be performed by the recommended destination determination module 230. In some embodiments, the top N candidate points of interest ranked by probability value may be determined as recommended destinations, where N is an integer greater than or equal to 1 (e.g., 1, 2, 3, 4, 5, etc.). In some embodiments, the candidate point of interest corresponding to the maximum probability value may be determined as the recommended destination. In some embodiments, the maximum probability value among the probability values may be acquired, and when the maximum probability value is greater than a preset threshold, the candidate point of interest corresponding to the maximum probability value is determined as the recommended destination. In some embodiments, the determined recommended destination may be sent to the user through the user terminal 120. In some embodiments, if the maximum probability value of the candidate points of interest is less than the preset threshold, no point of interest recommendation is performed. For example, if the preset threshold is 0.6 and the maximum probability value of the candidate points of interest is 0.3, all candidate points of interest fall below the preset threshold, and no point of interest recommendation is performed.
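The top-N and threshold logic described above can be sketched as a small function; the function name is illustrative, and the default threshold of 0.6 follows the example in the text:

```python
def recommend(candidates, probs, n=3, threshold=0.6):
    """Return the top-n candidates ranked by probability; return an empty
    list if even the best score is below the threshold, in which case no
    recommendation is made."""
    ranked = sorted(zip(candidates, probs), key=lambda t: -t[1])
    if not ranked or ranked[0][1] < threshold:
        return []
    return [c for c, p in ranked[:n]]
```

Note that the threshold is applied only to the maximum probability, as in the text: if the best candidate clears it, the full top-N list is returned.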
It should be noted that the above description of the process 600 is for purposes of illustration and description only and is not intended to limit the scope of the present application. Various modifications and changes to process 600 may be made by those skilled in the art in light of the teachings of the present application. However, such modifications and variations are still within the scope of the present application. For example, in some embodiments, if the probability values of the candidate points of interest determined in step 640 are all below the preset threshold, step 650 may be omitted: the determination of the recommended destination is not performed, and no point of interest recommendation is made to the user.
Possible beneficial effects of the embodiments of the present application include: (1) features related to recommended destinations can be obtained based on historical orders, so that user destinations can be accurately recommended, the operational burden on the user is reduced, and the user experience is improved; (2) a DeepFM fusion model is adopted to recommend the destination, which, relative to other models, combines the advantages of a breadth (wide) model and a depth (deep) model and improves the accuracy of the prediction result; and (3) a DeepFM fusion output method is established in which, when the loss function is constructed, the losses of the different model parts are calculated separately and then weighted and summed, so that both the low-order and high-order outputs of the model are taken into account. It should be noted that different embodiments may produce different advantages, and the advantages produced may be any one or a combination of the above, or any other advantages that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations of the application may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by the present disclosure and are therefore intended to be within the spirit and scope of the exemplary embodiments of the present disclosure.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is included in at least one embodiment of the application. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the application may be illustrated and described in a number of patentable categories or circumstances, including any novel and useful process, machine, product, or material, or any novel and useful modification thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
The computer program code necessary for operation of portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python, a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP, a dynamic programming language such as Python, Ruby, or Groovy, or other programming languages. The program code may execute entirely on the user's computer, or as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Furthermore, the order in which the elements and sequences are presented, the use of numbers or letters, or the use of other designations in the application is not intended to limit the sequence of the processes and methods unless specifically recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of example, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of the application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in order to simplify the description of the present disclosure and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure does not imply that the subject application requires more features than are set forth in the claims. Indeed, the claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of embodiments are modified in some examples by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a method of preserving the general number of digits. Although the numerical ranges and parameters set forth herein are approximations in some embodiments, in particular embodiments the numerical values are set forth as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited herein is hereby incorporated by reference in its entirety, except for any application history file that is inconsistent with or conflicts with this disclosure, and except for any document (currently or later attached to this disclosure) that would limit the broadest scope of the claims of this disclosure. It should be noted that if there is any inconsistency or conflict between the description, definition, and/or use of a term in the materials accompanying this application and the description, definition, and/or use of that term in this application, the description, definition, and/or use in this application controls.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the application. Thus, by way of example, and not limitation, alternative configurations of embodiments of the application may be considered in keeping with the teachings of the application. Accordingly, the embodiments of the present application are not limited to the embodiments explicitly described and depicted herein.
Claims (14)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910486831.7A CN111859169B (en) | 2019-06-05 | 2019-06-05 | Destination recommendation method and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910486831.7A CN111859169B (en) | 2019-06-05 | 2019-06-05 | Destination recommendation method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111859169A CN111859169A (en) | 2020-10-30 |
| CN111859169B true CN111859169B (en) | 2024-12-17 |
Family
ID=72965986
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910486831.7A Active CN111859169B (en) | 2019-06-05 | 2019-06-05 | Destination recommendation method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111859169B (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112632414A (en) * | 2020-12-30 | 2021-04-09 | 北京嘀嘀无限科技发展有限公司 | Method, apparatus, device, medium and program product for determining candidate get-off location |
| CN113780923A (en) * | 2021-01-20 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Delivery method, device, electronic equipment and medium |
| CN112926796B (en) * | 2021-03-22 | 2023-12-22 | 广州宸祺出行科技有限公司 | Get-off point recommendation method and device based on specific scene |
| CN114608597B (en) * | 2022-03-21 | 2025-08-19 | 合众新能源汽车股份有限公司 | Blind box navigation method and device thereof |
| CN114742403A (en) * | 2022-04-08 | 2022-07-12 | 携程商旅信息服务(上海)有限公司 | Journey determination method, system, device and medium |
| CN114780869A (en) * | 2022-04-11 | 2022-07-22 | 北京百度网讯科技有限公司 | Riding point recommendation method and device, electronic equipment and medium |
| CN114662020B (en) * | 2022-04-29 | 2025-08-26 | 深圳依时货拉拉科技有限公司 | Method and device for recommending points of interest based on point of interest contours |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108989397A (en) * | 2018-06-26 | 2018-12-11 | 腾讯音乐娱乐科技(深圳)有限公司 | Data recommendation method, device and storage medium |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2018528535A (en) * | 2015-08-20 | 2018-09-27 | ベイジン ディディ インフィニティ テクノロジー アンド ディベロップメント カンパニー リミティッド | System and method for determining information related to a current order based on past orders |
| CN107402931A (en) * | 2016-05-19 | 2017-11-28 | 滴滴(中国)科技有限公司 | Recommend method and apparatus to a kind of trip purpose |
| CN105912685B (en) * | 2016-04-15 | 2019-08-23 | 上海交通大学 | Based on cross-cutting air ticket personalized recommendation system and recommended method |
-
2019
- 2019-06-05 CN CN201910486831.7A patent/CN111859169B/en active Active
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108989397A (en) * | 2018-06-26 | 2018-12-11 | 腾讯音乐娱乐科技(深圳)有限公司 | Data recommendation method, device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111859169A (en) | 2020-10-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111859169B (en) | Destination recommendation method and system | |
| CN112236787B (en) | System and method for generating personalized destination recommendations | |
| CN109074622B (en) | System and method for routing transportation services | |
| TWI670677B (en) | System and method for recommending estimated arrival time | |
| TWI676783B (en) | Method and system for estimating time of arrival | |
| CN111858786B (en) | System and method for providing time-of-flight confidence in path planning | |
| CN112036645B (en) | System and method for determining an estimated time of arrival | |
| US20200300650A1 (en) | Systems and methods for determining an estimated time of arrival for online to offline services | |
| TW201901474A (en) | System and method for determining estimated arrival time | |
| CN110782648B (en) | System and method for determining estimated time of arrival | |
| TW201903622A (en) | Method and system for path planning | |
| CN112243487B (en) | System and method for on-demand services | |
| JP6632723B2 (en) | System and method for updating a sequence of services | |
| CN110800001B (en) | Systems and methods for data storage and data query | |
| CN111159317A (en) | System and method for determining path topology | |
| Mohamed et al. | A context-aware recommender system for personalized places in mobile applications | |
| CN111859175B (en) | Method and system for recommending boarding point | |
| US11112263B2 (en) | Inventory quantity prediction for geospatial ads with trigger parameters | |
| CN111563639A (en) | Order distribution method and system | |
| CN114579844A (en) | User portrait generation method, device, medium and equipment based on driving behaviors | |
| CN111859115A (en) | User allocation method and system, data processing equipment and user allocation equipment | |
| WO2021022487A1 (en) | Systems and methods for determining an estimated time of arrival | |
| Kulakov et al. | Ontological model of multi-source smart space content for use in cultural heritage trip planning | |
| CN111612198B (en) | Method and device for predicting success rate of spelling and electronic equipment | |
| CN111275232A (en) | Method and system for generating future value prediction models |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| TG01 | Patent term adjustment | ||
| TG01 | Patent term adjustment |