
CN118945815B - Service scene identification method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN118945815B
CN118945815B (Application No. CN202411180585.XA)
Authority
CN
China
Prior art keywords
scene
base station
fence
wifi
snapshot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411180585.XA
Other languages
Chinese (zh)
Other versions
CN118945815A
Inventor
刘兴宇 (Liu Xingyu)
陈志辉 (Chen Zhihui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202411180585.XA priority Critical patent/CN118945815B/en
Publication of CN118945815A publication Critical patent/CN118945815A/en
Application granted granted Critical
Publication of CN118945815B publication Critical patent/CN118945815B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/322 Aspects of commerce using mobile devices [M-devices]
    • G06Q20/3224 Transactions dependent on location of M-devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W52/00 Power management, e.g. Transmission Power Control [TPC] or power classes
    • H04W52/02 Power saving arrangements
    • H04W52/0203 Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W52/0206 Power saving arrangements in the radio access network or backbone network of wireless communication networks in access points, e.g. base stations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract


The present application provides a service scene recognition method, an electronic device and a storage medium, relating to the field of terminal technology. The method includes: on monitoring a scene recognition request of a target service, obtaining the base station information of the base station currently accessed by the electronic device and the scene recognition accuracy of the target service (here, high accuracy); detecting that the base station information matches the operator identifier, cell number and base station number in one base station fence snapshot; determining the target scene fence snapshot according to the scene fence identifier in the matching base station fence snapshot; determining the fence area corresponding to the target scene fence snapshot; obtaining the current position of the electronic device; on detecting that the current position is within the fence area, obtaining the WiFi list of the electronic device; and on detecting that the WiFi list matches the WiFi features, determining that the electronic device is located in the target scene. The method achieves low-power service scene recognition and effectively improves its real-time performance.

Description

Service scene identification method, electronic equipment and storage medium
This application is a divisional application of the Chinese patent application filed with the China National Intellectual Property Administration on 26 October 2022, with application number 202211320411.X and the title "Service scene identification method, electronic device and storage medium".
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a service scene identification method, an electronic device, and a storage medium.
Background
In daily life, in scenes such as companies, canteens, movie theaters, shops, train stations, airports, schools, hospitals and scenic spots, users often need their electronic devices to continuously perform scene recognition, determining the relationship between the device's location and a scene so that shortcut services can be offered. For example, when an electronic device (such as a mobile phone) recognizes that the user has entered a canteen, a payment-code shortcut card automatically pops up on the phone's interface, so the user can quickly complete payment with it when buying food.
In the related art, when a service triggers scene recognition, the electronic device locates its current position and sends the position information to a cloud service platform. The cloud service platform queries whether the corresponding position lies within the target scene and feeds the result back to the electronic device.
However, the electronic device must query the cloud service platform for every recognition, which results in high power consumption and poor real-time performance.
Disclosure of Invention
The present application provides a service scene identification method, an electronic device and a storage medium. The method achieves low-power service scene identification and effectively improves its real-time performance.
In a first aspect, the present application provides a service scene identification method executed by an electronic device. The method includes: when a scene identification request of a target service is monitored, acquiring the base station information of the base station currently accessed by the electronic device, where the base station information includes a target operator identifier, a target cell number and a target base station number, and the scene identification request requests identification of whether the electronic device is located in a target scene associated with the target service; and performing scene identification based on scene feature data and the base station information to determine whether the electronic device is located in the target scene, where the scene feature data includes at least one base station fence snapshot, and a base station fence snapshot includes an associated operator identifier, cell number and base station number.
When the scene identification accuracy is high accuracy and the base station information is detected to match the operator identifier, cell number and base station number in one base station fence snapshot (which also contains a scene fence identifier), the target scene fence snapshot is determined according to the scene fence identifier in the matching base station fence snapshot; the target scene fence snapshot contains WiFi features. The fence area corresponding to the target scene fence snapshot is determined, and the current position of the electronic device is acquired. If the current position is detected to be within the fence area, the WiFi list of the electronic device is acquired; if the WiFi list is detected to match the WiFi features, the electronic device is determined to be located in the target scene.
Optionally, the target scene fence snapshot includes longitude and latitude information of a scene fence center point and a scene fence radius, and the fence area is determined according to the longitude and latitude information of the scene fence center point and the scene fence radius in the target scene fence snapshot.
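As a sketch of how a device might test whether its current position falls inside such a fence area, the following uses the standard haversine great-circle distance between the current position and the fence center point. The function names and the choice of haversine are illustrative assumptions; the patent does not specify the distance computation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_fence(cur_lat: float, cur_lon: float,
             center_lat: float, center_lon: float, radius_m: float) -> bool:
    """True when the current position lies within the fence area defined by
    the scene fence center point and scene fence radius."""
    return haversine_m(cur_lat, cur_lon, center_lat, center_lon) <= radius_m
```

A position roughly 11 m from a fence center would thus pass a 200 m fence check, while one a kilometre away would not.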
Optionally, the base station information of the base station currently accessed by the electronic device may be acquired by means of Cell-ID positioning.
Optionally, the target service may be a regular payment service, a ticket-taking service, a two-dimensional-code service, a ticket-purchasing service, or the like.
Optionally, the operator identifier, the cell number, and the base station number with the association relationship are used to uniquely identify one base station.
Optionally, the base station fence snapshot may further include latitude and longitude information of a base station fence center point and a base station fence radius.
According to the service scene recognition method provided in the first aspect, when a scene recognition request of a target service is monitored, the base station information of the base station currently accessed by the electronic device is obtained, and scene recognition is performed based on the scene feature data and the base station information to determine whether the electronic device is located in the target scene. The scene feature data includes at least one base station fence snapshot containing an associated operator identifier, cell number and base station number; the base station information includes a target operator identifier, a target cell number and a target base station number. Scene recognition therefore amounts to comparing the operator identifier, cell number and base station number in each base station fence snapshot with the target operator identifier, target cell number and target base station number in the base station information: whether the electronic device is located in the target scene is determined through this comparison. Because the recognition relies on base station information, the electronic device does not need to request its position from the cloud service platform, and base station positioning consumes less power than the GPS positioning adopted in the related art. Since no comparison against data in the cloud service platform is needed, the power consumption of the electronic device is reduced and real-time performance is improved.
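The triple comparison described above can be sketched in Python. The dataclass, field names and example values here are illustrative assumptions, not the patent's actual data format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class BaseStationFenceSnapshot:
    # Field names are illustrative, not the patent's data format.
    operator_id: str                      # operator identifier
    cell_number: str                      # cell number
    base_station_number: str              # base station number
    scene_fence_id: Optional[str] = None  # present for medium/high accuracy

def find_matching_snapshot(
    snapshots: List[BaseStationFenceSnapshot],
    operator_id: str,
    cell_number: str,
    base_station_number: str,
) -> Optional[BaseStationFenceSnapshot]:
    """Return the snapshot whose three identifiers all match the device's
    current base station info; since the triple uniquely identifies one
    base station, the first hit suffices."""
    for snap in snapshots:
        if (snap.operator_id == operator_id
                and snap.cell_number == cell_number
                and snap.base_station_number == base_station_number):
            return snap
    return None  # no match: the device is not in any known scene fence
```

For low-accuracy services a non-`None` result alone would already identify the target scene; for medium and high accuracy the returned `scene_fence_id` would select the target scene fence snapshot for further checks.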
For the service with high scene recognition precision, when the base station information of the electronic equipment is matched with a certain base station fence snapshot in the base station fence snapshot set, the current position of the electronic equipment is in a fence area corresponding to the certain scene fence snapshot (the scene fence snapshot determined by the scene fence identification in the base station fence snapshot), and the WiFi list is matched with the WiFi features in the scene fence snapshot, the electronic equipment can be recognized to enter a service scene with high scene recognition precision, and the real-time performance is high. Based on the method, the shortcut card can be immediately recommended to the user, the intelligence of the electronic equipment is improved, the use requirement of the user is met, and the user experience is improved.
In the high-precision business scene recognition process, on one hand, cell-ID positioning is adopted in the early stage (namely, cell-ID positioning is adopted before the scene fence snapshot is matched), GPS positioning and scanning WiFi are adopted in the later stage, and power consumption is saved while the recognition precision is kept. On the other hand, the cloud service platform is not required to be requested in the service scene identification process, and the power consumption is also saved.
Optionally, the scene feature data may be obtained by the electronic device sending a feature acquisition request to the cloud service platform, which returns the scene feature data in response. The electronic device caches the scene feature data in a local database in advance.
In one possible implementation, when the scene recognition accuracy of the target service is detected to be low accuracy, performing scene recognition based on the scene feature data and the base station information to determine whether the electronic device is located in the target scene includes: if the base station information matches the operator identifier, cell number and base station number in one base station fence snapshot, determining that the electronic device is located in the target scene.
In the implementation manner, for the service with low scene recognition precision, as long as the base station information of the electronic equipment is matched with a certain base station fence snapshot in the base station fence snapshot set, the electronic equipment can be recognized to enter the service scene with low scene recognition precision, and the real-time performance is high. Based on the method, the shortcut card can be immediately recommended to the user, the intelligence of the electronic equipment is improved, the use requirement of the user is met, and the user experience is improved.
In the low-precision business scene recognition process, on one hand, the power consumption of the electronic equipment for positioning by adopting the Cell-ID is smaller than that of GPS, so that the mode of matching the base station information of the electronic equipment with the base station fence snapshot is adopted to recognize whether the electronic equipment enters a business scene with low scene recognition precision, and the power consumption is saved. On the other hand, the cloud service platform is not required to be requested in the service scene identification process, and the power consumption is also saved.
Optionally, if no base station fence snapshot matching the base station information is found in the scene feature data, it is determined that the electronic device is not currently located within the target scene associated with the target service. That is, if no base station fence snapshot in the scene feature data has an operator identifier, cell number and base station number identical to the target operator identifier, target cell number and target base station number in the base station information, the electronic device is determined not to be in the target scene.
In a possible implementation, when the scene recognition accuracy of the target service is detected to be low accuracy, performing scene recognition based on the scene feature data and the base station information further includes: if the base station information does not match the operator identifier, cell number and base station number in any base station fence snapshot, determining a target area according to the target service, where the target area is the area corresponding to a destination and the destination is the place where the target service is implemented; acquiring the current position of the electronic device; and if the current position is detected to be within the target area, determining that the electronic device is located in the target scene.
In this implementation, when the base station information of the electronic device does not yet match any base station fence snapshot, whether the electronic device is within the target area can still be judged from its current position, so that entry into a service scene with low recognition accuracy is recognized; this provides support for cold-starting low-accuracy service scene recognition.
Optionally, the base station fence snapshot may also include a scene fence identification. The scene characteristic data may further include at least one scene fence snapshot including latitude and longitude information of a scene fence center point having an association relationship and a scene fence radius.
In a possible implementation, when the scene recognition accuracy of the target service is detected to be medium accuracy, performing scene recognition based on the scene feature data and the base station information includes: if the base station information matches the operator identifier, cell number and base station number in one base station fence snapshot, determining the target scene fence snapshot according to the scene fence identifier in that base station fence snapshot; determining the fence area according to the longitude and latitude of the scene fence center point and the scene fence radius in the target scene fence snapshot; acquiring the current position of the electronic device; and if the current position is detected to be within the fence area, determining that the electronic device is located in the target scene.
In the implementation manner, for the service with the scene recognition precision of medium precision, when the base station information of the electronic equipment is matched with a certain base station fence snapshot, and the current position of the electronic equipment is in a fence area corresponding to the certain scene fence snapshot (the scene fence snapshot determined by the scene fence mark in the base station fence snapshot), the electronic equipment can be recognized to enter the service scene with the scene recognition precision of medium precision, and the real-time performance is high. Based on the method, the shortcut card can be immediately recommended to the user, the intelligence of the electronic equipment is improved, the use requirement of the user is met, and the user experience is improved. And when the base station information of the electronic equipment is matched with a certain base station fence snapshot in the base station fence snapshot set, detecting whether the current position of the electronic equipment is in a fence area corresponding to the certain scene fence snapshot, namely starting middle-precision business scene recognition, and effectively avoiding the power consumption waste of middle-precision business scene recognition.
In the medium-accuracy service scene recognition process, on one hand, Cell-ID positioning is adopted in the early stage (namely, before the scene fence snapshot is matched) and no GPS positioning is required at any point, which saves power. On the other hand, no request to the cloud service platform is needed during recognition, which also saves power.
Optionally, the current position of the electronic device may be obtained through a global navigation satellite system; this ensures the accuracy of the obtained position and thereby improves the accuracy of service scene recognition.
Optionally, the WiFi list includes at least one WiFi identification information and a WiFi intensity corresponding to each WiFi identification information.
In a possible implementation, determining that the electronic device is located in the target scene when the WiFi list is detected to match the WiFi features of the target scene fence snapshot includes: determining the matching degree between the WiFi list and the WiFi features; and determining that the electronic device is located in the target scene if the WiFi identification information in the WiFi list matches the WiFi identification information list in the WiFi features and the matching degree is greater than or equal to a target matching degree threshold.
Optionally, determining that the electronic device is located in the target scene may further include: determining that the electronic device is located in the target scene if both the WiFi identification information in the WiFi list and the WiFi intensity corresponding to each WiFi identification match the WiFi identification information list and the corresponding WiFi intensities in the WiFi features.
This implementation improves the accuracy of the WiFi list/WiFi feature matching result and thereby the accuracy of service scene identification.
Optionally, the WiFi list may be acquired using a WiFi lift technique, i.e., by collecting WiFi scan results already generated by the system or a third-party application; such a scan result may include the WiFi list.
In a possible implementation, the service scene identification method provided by the application further includes: if the WiFi list does not match the WiFi features of the target scene fence snapshot, acquiring the WiFi list generated by a third-party application; and if that WiFi list matches the WiFi features of the target scene fence snapshot, determining that the electronic device is located in the target scene.
In the implementation mode, when the WiFi list is not matched with the WiFi characteristics, the WiFi-related data is acquired by utilizing the WiFi lift-up technology, and WiFi scanning is not required to be carried out independently, so that the power consumption is effectively saved.
In a possible implementation manner, when the scene recognition precision of the detected target service is high precision, scene recognition is performed based on the scene feature data and the base station information to determine whether the electronic device is located in the target scene, and if the detected WiFi list of the electronic device is matched with the WiFi feature in any scene fence snapshot, determining that the electronic device is located in the target scene may be further included.
In the implementation mode, when the scene recognition precision of the target service is high, if the WiFi list is detected to be matched with a certain WiFi feature, the electronic equipment is immediately recognized to be located in the target scene, the real-time performance of the service scene recognition is improved, and the power consumption required by early-stage positioning is saved.
In a possible implementation, the service scene recognition method provided by the application further includes: obtaining the motion state of the user carrying the electronic device; predicting the user's first motion speed from the motion state; determining, according to the target service, the destination where the target service is implemented; determining the first distance between the user's current position and the destination; and predicting the time of the next positioning from the first motion speed and the first distance.
Optionally, the current motion state of the user may include a walking state, a running state, a fast-walking state, an in-vehicle state, etc.
In the implementation mode, the motion speed is estimated according to the current motion state of the user, and then the time of next positioning is predicted according to the motion speed and the distance between the user and the destination.
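The prediction described above reduces to time = distance / speed. A sketch follows; the per-state speeds and the default are illustrative assumptions, since the patent gives no concrete values.

```python
# Illustrative speeds per motion state, in metres per second (assumed values).
SPEED_BY_STATE_MPS = {
    "walking": 1.4,
    "fast_walking": 2.0,
    "running": 3.0,
    "in_vehicle": 11.0,
}

def next_positioning_delay_s(motion_state: str,
                             distance_to_destination_m: float) -> float:
    """Predict how long until the next positioning is needed: the time the
    user would take to cover the remaining distance at the estimated speed."""
    speed = SPEED_BY_STATE_MPS.get(motion_state, 1.4)  # default to walking
    return distance_to_destination_m / speed
```

As the user travels and new base station fence snapshots match, the same formula can be re-evaluated with the updated distance and speed to refresh the next positioning time, as the following implementation describes.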
In a possible implementation, the method further includes: if, while the electronic device is moving, the base station information of the currently accessed base station is detected to match a target base station fence snapshot, determining the user's position from the longitude and latitude of the base station fence center point in that snapshot; determining a second distance from the user's position and the destination; determining the user's second motion speed; and updating the time of the next positioning from the second motion speed and the second distance.
In this implementation, the time of the next positioning is continuously refreshed from the matched base station fence snapshots as the user travels, with no GPS positioning required throughout, which maintains recognition accuracy while greatly reducing the number of positioning operations and the power consumed. Moreover, as scene feature learning accumulates and the scene feature data is perfected, fewer and fewer GPS positioning operations are needed wherever the user later goes, so the overall power saved by service scene recognition grows ever larger.
In a possible implementation, the method may further include: while the electronic device is in the target scene, acquiring its latest position and determining from it whether the device has left the target scene. This effectively avoids false recognition and improves the accuracy of service scene recognition: whether the electronic device has really left the target scene associated with the target service is judged precisely from the latest position, giving the user a better experience.
In a possible implementation manner, the business scene recognition method provided by the application can further comprise stopping positioning and/or stopping scanning WiFi when the electronic equipment is detected to be static, so that power consumption can be effectively saved.
In a possible implementation manner, the service scene identification method provided by the application can further comprise stopping positioning and/or stopping scanning WiFi when the moving range of the electronic equipment is detected to be smaller than or equal to the preset range, so that the power consumption can be effectively saved.
Optionally, the application also provides a WiFi chip that can be installed in the electronic device. The scanning method adopted by this WiFi chip differs from that of WiFi chips in the related art, so its scanning power consumption is much lower than that of existing WiFi chips; performing service scene recognition based on this chip therefore greatly reduces scanning power consumption.
Optionally, the application further provides a method for collecting scene crowdsourcing data: a first application in the electronic device executes a first service; a perception module in the electronic device acquires the service data of the first service and collects the device's current environment data; the perception module then reports a collection data set, comprising the environment data and the service data, to the cloud service platform, so that the platform can perform cloud computing and learn scene features from the data.
Optionally, the application further provides a method for learning scene features, generally executed by the cloud service platform, including: constructing a grid map based on earth surface space data, mapping scene crowdsourcing data into the grid map, determining a scene fence snapshot corresponding to each service, and determining a base station fence snapshot of each base station. This method provides the basis for subsequent service scene identification.
Optionally, the scene crowd-sourced data may include a plurality of scene collection data sets, each scene collection data set may include service data and environment data collected by the electronic device when implementing a corresponding service, each scene collection data set corresponds to a service type, and each scene collection data set includes longitude and latitude information.
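To make the composition of a scene collection data set concrete, the following sketch models one data set as a record carrying service type, city code, and longitude/latitude fields; all field names and values here are hypothetical illustrations, not taken from the application:

```python
from dataclasses import dataclass, asdict, field

@dataclass
class SceneCollectionDataSet:
    service_type: str        # e.g. "payment_code" (hypothetical label)
    city_code: str           # code of the city where the sample was collected
    latitude: float          # latitude reported by positioning
    longitude: float         # longitude reported by positioning
    service_data: dict = field(default_factory=dict)      # data of the executed service
    environment_data: dict = field(default_factory=dict)  # e.g. base station / WiFi observations

# One collected sample, serialized into a plain dict suitable for reporting
# to a cloud service platform.
sample = SceneCollectionDataSet("payment_code", "CC-001", 39.915, 116.404)
payload = asdict(sample)
```

The `asdict` form is what a sensing module might upload; the cloud side can then map each record into the grid map by its latitude and longitude.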
Optionally, determining the scene fence snapshot corresponding to each service may include: determining points belonging to the same attribute in the grid map; clustering the points of the same attribute to obtain a first clustering result, where the first clustering result includes at least one cluster; determining longitude and latitude information of the scene fence center point according to the first clustering result; determining the scene fence radius according to the first clustering result; and generating the scene fence snapshot based on the longitude and latitude information of the scene fence center point, the scene fence radius, the service type information, and the city code corresponding to the service type information. This method provides the basis for subsequent service scene identification.
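One plausible way to derive the fence parameters from a single cluster is to take the cluster centroid as the center and the maximum point-to-center distance as the radius; this is an assumed formulation for illustration, not necessarily the exact computation used:

```python
import math

def _dist_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def scene_fence_snapshot(cluster, service_type, city_code):
    """cluster: [(lat, lon), ...] points sharing the same attribute.

    Returns a snapshot dict with fence center, radius, service type,
    and city code (field names are hypothetical)."""
    lat_c = sum(p[0] for p in cluster) / len(cluster)
    lon_c = sum(p[1] for p in cluster) / len(cluster)
    radius = max(_dist_m(lat, lon, lat_c, lon_c) for lat, lon in cluster)
    return {"center": (lat_c, lon_c), "radius_m": radius,
            "service_type": service_type, "city_code": city_code}
```

For two equatorial points 0.002 degrees of longitude apart, the center falls midway and the radius is on the order of 111 m.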
Optionally, the points belonging to the same attribute are points corresponding to the scene acquisition data set belonging to the same attribute, and the same attribute refers to the same service type information and the same city code.
Optionally, determining the base station fence snapshot of each base station may include: determining points belonging to the same base station in the grid map; clustering the points of the same base station to obtain a second clustering result, where the second clustering result includes at least one cluster; determining longitude and latitude information of the base station fence center point according to the second clustering result; determining the base station fence radius according to the second clustering result; and generating the base station fence snapshot based on the longitude and latitude information of the base station fence center point, the base station fence radius, the base station indication information, and the city code corresponding to the base station. This method provides the basis for subsequent service scene identification.
Optionally, the points belonging to the same base station are points corresponding to the scene collection data sets belonging to the same base station, where "the same base station" means that the base station indication information is identical, i.e., the operator identifier, the cell number, and the base station number are all the same.
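Grouping collected points by identical base station indication information can be sketched as a dictionary keyed by the (operator identifier, cell number, base station number) triple; the record field names below are hypothetical:

```python
from collections import defaultdict

def group_by_base_station(records):
    """records: dicts carrying base station indication info plus a position.

    Returns {(operator_id, cell_no, bs_no): [(lat, lon), ...]}."""
    groups = defaultdict(list)
    for rec in records:
        key = (rec["operator_id"], rec["cell_no"], rec["bs_no"])
        groups[key].append((rec["lat"], rec["lon"]))
    return dict(groups)

records = [
    {"operator_id": "op1", "cell_no": 7, "bs_no": 42, "lat": 39.90, "lon": 116.40},
    {"operator_id": "op1", "cell_no": 7, "bs_no": 42, "lat": 39.91, "lon": 116.41},
    {"operator_id": "op2", "cell_no": 3, "bs_no": 99, "lat": 31.23, "lon": 121.47},
]
groups = group_by_base_station(records)
```

Each group's point list could then be clustered to produce one or more base station fences for that base station.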
In a second aspect, the present application provides an apparatus included in an electronic device, the apparatus having a function of implementing the behavior of the electronic device in the above first aspect and its possible implementation manners. The function may be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above function, for example, a listening module or unit, a processing module or unit, etc.
In a third aspect, the application provides an electronic device comprising a processor, a memory, and an interface, where the processor, the memory, and the interface cooperate with each other so that the electronic device performs any one of the methods provided in the first aspect.
In a fourth aspect, the present application provides a chip comprising a processor. The processor is configured to read and execute a computer program stored in the memory to perform the method of the first aspect and any possible implementation thereof.
Optionally, the chip further comprises a memory, and the memory is connected with the processor through a circuit or a wire.
Optionally, the chip further comprises a communication interface.
Optionally, the WiFi chip provided by the application may be integrated into the chip.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, which when executed by a processor causes the processor to perform any one of the methods of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising computer program code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the first aspect.
Drawings
FIG. 1 is a schematic diagram of an application scenario according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a process for opening a payment code according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario according to another exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a process of opening a two-dimensional code according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of an application scenario according to yet another exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a system architecture according to an exemplary embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application;
FIG. 8 is a block diagram of the software architecture of an electronic device according to an exemplary embodiment of the present application;
FIG. 9 is a flowchart illustrating an electronic device collecting scene crowdsourcing data according to an exemplary embodiment of the present application;
FIG. 10 is a flowchart of a method of learning scene features according to an exemplary embodiment of the present application;
FIG. 11 is a schematic diagram of a grid map according to an exemplary embodiment of the present application;
FIG. 12 is a schematic diagram of another grid map according to an exemplary embodiment of the present application;
FIG. 13 is a schematic diagram of data distribution according to an exemplary embodiment of the present application;
FIG. 14 is a schematic diagram of caching scene features according to an exemplary embodiment of the present application;
FIG. 15 is a flowchart of a service scene identification method according to an exemplary embodiment of the present application;
FIG. 16 is a flowchart of a method of predicting the time of the next positioning according to an exemplary embodiment of the present application;
FIG. 17 is a schematic diagram of an application scenario of predicted time according to an exemplary embodiment of the present application;
FIG. 18 is a schematic diagram of another application scenario of predicted time according to an exemplary embodiment of the present application;
FIG. 19 is a schematic diagram of switching positioning modes according to an exemplary embodiment of the present application;
FIG. 20 is a schematic diagram of changes in power consumption and real-time performance according to an exemplary embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent three cases: A exists alone, both A and B exist, and B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
First, some terms in the embodiments of the present application are explained for easy understanding by those skilled in the art.
1. Coarse positioning service, also known as Cell-ID positioning or base station positioning. Cell-ID positioning determines the location of an electronic device (e.g., a mobile phone) based on the location of the cellular base station to which it is currently connected. While the user carries and moves with the mobile phone, the user's position is almost identical to the phone's position, so the determined position of the mobile phone is usually also taken as the position of the user.
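Conceptually, a Cell-ID fix can be sketched as a table lookup from the currently connected base station to a previously learned coarse position; the table contents and names below are invented for illustration:

```python
# Hypothetical table: base station indication info -> learned coarse position.
# In practice this role is played by the base station fence snapshots.
BS_TABLE = {
    ("op1", 7, 42): {"lat": 39.91, "lon": 116.40, "radius_m": 800.0},
}

def cell_id_locate(operator_id, cell_no, bs_no):
    """Return (lat, lon, radius_m) for the connected base station, or None."""
    entry = BS_TABLE.get((operator_id, cell_no, bs_no))
    if entry is None:
        return None
    return entry["lat"], entry["lon"], entry["radius_m"]
```

The radius expresses that a Cell-ID fix is only accurate to the coverage area of the serving base station, which is why it is called coarse positioning.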
2. Point of interest (Point of Interest, POI): a POI contains at least four pieces of basic information, namely name, address, category, and longitude-latitude coordinates. For example, a POI may represent a house, a shop, a community gate, a mailbox, a bus stop, etc.
3. An Area Of Interest (AOI), also called an information plane, is mainly used to represent regional geographic entities in map data. For example, an AOI may represent a residential community, a university, an office building, a complex, a scenic spot, a train station, etc.
4. Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised clustering algorithm. The algorithm divides regions of sufficient density into clusters, where a cluster is defined as the largest set of density-connected points.
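As a minimal plain-Python sketch of the DBSCAN idea (a toy for illustration, not a production implementation), points with at least `min_pts` neighbors within radius `eps` seed clusters that are grown through density-connected neighbors, and sparse points are labeled as noise (-1):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point; -1 marks noise."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j in range(n)
                if math.hypot(points[j][0] - xi, points[j][1] - yi) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # not a core point: provisional noise
            continue
        cluster += 1                # start a new cluster from this core point
        labels[i] = cluster
        seeds = list(nbrs)
        k = 0
        while k < len(seeds):       # expand the cluster breadth-first
            j = seeds[k]
            k += 1
            if labels[j] == -1:     # noise reachable from a core point -> border
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:  # core point: keep expanding through it
                seeds.extend(jn)
    return labels
```

Two tight groups of points far apart form two clusters, while an isolated point is labeled -1; this matches how dense grid-map points would be grouped into fence clusters.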
5. Morton coding is an algorithm for encoding grid cells.
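A common form of Morton coding interleaves the bits of a cell's x and y grid indices into a single code, so that nearby cells tend to receive nearby codes; a small sketch (a generic illustration of the technique, not necessarily the exact variant used):

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of grid indices x and y into one Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
    return code

def morton_decode(code, bits=16):
    """Recover (x, y) grid indices from a Morton code."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y
```

For example, the cell (x=3, y=5) encodes to 39 (binary 100111, the interleaving of 011 and 101), and decoding 39 recovers (3, 5).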
6. The Global Positioning System (GPS) is a satellite-based high-precision radio navigation and positioning system that provides accurate geographic location, speed, and precise time information anywhere in the world and in near-Earth space.
The foregoing is a simplified description of the terminology involved in the embodiments of the present application, and is not described in detail below.
With the rapid development of electronic devices, users place increasingly high demands on them. In some scenarios, a user may wish the electronic device to automatically implement certain shortcut services through scene recognition. Taking a mobile phone as an example: when the user enters an airport, the user expects the mobile phone to automatically pop up a prompt card in the interface with information such as the waiting hall, flight number, and airline; when the user enters a hospital, the user expects the mobile phone to automatically present a two-dimensional code in the interface; and when the user enters a company canteen, the user expects the mobile phone to automatically pop up a payment code in the interface, and so on.
To enable scene recognition, it is often necessary for the electronic device to determine the associated service as well as its own location. For example, when a service pre-associated with the electronic device triggers the electronic device to perform scene recognition, the electronic device locates current position information and then sends the position information to the cloud service platform. The cloud service platform is used for inquiring whether the corresponding position is located in the target scene or not according to the received position information, and feeding back the result to the electronic equipment.
In this implementation manner, the electronic device needs to send a request to the cloud service platform every time, resulting in high power consumption and poor real-time performance. Therefore, an embodiment of the application provides a service scene identification method that achieves low-power-consumption service scene identification and effectively improves the real-time performance of service scene identification.
Before describing the service scene recognition method provided by the embodiment of the application in detail, several possible application scenes (such as scenes of a company, a canteen, a cinema, a market, a train station, an airport, a school, a hospital, a scenic spot and the like) related to the embodiment of the application are described. In the embodiment of the application, the electronic equipment is a mobile phone as an example, and several possible application scenarios are described.
In an example, referring to fig. 1, fig. 1 is a schematic diagram of an application scenario according to an exemplary embodiment of the present application. For example, the cell phone used by user A supports regular payment services, and the user A goes to the corporate canteen for dining between 11:45 and 12:10 on each weekday. As shown in FIG. 1, user A moves with the handset from S1 to the corporate canteen at 11:40 AM on a certain workday. In the process that the user A carries the mobile phone to move, the mobile phone starts to perform scene recognition in a bright screen state so as to determine whether the user A enters a company canteen. After that, when the user a enters the area where the company canteen is located, for example, the user a moves to the S2 position shown in fig. 1 (such as the entrance of the company canteen), at this time, the mobile phone displays the card with the payment code in the main interface, so that the user a can use the payment code to pay quickly when buying food.
For ease of understanding, the process of displaying the payment code will be described in connection with the main interface of the handset. Referring to fig. 2, fig. 2 is a schematic diagram illustrating a process of opening a payment code according to an exemplary embodiment of the present application. For example, when the user A is at a company station or on the way to the company canteen, the main interface of the mobile phone is as shown in (a) in fig. 2, that is, the file management icon 102 is displayed in the YOYO advice card 101 displayed on the main interface of the mobile phone. When user A enters the area of the corporate canteen, for example, when user A moves to the entrance of the corporate canteen, the phone recommends a payment code shortcut card to the user. At this time, the main interface of the mobile phone is shown in fig. 2 (b), that is, the payment code icon 103 is displayed in the YOYO advice card 101 displayed on the main interface of the mobile phone. The payment code icon 103 is a shortcut: when the user clicks the payment code icon 103 to purchase food, the phone quickly jumps to the third-party payment code interface, as shown in fig. 2 (c), and quick payment can be realized based on the payment code interface. After the payment is completed, the payment code shortcut card automatically disappears, and the main interface of the mobile phone is as shown in (d) of fig. 2, that is, the file management icon 102 is displayed in the YOYO advice card 101 displayed on the main interface of the mobile phone.
Optionally, a payment code shortcut card may also be displayed above the main interface of the phone. For example, a payment code shortcut card is displayed at the place where the time, date, and week are currently displayed as shown in fig. 2 (b).
In another example, please refer to fig. 3, fig. 3 is a schematic diagram of an application scenario according to another exemplary embodiment of the present application. During an epidemic, two-dimensional codes are needed for entering public places and riding public transportation. For example, the mobile phone used by the user A supports a quick two-dimensional code display service, and the user A needs to take a bus from the bus station BS1 to the bus station BS2 between 7:25 and 7:35 of each working day, and then walk from the bus station BS2 to the company. As shown in fig. 3, the mobile phone starts scene recognition at 7:20 of the working day, and when the mobile phone determines that the user A has moved to the position S3, one hundred meters away from the bus station BS1, the mobile phone displays a card with a two-dimensional code in the main interface, so that the user can quickly show the two-dimensional code before boarding the bus.
For ease of understanding, the process of displaying the two-dimensional code will be described with reference to the main interface of the mobile phone. Referring to fig. 4, fig. 4 is a schematic diagram illustrating a process of opening a two-dimensional code according to an exemplary embodiment of the application. For example, when the user A is on the road to the bus station BS1, the main interface of the mobile phone is as shown in (a) in fig. 4, that is, the file management icon 202 is displayed in the YOYO advice card 201 displayed on the main interface of the mobile phone. The mobile phone starts scene recognition at 7:20 of the working day, and when the mobile phone determines that the user A has moved to the S3 position, one hundred meters away from the bus station BS1, the two-dimensional code shortcut card is recommended to the user. At this time, as shown in fig. 4 (b), the two-dimensional code icon 203 is displayed in the YOYO advice card 201 displayed on the main interface of the mobile phone. The two-dimensional code icon 203 is a shortcut, and the user can quickly switch to the third-party two-dimensional code interface by clicking the two-dimensional code icon 203. If the user clicks the two-dimensional code icon 203 and then jumps to the interface shown in fig. 4 (c), the user clicks the "my electronic code" control in the interface and jumps to the two-dimensional code interface shown in fig. 4 (d), so that the two-dimensional code can be quickly shown to the driver based on the two-dimensional code interface. After the display is completed, the two-dimensional code interface can be exited, the two-dimensional code shortcut card automatically disappears, and the main interface of the mobile phone is shown in (e) in fig. 4, that is, the file management icon 202 is displayed in the YOYO advice card 201 displayed on the main interface of the mobile phone.
Alternatively, the two-dimensional code may be a riding code. For example, the position where the two-dimensional code icon 203 is originally displayed on the main interface of the mobile phone instead displays a riding code icon. The riding code icon is a shortcut, and the user can quickly jump to the third-party riding code interface by clicking the riding code icon, so that after getting on the bus, the user can pay directly by scanning the riding code on the riding code interface.
In yet another example, please refer to fig. 5, fig. 5 is a schematic diagram of an application scenario according to yet another exemplary embodiment of the present application. For example, the mobile phone used by user A supports a quick ticket purchase service, and user A often watches movies at movie theater B. Assuming that the user goes to theater B to watch a movie on a certain day, as shown in fig. 5, when the user holding the mobile phone moves to the entrance of theater B, the mobile phone automatically displays a movie ticket purchase icon 302 in the on-screen recommendation card 301. The movie ticket purchase icon 302 is a shortcut: the user clicks the movie ticket purchase icon 302, and the mobile phone jumps to a ticket purchase page in response to the user's click operation, so that the user can purchase movie tickets based on the ticket purchase page.
For ease of understanding, the system architecture to which embodiments of the present application relate is briefly described below. Referring to fig. 6, fig. 6 is a schematic diagram of a system architecture according to an exemplary embodiment of the application, where the system architecture includes an electronic device 400 and a cloud service platform 500. A communication connection is established between the electronic device 400 and the cloud service platform 500.
The electronic device 400 can perform scene recognition for some services so as to automatically implement some shortcut functions under the condition of determining that the electronic device enters some specific scenes, for example, automatically displaying a two-dimensional code card, automatically displaying a riding code card, automatically displaying a payment code card, and the like.
As an example of the present application, the electronic device 400 has the capability to access a mobile communication network and may support at least one network type. Illustratively, the electronic device 400 is capable of supporting third generation (3G) networks, fourth generation (4G) networks, fifth generation (5G) networks, and so on. The electronic device 400 provided by the embodiment of the application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a netbook, a portable terminal, and the like. The embodiment of the present application does not impose any limitation on the specific type of the electronic device 400.
The cloud service platform 500 is used for collecting data based on scene crowdsourcing, namely collecting scene crowdsourcing data, and learning scene features corresponding to different services. In this way, the electronic device 400 can pull/acquire part of the scene features from the cloud service platform 500 according to the requirement, so as to perform scene recognition for a certain service based on the pulled/acquired part of the scene features.
The cloud service platform 500 provided by the embodiment of the application may include a server, for example, a cloud server.
Optionally, the system structure may further include a merchant cloud platform on the basis of including the electronic device 400 and the cloud service platform 500. The merchant cloud platform can provide positioning service for the electronic device 400, and can also provide scene crowdsourcing data and the like for the cloud service platform 500 according to the positioning result. In addition, the cloud service platform 500 may also subscribe to/obtain POI data, AOI data, and the like from the merchant cloud platform.
The system architecture according to the embodiment of the present application is briefly described above, and the structure of the electronic device 400 according to the embodiment of the present application is briefly described below. Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. Electronic device 400 may include a processor 410, an external memory interface 420, an internal memory 421, a universal serial bus (universal serial bus, USB) interface 430, a charge management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an ear-piece interface 470D, a sensor module 480, keys 490, a motor 491, an indicator 492, a camera 493, a display screen 494, and a subscriber identity module (subscriber identification module, SIM) card interface 495, among others. The sensor modules 480 may include pressure sensors 480A, gyroscope sensors 480B, barometric pressure sensors 480C, magnetic sensors 480D, acceleration sensors 480E, distance sensors 480F, proximity sensors 480G, fingerprint sensors 480H, temperature sensors 480J, touch sensors 480K, ambient light sensors 480L, bone conduction sensors 480M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 400. In other embodiments of the application, electronic device 400 may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 410 may include one or more processing units. For example, the processor 410 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 400, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 410 for storing instructions and data. In some embodiments, the memory in the processor 410 is a cache memory. The memory may hold instructions or data that the processor 410 has just used or recycled. If the processor 410 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided, reducing the latency of the processor 410 and thus improving the efficiency of the system.
The wireless communication function of the electronic device 400 may be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The structures of the antennas 1 and 2 in fig. 7 are only one example. Each antenna in the electronic device 400 may be used to cover a single communication band or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in conjunction with a tuning switch.
The mobile communication module 450 may provide a solution for wireless communication, including 2G/3G/4G/5G, as applied to the electronic device 400. The mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), or the like. The mobile communication module 450 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 450 may amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate the electromagnetic waves. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the processor 410. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the same device as at least some of the modules of the processor 410.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio devices (not limited to speaker 470A, receiver 470B, etc.), or displays images or video through display screen 494. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 450 or other functional module, independent of the processor 410.
The wireless communication module 460 may provide solutions for wireless communication applied to the electronic device 400, including wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc. The wireless communication module 460 may be one or more devices integrating at least one communication processing module. The wireless communication module 460 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 410. The wireless communication module 460 may also receive a signal to be transmitted from the processor 410, frequency-modulate and amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 and the mobile communication module 450 of the electronic device 400 are coupled, and the antenna 2 and the wireless communication module 460 are coupled, such that the electronic device 400 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS). It may be understood that in embodiments of the present application, a hardware module in a positioning or navigation system may be referred to as a positioning sensor.
The electronic device 400 implements display functions via a GPU, a display screen 494, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 410 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 494 is used to display images, videos, and the like. The display screen 494 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like. In some embodiments, the electronic device 400 may include 1 or N display screens 494, N being a positive integer greater than 1.
The external memory interface 420 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 400. The external memory card communicates with the processor 410 through an external memory interface 420 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 421 may be used to store computer-executable program code that includes instructions. The processor 410 implements various functional applications and data processing of the electronic device 400 by executing instructions stored in the internal memory 421. The internal memory 421 may include a storage program area and a storage data area. The storage program area may store an operating system and an APP required for at least one function (such as a sound playing function, an image playing function, etc.). The storage data area may store data created during use of the electronic device 400 (e.g., audio data, a phonebook, etc.), and so on. In addition, the internal memory 421 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (universal flash storage, UFS), and the like.
The pressure sensor 480A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 480A may be disposed on display screen 494. The pressure sensor 480A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may comprise at least two parallel plates with a conductive material. When a force is applied to the pressure sensor 480A, the capacitance between the electrodes changes. The electronic device 400 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 494, the electronic device 400 detects the touch operation intensity from the pressure sensor 480A. The electronic device 400 may also calculate the location of the touch based on the detection signal of the pressure sensor 480A. In some embodiments, touch operations that act on the same touch location, but at different touch operation intensities, may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The acceleration sensor 480E may detect the magnitude of acceleration of the electronic device 400 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 400 is stationary. The acceleration sensor 480E may also be used to recognize the posture of the electronic device, and is applied in applications such as landscape/portrait switching and pedometers.
It will be appreciated that the electronic device may also include a speed sensor. The speed sensor is used for acquiring the moving speed of the electronic equipment.
The ambient light sensor 480L is used to sense ambient light level. The electronic device 400 may adaptively adjust the brightness of the display screen 494 based on the perceived ambient light level. The ambient light sensor 480L may also be used to automatically adjust white balance during photographing. Ambient light sensor 480L may also cooperate with proximity light sensor 480G to detect whether electronic device 400 is in a pocket to prevent false touches. In particular, in the method according to the embodiment of the present application, the electronic device 400 may perform scene recognition according to the ambient light brightness sensed by the ambient light sensor 480L, so as to determine whether the ambient scene (indoor scene or outdoor scene) where the electronic device 400 is located is changed.
The touch sensor 480K is also referred to as a "touch panel". The touch sensor 480K may be disposed on the display screen 494; the touch sensor 480K and the display screen 494 form a touch screen, also called a "touch-controlled screen". The touch sensor 480K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 494. In other embodiments, the touch sensor 480K may also be disposed on a surface of the electronic device 400 at a location different from that of the display screen 494.
The structure of the electronic device 400 according to the embodiment of the present application is briefly described above, and the software structure according to the embodiment of the present application is briefly described below. Referring to fig. 8, fig. 8 is a block diagram illustrating a software structure of an electronic device according to an exemplary embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system of the electronic device 400 is taken as an example; the Android system is divided into four layers, namely, an application layer, an application framework layer, an Android runtime (Android runtime) and system libraries, and a kernel layer, from top to bottom.
The application layer may include a series of application packages. As shown in fig. 8, the application package may include applications for cameras, calendars, instant messaging, payments, ticket purchases, maps, navigation, wireless local area networks (wireless local area networks, WLAN), music, short messages, and the like.
The instant messaging application may be used to implement an instant messaging service, a two-dimensional code presenting service, a riding code presenting service, and the like; for example, the instant messaging application may be, but is not limited to, WeChat and the like. The payment application may be used to implement an online payment service; for example, the payment application may be, but is not limited to, Alipay, UnionPay, and the like. The ticketing application may be used to implement a ticketing service and may include, for example, but not limited to, an application for purchasing movie tickets, an application for purchasing train or air tickets, and the like.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the applications of the application layer. The application framework layer includes a number of predefined functions.
As an example of the present application, the application framework layer may include a business program module (which may also be referred to as YOYO advice) for displaying cards or controlling the disappearance of cards on the screen of the electronic device 400.
Optionally, the application framework layer may further include a decision module and a perception module. The perception module is used for notifying the decision module of generated service data when sensing that another application or the system executes a certain service. In addition, the perception module can also be used for performing scene recognition for a certain service. The decision module is used for performing service event management based on the service data, for example, requesting the perception module to perform scene recognition based on the service data.
Optionally, the application framework layer may further include a general acquisition module, where the general acquisition module is used for acquiring environmental data.
Optionally, the application framework layer may also include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like. The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data, which may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc., and make such data accessible to the application.
The view system may include visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to construct a display interface for an application, which may be comprised of one or more views, such as a view that includes displaying a text notification icon, a view that includes displaying text, and a view that includes displaying a picture.
The telephony manager is used to provide communication functions of the electronic device 400, such as management of talk states (including on, off, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages, which may automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, give a message alert, and the like. The notification manager may also present a notification in the system top status bar in the form of a graph or scroll-bar text, such as a notification of an application running in the background. The notification manager may also present a notification on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a notification sound is emitted, the electronic device vibrates, or an indicator light flashes.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system. The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the Android core library. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules such as a surface manager (surface manager), a Media library (Media Libraries), a three-dimensional graphics processing library (e.g., openGL ES), a 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
In addition, the electronic device 400 provided by the embodiment of the present application may further include, but is not limited to, a wireless fidelity (wireless fidelity, WiFi) chip, and the WiFi chip may be used to implement a WiFi scanning function.
The software structure related to the embodiment of the application is briefly introduced, and the process of collecting scene crowdsourcing data related to the embodiment of the application is introduced. In an implementation process, the electronic device 400 collects environmental data of the surrounding environment when implementing the service, and reports the collected environmental data and service data related to the service to the cloud service platform 500. The cloud service platform 500 collects the environmental data collected by crowd sourcing and the business data related to the business, and performs cloud computing based on the collected data to determine scene characteristics corresponding to different businesses. In this way, the electronic device 400 can pull part of the scene features from the cloud service platform 500 according to the actual requirements, so as to perform scene recognition for a certain service.
As an example of the present application, please refer to fig. 9, fig. 9 is a flowchart illustrating an electronic device collecting scene crowd-sourced data according to an exemplary embodiment of the present application. The process of the electronic device collecting scene crowd-sourced data may include S601 to S604.
S601, a first application program in the electronic device executes a first service.
The first service is any one of a plurality of services for which the electronic device supports scene recognition, and the first application is an application capable of implementing the first service. For example, the first service may be a two-dimensional code service, a payment code service, a riding code service, etc., and the first application may be a WeChat application, an Alipay application, etc.
S602, the sensing module acquires service data of a first service.
The service data may include service type information, for example. For example, the service type information is pay, which indicates that the first service currently performed is a payment service.
Optionally, in a possible implementation manner, the service data may further include, but is not limited to, one or more of a service packet name, service additional description information, and scene recognition accuracy of the first service.
Wherein the service package name is used to identify which application is implementing the first service currently in progress. For example, the payment service may be implemented by a WeChat application, an Alipay application, or a UnionPay application.
The service attachment description information may be used to identify some additional service information. For example, the business attachment description information may be a store name associated with the first business. For example, in the case where the first business is a payment business, the business addition description information is used to identify a shop name paid by the first business. In one example, the service attachment description information may be a string in JSON format, such as { "payType": "qrcode", "payee": "milky tea store" }.
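Since the example above shows the service attachment description information as a JSON string, a minimal sketch of building and parsing such a string is given below. The helper name `build_attachment` and the sample payee value are illustrative assumptions, not part of this application.

```python
import json

# Hypothetical helper: serialize the service attachment description as a
# JSON string of the shape shown above, e.g. {"payType": "qrcode", "payee": ...}.
def build_attachment(pay_type: str, payee: str) -> str:
    return json.dumps({"payType": pay_type, "payee": payee}, ensure_ascii=False)

# Round-trip: build the string, then parse it back into a dict.
desc = build_attachment("qrcode", "milk tea store")
parsed = json.loads(desc)
```

`ensure_ascii=False` keeps non-ASCII store names (as in the original example) readable in the serialized string.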
As an example of the present application, the scene recognition accuracy of a service includes three levels: low, medium, and high. The scene recognition accuracy of different services is usually determined by the service itself. For example, different scene recognition accuracies may be set in advance for different services according to different demands of users. For example, the scene recognition accuracy of the regular payment service may be low accuracy, the scene recognition accuracy of the two-dimensional code service may be medium accuracy, and the scene recognition accuracy of the ticket-taking service may be high accuracy. This is merely illustrative and is not limiting.
In one example, the sensing module may include service acquisition plug-ins for multiple services, each of which may be used to sense a service and to acquire service data generated by the service. For example, the sensing module includes, but is not limited to, a service acquisition plug-in of two-dimension code service, a service acquisition plug-in of riding code service, a service acquisition plug-in of regular payment service, a service acquisition plug-in of ticket taking service, a service acquisition plug-in of ticket purchasing service, and the like. When an application program or a system in the electronic equipment realizes the service, the corresponding service acquisition plug-in can sense and acquire service data of the service, and in addition, the service acquisition plug-in informs the universal acquisition module to acquire environment data.
For example, when a WeChat application program in the electronic device presents the two-dimensional code, a service acquisition plug-in corresponding to the two-dimensional code service can sense the operation, and at the moment, the service acquisition plug-in corresponding to the two-dimensional code acquires relevant service data and notifies a general acquisition module to acquire environmental data.
S603, the sensing module collects current environmental data of the electronic equipment.
The sensing module collects environmental data through the universal collecting module. In one example, the environmental data may include base station indication information, longitude and latitude information, and a city code.
Wherein, the base station indication information is used for uniquely identifying one base station, and the base station indication information can include an operator identifier (operator), a cell number (lac), and a base station number (cellid).
The latitude and longitude information may include longitude (longitude) and latitude (latitude), which may be determined by way of global positioning system (global positioning system, GPS) positioning or network positioning. The network positioning may determine the longitude and latitude information based on base station positioning, or based on a base station and a wireless network (wireless fidelity, WiFi).
City codes are used to uniquely identify a city. For example, the city code 0755 is used to identify Shenzhen. For another example, the city code 029 is used to identify Xi'an. The city code may be obtained by invoking a geo interface based on location based services (location based service, LBS).
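Putting the fields above together, one environment-data record could be sketched as the following structure; the field names follow the identifiers mentioned in the text, while the sample values are made-up assumptions.

```python
from dataclasses import dataclass, asdict

# Sketch of one environment-data record as described above. The field names
# (operator, lac, cellid, longitude, latitude, city_code) are taken from the
# identifiers named in the text; the concrete values are illustrative only.
@dataclass
class EnvironmentData:
    operator: str     # operator identifier of the connected base station
    lac: int          # cell number
    cellid: int       # base station number
    longitude: float
    latitude: float
    city_code: str    # e.g. "0755" identifies Shenzhen

env = EnvironmentData(
    operator="46000", lac=22001, cellid=67543210,
    longitude=114.064829, latitude=22.572986, city_code="0755",
)
record = asdict(env)  # dict form, ready to be merged into an acquisition data set
```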
In one example, the environmental data may further include at least one of positioning accuracy, data acquisition time, base station strength of connection, positioning type, coordinate system type, device type, base station type.
In one example, where the scene identification of the business involves a plurality of different regions (e.g., involves different countries), the environment data may also include a region name that is used to distinguish between the different regions.
In one example, the environmental data may also include a number of searches for stars that are used to analyze whether the electronic device is currently indoors or outdoors.
In one example, the environment data may also include information of neighboring base stations for traffic where scene identification has a medium accuracy requirement. The neighboring base station is a base station neighboring the currently connected base station, and the information of the neighboring base station may include a base station number, a base station strength, and the like of the neighboring base station.
As an example of the present application, for a service requiring high accuracy in scene recognition, the environment data may further include WiFi data, where the WiFi data includes at least one WiFi identification information and a WiFi intensity corresponding to each WiFi identification information. The WiFi identification information may be used to uniquely identify a WiFi hotspot. In one example, the WiFi identification information may include WiFi physical address information and a WiFi name.
S604, reporting an acquisition data set to the cloud service platform by the perception module, wherein the acquisition data set comprises environment data and service data.
The sensing module may generate scene crowdsourcing data based on the service data and the environment data after acquiring the service data and the environment data, where the scene crowdsourcing data includes a plurality of scene acquisition data sets, and each scene acquisition data set may include longitude and latitude information, base station information, wiFi data, service type information, and the like. And the perception module sends the generated scene crowdsourcing data to the cloud service platform so as to facilitate cloud computing by the cloud service platform.
As an example of the present application, different field types may be set for individual elements in each scene acquisition data set according to requirements. Illustratively, the elements and field types of the elements included in each scene acquisition data set are shown in table 1.
TABLE 1
The foregoing describes an example in which the electronic device performs data collection according to a default manner. In another example, the cloud service platform may further issue different acquisition configuration information to the electronic device according to scene recognition accuracy required by different services, so as to instruct the electronic device how to perform data acquisition for different services. In one example, referring to table 2, the acquisition configuration information may include a service type, an acquisition class, a maximum number of acquisitions per day, and the like.
TABLE 2
The cloud service platform configures the acquisition level for the electronic device so that the electronic device can learn whether WiFi data needs to be collected. For example, WiFi data need not be collected in the case where the acquisition level is low (e.g., 0), and WiFi data needs to be collected in the case where the acquisition level is high (e.g., 2).
The maximum number of acquisitions per day indicates how many times the electronic device performs the maximum number of acquisitions per day for the configured service. Therefore, the power consumption of the electronic equipment during data acquisition can be controlled, and the acquisition power consumption of the electronic equipment is saved.
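A minimal sketch of how an electronic device might enforce the configured acquisition level and the per-day cap is shown below; the class and field names are assumptions, not taken from this application.

```python
import datetime

class AcquisitionPolicy:
    """Hypothetical client-side view of one row of acquisition configuration."""

    def __init__(self, level: int, max_per_day: int):
        self.level = level             # acquisition level issued by the cloud
        self.max_per_day = max_per_day # daily cap on data acquisitions
        self._count = 0
        self._day = None

    def should_collect_wifi(self) -> bool:
        # High level (e.g. 2) requires WiFi data; low level (e.g. 0) does not.
        return self.level >= 2

    def try_acquire(self, today: datetime.date) -> bool:
        # Reset the counter when the day changes, then enforce the daily cap.
        if today != self._day:
            self._day, self._count = today, 0
        if self._count >= self.max_per_day:
            return False
        self._count += 1
        return True

policy = AcquisitionPolicy(level=2, max_per_day=2)
today = datetime.date(2024, 1, 1)
```

Gating each collection attempt through `try_acquire` is one way to realize the power-consumption control described above.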
Further, referring to table 2, the acquisition configuration information may further include service description information. The service description information is used for describing the service, so that the user can intuitively view the service. For example, the service description information may be positioning lift, regular payment, two-dimensional code, etc. It should be noted that, when the electronic device runs a third-party application (an application requiring a positioning operation), base station indication information, longitude and latitude information, positioning accuracy, WiFi scanning results (such as a WiFi list), and the like are also generated; obtaining these data generated by the application is referred to as positioning lift.
Similar electronic devices may likewise implement the service, determine scene crowdsourcing data related to the service according to the above flow, and report the scene crowdsourcing data to the cloud service platform. Therefore, the cloud service platform can obtain a large amount of scene crowdsourcing data through crowdsourcing collection. On this basis, the cloud service platform can learn the scene characteristics corresponding to different services by utilizing the scene crowdsourcing data.
A description will be given below of how to learn scene features corresponding to different services using the acquired scene acquisition data sets. As an example of the present application, please refer to fig. 10, fig. 10 is a flowchart of a method for learning scene features according to an exemplary embodiment of the present application. The method is generally performed by a cloud service platform, and in one possible implementation, the method may also be performed by an electronic device, and the method may include S701 to S704:
and S701, constructing a grid chart based on the earth surface space data.
In one example, the geospatial data may include latitude and longitude information, and the raster graph may be constructed by Morton encoding based on the latitude and longitude information in the geospatial data. It can be colloquially understood that the earth's surface is divided into a grid map. The grid map may include multiple levels of grids, each grid corresponding to a Morton code, for example, a kilometer-level coarse-granularity grid and a hundred-meter fine-granularity grid. Illustratively, a location point in physical space is mapped into a grid of the grid map by Morton encoding the latitude and longitude information of the location point.
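A minimal sketch of such Morton (Z-order) encoding is given below. The quantization step, bit depth, and function names are assumptions, since the application does not specify them.

```python
def interleave_bits(x: int, y: int, bits: int = 20) -> int:
    """Interleave the low `bits` bits of x and y into a Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x occupies even bit positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y occupies odd bit positions
    return code

def morton_code(lat: float, lon: float, cell_deg: float = 0.001) -> int:
    """Quantize (lat, lon) to integer cells of roughly hundred-meter size
    (0.001 degrees of latitude is about 111 m), then interleave the indices."""
    x = int((lon + 180.0) / cell_deg)   # shift to non-negative before quantizing
    y = int((lat + 90.0) / cell_deg)
    return interleave_bits(x, y)
```

Nearby points then share the high-order bits of their codes, so a coarser grid level (such as the kilometer grid) can be obtained simply by truncating low-order bits of the same code.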
For ease of understanding, please refer to fig. 11, fig. 11 is a schematic diagram of a grid diagram according to an exemplary embodiment of the present application. Here, if the data in the base station and the data in the WiFi coverage area are mapped to the raster, as shown in (a) of fig. 11, the data in the base station coverage area is indicated by a dotted line, and the data in the WiFi coverage area is indicated by a solid line. After the partial area of fig. 11 (a) is enlarged, that is, as shown in fig. 11 (b), it can be seen that only the base station data is included in some grids, where it can be understood that the location points included in the grids by the electronic device are connected to only the base station covering the grid, and some grids include both the base station data and the WiFi data, where it can be understood that the location points included in the grids by the electronic device are connected to not only the base station covering the grid but also the WiFi covering the grid.
S702, mapping scene crowdsourcing data into a raster graph.
The scene crowdsourcing data may include a plurality of scene collection data sets, each scene collection data set may be obtained by an embodiment corresponding to fig. 9, and as can be known from the foregoing description, each scene collection data set may include service data and environment data collected by an electronic device when implementing a corresponding service, each scene collection data set corresponds to a service type, and each scene collection data set includes longitude and latitude information.
As an example of the present application, the cloud service platform may perform morton encoding on the longitude and latitude information in each scene acquisition data set to obtain a morton code corresponding to the longitude and latitude information in each scene acquisition data set, and then map each scene acquisition data set into the raster graph based on the morton code.
As an example of the present application, the cloud service platform may further include POI data and AOI data, where the POI data and AOI data also include latitude and longitude information. The cloud service platform can map the POI data into the grid graph according to the longitude and latitude information in the POI data, and can map the AOI data into the grid graph according to the longitude and latitude information in the AOI data.
For example, some POI data carries point data: "_Point": "POINT (114.064829 22.572986)". Some AOI data carries polygon data: "_Polygon": "MULTIPOLYGON (((114.064063 22.573102, 114.063954 22.572744, 114.063946 22.572678, 114.063946 22.572652, 114.063954 22.572625, 114.063964 22.572609, 114.064751 22.572433, 114.064795 22.572432, 114.064893 ...... 114.064063 22.573102)))".
For ease of understanding, please refer to fig. 12, fig. 12 is a schematic diagram of another grid diagram according to an exemplary embodiment of the present application. As shown in fig. 12, the grid set formed by the grids that each side of the irregular area passes through, together with the grids covered inside the irregular area, is the representation of the AOI data mapped in the grid map; it can be understood that one piece of AOI data corresponds to one grid set in the grid map. The independent circles in fig. 12 are the representations of POI data mapped in the grid map; it can be understood that one piece of POI data corresponds to one grid in the grid map.
As an example of the present application, in the case that the data collection time is included in the scene acquisition data sets, the cloud service platform may select the scene acquisition data sets from the most recent period of time, that is, filter out the scene acquisition data sets whose collection time is far from the current time. Then, the selected scene acquisition data sets are mapped into the grid map in the above manner, so that the scene features learned subsequently can take effect in real time.
As an example of the present application, in the case that the scene acquisition data set includes a coordinate system type, if the scene acquisition data set acquired by crowd sourcing refers to different coordinate system types (such as GCJ02 Mars coordinate system, BD09 hundred degrees coordinate system, and WGS84 earth coordinate system), the cloud service platform may unify the scene acquisition data sets under different types of coordinate systems under the same type of coordinate system, for example, under the WGS84 earth coordinate system, and map the scene acquisition data set after unifying the coordinate system types into the grid map, so that the mapping result may be more accurate.
As an example of the present application, where scene recognition levels are included in the scene acquisition data set, the crowd-sourced acquired scene acquisition data set may also be screened based on the scene recognition levels. Specifically, since the scene recognition accuracy of a certain service may change, for example, the scene recognition accuracy of a certain service is improved from low accuracy to high accuracy, in which case if scene feature learning is still performed based on a low-level scene acquisition data set, the subsequent scene recognition is likely to be inaccurate. Therefore, the cloud service platform can screen out the scene acquisition data set which is the same as the current scene recognition level of the service from among scene acquisition data sets acquired by crowdsourcing according to the scene recognition level, and then map the screened scene acquisition data set into the grid graph according to the mode.
As an example of the present application, in the case where the positioning type and the accuracy of the longitude and latitude information are also included in the scene acquisition data set, some scene acquisition data sets may be screened according to these fields. For example, scene acquisition data sets whose accuracy is below an accuracy threshold are filtered out, removing low-confidence data and ensuring the validity and accuracy of subsequent scene feature learning. The accuracy threshold may be set according to actual requirements, which is not limited here.
As an example of the present application, in the case where the scene acquisition data set further includes a region name, the crowdsourced scene acquisition data sets may be grouped by region name, with each group corresponding to one region name. Scene feature learning is then performed per group, that is, region by region, which improves the efficiency of subsequent scene feature learning.
S703, determining a scene fence snapshot corresponding to each service.
Illustratively, the scene fence snapshot of any one service may include the scene characteristics of that service in its corresponding scene, for example, the service type information, the longitude and latitude information of the scene fence center point, the scene fence radius, and the like.
As an example of the present application, the specific implementation of S703 may include S7031 to S7035:
s7031, determining points belonging to the same attribute in the raster pattern.
Points belonging to the same attribute are points whose corresponding scene acquisition data sets share the same attribute, where "same attribute" means the same service type information and the same city code. In other words, points with the same attribute correspond to scene acquisition data sets that carry identical service type information and identical city codes.
Illustratively, with the service type information as an index, points corresponding to scene acquisition data sets which have the same service type information and include the same city codes are determined in the raster image.
As an example of the present application, since the same service may span different cities, the cloud service platform may first use the city code as a dimension to partition the data corresponding to the same service type information, so that data belonging to the same service in the same city falls into the same bucket. As described above, each scene acquisition data set includes one piece of service type information (i.e., a tag) and one city code, so the cloud service platform can, using the service type information as an index, query the grid map for points whose scene acquisition data sets have the same service type information and the same city code, and divide those points into at least one bucket, where each bucket corresponds to one piece of service type information and one city code. Feature learning can then be performed on the data in each bucket to determine a scene fence snapshot for each service in each city.
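The bucketing step described above can be sketched as follows. This is a minimal illustration; the record field names (`tag`, `citycode`, `lat`, `lon`) are assumptions, not the platform's actual schema.

```python
from collections import defaultdict

# Hypothetical scene acquisition records: each carries a service type tag,
# a city code, and latitude/longitude (field names are illustrative).
records = [
    {"tag": "subway", "citycode": "010", "lat": 39.90, "lon": 116.40},
    {"tag": "subway", "citycode": "021", "lat": 31.23, "lon": 121.47},
    {"tag": "airport", "citycode": "010", "lat": 40.08, "lon": 116.58},
    {"tag": "subway", "citycode": "010", "lat": 39.91, "lon": 116.41},
]

# One bucket per (service type information, city code) pair.
buckets = defaultdict(list)
for rec in records:
    buckets[(rec["tag"], rec["citycode"])].append(rec)

for key in sorted(buckets):
    print(key, len(buckets[key]))
```

Feature learning then proceeds bucket by bucket, one scene fence snapshot per service per city.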
For ease of understanding and description, the following takes feature learning on the data in the bucket corresponding to one piece of service type information as an example.
And S7032, clustering the points with the same attribute to obtain a first clustering result, wherein the first clustering result comprises at least one cluster.
Each point with the same attribute has longitude and latitude information, and the points with the same attribute are clustered through a clustering algorithm to obtain at least one cluster. It may also be understood that the longitude and latitude information corresponding to the service type information is clustered by a clustering algorithm to obtain at least one cluster.
Referring to fig. 13, fig. 13 is a schematic diagram illustrating a data distribution according to an exemplary embodiment of the present application. Specifically, fig. 13 shows how the data in the bucket corresponding to the one piece of service type information is distributed in the grid map. The cloud service platform may cluster the longitude and latitude information of the points in the bucket through a clustering algorithm to obtain at least one cluster; for example, as shown in fig. 13, three clusters are obtained. The clustering algorithm may be a density-based spatial clustering algorithm such as DBSCAN (density-based spatial clustering of applications with noise).
It should be noted that points falling outside every cluster may be regarded as noise and excluded from subsequent calculation. That is, dirty data lying outside the clusters can be filtered out by the DBSCAN clustering algorithm.
It should also be noted that when the DBSCAN clustering algorithm is used, the neighborhood radius may be set to a first preset distance, which can be chosen according to actual requirements. For example, the first preset distance may be set to 50 meters, meaning that if the distance between two points exceeds 50 meters, DBSCAN does not treat them as neighbors, so they cannot be grouped into the same cluster on that basis.
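As a concrete, hedged sketch of this clustering step, the following is a minimal pure-Python DBSCAN over latitude/longitude points with the neighborhood radius expressed in metres. A production system would more likely use an off-the-shelf implementation, and the equirectangular distance approximation is only adequate at fence scale.

```python
import math

def dist_m(p, q):
    # Equirectangular approximation: accurate enough over tens of metres.
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

def dbscan(points, eps=50.0, min_pts=3):
    """Minimal DBSCAN; returns one label per point, -1 meaning noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if dist_m(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1  # free-floating point: treated as dirty data
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neigh if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:          # former noise becomes a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(len(points)) if dist_m(points[j], points[k]) <= eps]
            if len(jn) >= min_pts:       # core point: expand the cluster
                seeds.extend(jn)
    return labels

# Four points within ~15 m of one another, plus one several kilometres away.
pts = [(39.9000, 116.4000), (39.9001, 116.4000), (39.9000, 116.4001),
       (39.9001, 116.4001), (39.9500, 116.4500)]
labels = dbscan(pts)
print(labels)  # the distant point is labelled -1 (noise)
```

With `eps=50.0`, the four nearby points form one cluster and the distant point is filtered out as noise, matching the behavior described above.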
The scene fence corresponding to a piece of service type information, that is, the scene fence of the service, is determined by the scene fence center point and the scene fence radius. It can be understood that each cluster corresponds to one scene fence; how to determine the scene fence center point and the scene fence radius is described below.
S7033, determining longitude and latitude information of a center point of the scene fence according to the first clustering result.
For example, the scene fence center point of each cluster may be determined first. For any one cluster, the cloud service platform may average all the longitude and latitude information included in that cluster, and use this average as the longitude and latitude information of the cluster's center point (i.e., the scene fence center point). In this manner, the longitude and latitude information of the center point of each of the clusters can be determined, yielding the scene fence center point for each piece of service type information.
And S7034, determining the scene fence radius according to the first clustering result.
As an example of the present application, when the number of clusters is one, the radius of that cluster is determined as the scene fence radius of the scene fence of the one piece of service type information.
As an example of the present application, when there are multiple clusters, the radius of each cluster is determined as the scene fence radius of the scene fence corresponding to that cluster's service type information.
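A minimal sketch of S7033 and S7034 together, under the assumption that "the radius of the cluster" means the distance from the cluster centre to its farthest member:

```python
import math

R_EARTH = 6371000.0  # metres

def haversine_m(p, q):
    # Great-circle distance between two (lat, lon) points in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def fence_center_and_radius(cluster):
    """Centre = mean of the members' latitude/longitude (as in S7033);
    radius = distance from the centre to the farthest member (assumed)."""
    lat = sum(p[0] for p in cluster) / len(cluster)
    lon = sum(p[1] for p in cluster) / len(cluster)
    center = (lat, lon)
    radius = max(haversine_m(center, p) for p in cluster)
    return center, radius

# Two points 0.0002 degrees of latitude apart: radius of roughly 11 m.
center, radius = fence_center_and_radius([(39.9000, 116.4000), (39.9002, 116.4000)])
print(center, round(radius, 1))
```

Averaging raw latitude/longitude is adequate for fences spanning at most a few hundred metres; a fence crossing the antimeridian would need special handling.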
S7035, generating a scene fence snapshot based on longitude and latitude information of a scene fence center point, a scene fence radius, service type information and city codes corresponding to the service type information.
Illustratively, a city code corresponding to the service type information is obtained, and a scene fence snapshot of the scene fence of the service type information is generated based on latitude and longitude information of a scene fence center point corresponding to the service type information, a scene fence radius, the service type information and the corresponding city code.
In an implementation, a scene fence may be determined in the grid map based on the longitude and latitude information of the scene fence center point and the scene fence radius, and a scene fence snapshot is then generated based on the data within the scene fence. As one example of the present application, the scene fence snapshot may include a scene fence identifier, a city code, the longitude and latitude information of the scene fence center point, the scene fence radius, the service type information, and the Morton codes within the scene fence.
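For reference, a Morton (Z-order) code interleaves the bits of a grid cell's x and y indices so that nearby cells tend to receive nearby codes; a minimal encoder might look like this (the bit width is an assumption):

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of grid indices x and y into one Z-order code:
    bit i of x lands at position 2*i, bit i of y at position 2*i + 1."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

print([morton_encode(x, y) for x, y in [(0, 0), (1, 0), (0, 1), (3, 3)]])
```

Storing the Morton codes of the grid cells covered by a fence gives a compact, locality-preserving description of its footprint.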
Optionally, in the case that the scene collecting data set further includes service packet names and service additional description information, the data that occur in the same city and have the same service packet names and service additional description information may be further divided into the same barrel according to the city codes, the service packet names and the service additional description information. Thereafter, clustering is performed based on the data within each bucket.
In one example, if a POI and/or AOI is also included within the scene enclosure in the raster image, the POI and/or AOI may also be included in the scene enclosure snapshot.
Optionally, in addition to the scene fence identifier, city code, longitude and latitude information of the scene fence center point, scene fence radius, service type information, Morton codes, and POI and/or AOI within the scene fence, the scene fence snapshot may further include WiFi features.
Illustratively, if the bucket also includes WiFi data, the WiFi features within the scene fence may be determined based on that WiFi data. As described above, the WiFi data may include multiple pieces of WiFi identification information and a WiFi intensity corresponding to each piece. In one example, the cloud service platform may determine the frequency of occurrence of each piece of WiFi identification information in the bucket. WiFi identification information whose frequency is below a frequency threshold is likely to come from a WiFi hotspot outside the scene, so the cloud service platform may treat it as dirty data and delete that WiFi identification information together with its WiFi intensity. The frequency threshold may be set according to requirements, which is not limited here. The cloud service platform then learns the WiFi features within the scene fence based on the WiFi identification information remaining in the bucket and the WiFi intensity corresponding to each remaining piece.
In one example, the cloud service platform learns the WiFi features within the scene fence from the remaining WiFi identification information list in the bucket as follows. For each piece of WiFi identification information in the remaining list, the cloud service platform determines the average of its corresponding WiFi intensities, computes the intensity matching degree between each individual WiFi intensity and that average to obtain multiple intensity matching degrees, sorts these matching degrees in ascending order, and takes the nth matching degree from the sorted result as the target matching degree threshold of the remaining WiFi identification information list. The remaining WiFi identification information list, the average intensity and frequency corresponding to each piece of WiFi identification information in that list, and the target matching degree threshold together serve as the WiFi features within the scene fence.
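The frequency filtering and threshold selection just described can be sketched as follows. The use of an absolute intensity difference as the matching degree and the exact frequency denominator are simplifying assumptions (the embodiment itself uses a Hellinger-based measure):

```python
from collections import defaultdict

def wifi_features(observations, freq_threshold=0.05, n=3):
    """observations: (wifi_id, rssi) pairs pooled from one bucket.
    1) drop WiFi IDs whose frequency of occurrence is below the threshold;
    2) average the remaining intensities per WiFi ID;
    3) score each sample against its average (absolute difference here);
    4) sort scores ascending and take the n-th as the target threshold."""
    total = len(observations)
    samples = defaultdict(list)
    for wifi_id, rssi in observations:
        samples[wifi_id].append(rssi)
    kept = {w for w, r in samples.items() if len(r) / total >= freq_threshold}
    avg = {w: sum(samples[w]) / len(samples[w]) for w in kept}
    scores = sorted(abs(r - avg[w]) for w in kept for r in samples[w])
    threshold = scores[min(n - 1, len(scores) - 1)]
    return kept, avg, threshold

# "b" appears only once (frequency 0.2 < 0.3) and is filtered as dirty data.
kept, avg, thr = wifi_features(
    [("a", -40), ("a", -42), ("a", -44), ("a", -42), ("b", -80)],
    freq_threshold=0.3, n=3)
print(kept, avg, thr)
```

The kept list, the per-ID averages and frequencies, and `thr` together play the role of the WiFi features described above.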
In one example, when determining the intensity matching degree between each WiFi intensity and the corresponding average intensity, the cloud service platform may compute the Hellinger distance between each WiFi intensity and the average intensity corresponding to that piece of WiFi identification information, and use the result as the intensity matching degree.
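For reference, the Hellinger distance between two discrete probability distributions p and q is (1/√2)·√Σᵢ(√pᵢ − √qᵢ)², ranging from 0 (identical) to 1 (disjoint support). How the embodiment maps scalar intensities onto distributions is not specified, so only the generic distance is sketched here:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (same support,
    entries summing to 1). 0 = identical, 1 = disjoint support."""
    return math.sqrt(
        sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    ) / math.sqrt(2)

print(hellinger([0.5, 0.5], [0.5, 0.5]))  # 0.0
```

A smaller distance indicates a better match between an observed intensity profile and the learned average.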
Optionally, the cloud service platform may also filter some WiFi data in the bucket according to WiFi similarity before learning the WiFi features in the scene enclosure. For example, the cloud service platform may determine the similarity of longitude and latitude information of every two pieces of WiFi identification information, and then filter out WiFi identification information with similarity lower than a similarity threshold. And then, scene feature learning is performed based on the filtered WiFi data, so that the effectiveness and accuracy of scene feature learning can be improved.
As an example of the present application, in the case where the acquisition data set includes the electronic device model, note that WiFi scanning stability differs across device models. If a scene fence contains WiFi data scanned by different device models, then, to prevent data from less stable scanners from dragging down the target matching degree threshold of data from more stable scanners, the cloud service platform may partition the WiFi data by device model and, for each resulting bucket, determine the target matching degree threshold of that bucket's WiFi identification information list in the manner described above. It can be understood that in this case the WiFi features within the scene fence include multiple WiFi identification information lists, each with its own target matching degree threshold.
Illustratively, the scene fence snapshot may be as shown in table 3, where each row represents one scene fence snapshot.
Table 3
And S704, determining a base station fence snapshot of each base station.
Illustratively, the base station snapshot of any one base station may include the city code (citycode), operator identity (operator), cell number (lac), base station number (cellID), longitude (longitude), latitude (latitude), radius (radius), service list (taglist), and the like for that one base station.
As an example of the present application, the specific implementation of S704 may include S7041 to S7045:
S7041, determining points belonging to the same base station in the raster pattern.
Points belonging to the same base station are points whose corresponding scene acquisition data sets carry the same base station indication information, that is, the same operator identifier, the same cell number, and the same base station number.
Illustratively, the base station indication information is used as an index, and points corresponding to scene acquisition data sets belonging to the same base station are determined in the grid chart.
As an example of the present application, as can be seen from the foregoing description, each scene acquisition data set includes base station indication information, so that the cloud service platform can query points corresponding to the scene acquisition data sets belonging to the same base station in the raster image with the base station indication information as an index, and perform feature learning based on the points to determine a base station snapshot corresponding to each base station.
For ease of understanding and description, the following takes feature learning on the points corresponding to the scene acquisition data sets of one base station as an example.
And S7042, clustering the points of the same base station to obtain a second clustering result, wherein the second clustering result comprises at least one cluster.
Each point corresponding to a scene acquisition data set of the base station carries longitude and latitude information, and these points are clustered through a clustering algorithm to obtain at least one cluster. Equivalently, the longitude and latitude information of the points corresponding to the base station's scene acquisition data sets is clustered to obtain at least one cluster. The clustering algorithm may be a density-based spatial clustering algorithm such as DBSCAN.
It should be noted that points falling outside every cluster may be regarded as noise and excluded from subsequent calculation. That is, dirty data lying outside the clusters can be filtered out by the DBSCAN clustering algorithm.
It should also be noted that when the DBSCAN clustering algorithm is used, the neighborhood radius may be set to a second preset distance, which can be chosen according to actual requirements. For example, the second preset distance may be set to 50 meters, meaning that if the distance between two points exceeds 50 meters, DBSCAN does not treat them as neighbors, so they cannot be grouped into the same cluster on that basis.
The base station fence corresponding to a base station is determined by the base station fence center point and the base station fence radius. The following describes how to determine the longitude and latitude information of the base station fence center point and the base station fence radius.
S7043, determining longitude and latitude information of the base station fence center point according to the second clustering result.
For example, the longitude and latitude information of the center point of each cluster may be determined first. For any one cluster, the cloud service platform may average all the longitude and latitude information included in that cluster to obtain the longitude and latitude information of the cluster's center point. In this way, the longitude and latitude information of the center point of each of the clusters can be determined.
The cloud service platform may then average the longitude and latitude information of the cluster center points and use that average as the longitude and latitude information of the base station fence center point of the base station.
S7044, determining the base station fence radius according to the second clustering result.
As an example of the present application, when the number of clusters is one, the radius of that cluster is determined as the base station fence radius of the base station fence of the one base station.
As an example of the present application, in case that the number of at least one cluster is plural, a distance between a center point of a base station fence and a center point of each of the plural clusters is determined, plural distances are obtained, and a base station fence radius of the base station fence is determined according to the plural distances.
The cloud service platform may calculate the distance between the base station fence center point of the base station and the center point of each cluster based on the latitude and longitude information of the base station fence center point of the base station and the latitude and longitude information of the center point of each cluster, so as to obtain a plurality of distances. The maximum distance of the plurality of distances may be taken as the base station fence radius for this base station. In one possible implementation, an average value of a plurality of distances may be calculated, and the calculated average value is taken as a base station fence radius of the base station.
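A minimal sketch of the multi-cluster case, assuming haversine distances between latitude/longitude points; `use_max=False` gives the average-distance alternative mentioned above:

```python
import math

R_EARTH = 6371000.0  # metres

def haversine_m(p, q):
    # Great-circle distance between two (lat, lon) points in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def bs_fence_radius(fence_center, cluster_centers, use_max=True):
    """Distance from the base station fence centre to each cluster centre;
    the maximum (or the average) of these distances is the fence radius."""
    dists = [haversine_m(fence_center, c) for c in cluster_centers]
    return max(dists) if use_max else sum(dists) / len(dists)

center = (39.9000, 116.4000)
clusters = [(39.9000, 116.4000), (39.9009, 116.4000)]  # ~100 m apart in latitude
print(round(bs_fence_radius(center, clusters), 1))
```

Taking the maximum guarantees every cluster centre falls inside the fence; the average yields a tighter but possibly non-covering radius.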
S7045, generating a base station fence snapshot based on longitude and latitude information of a base station fence center point, base station fence radius, base station indication information and city codes corresponding to the base station.
As can be seen from the above, the base station indication information is used to uniquely identify one base station, and the base station indication information may include an operator identifier, a cell number, and a base station number.
For example, a base station fence may be determined in a grid graph from base station fence radius and latitude and longitude information of a base station fence center point, after which a base station fence snapshot is generated based on data within the base station fence. As one example of the present application, a city code (citycode), operator identity (operator), cell number (lac), base station number (cellID), longitude (longitude), latitude (latitude), radius (radius), service list (taglist), etc. of a base station may be included in the base station fence snapshot.
Illustratively, the base station snapshot generated by the cloud service platform is shown in table 4, where each row represents one base station fence snapshot:
Table 4
In Table 4, taglist indicates the service list, that is, the service type information contained within the base station fence in the grid map together with the scene fence identifiers corresponding to that service type information. In other words, it is first determined which service type information falls within the base station fence in the grid map, the scene fence identifiers corresponding to that service type information are then queried from the scene fence snapshots to build the taglist, and the taglist is added to the base station snapshot.
Optionally, in the case where the scene acquisition data set further includes the base station intensity, base station intensity distribution information within the base station fence may be determined and carried in the base station snapshot. During subsequent scene recognition, the position of the electronic device can then be determined from this distribution information together with the intensity of the base station to which the electronic device is currently connected, improving positioning precision and thereby the accuracy of scene recognition.
As an example of the present application, after scene feature learning, the cloud service platform may display the data distribution in the grid map in a visual form, and may identify the type of each base station, such as 4G or 5G, so that technicians can intuitively check the distribution of different network types.
As an example of the present application, the cloud service platform may perform scene feature learning and updating periodically, and the period duration may be set according to actual requirements, for example, the period duration may be one day, one week, or one month, which is not limited in the embodiment of the present application.
The above describes how to learn the scene characteristics corresponding to different services by using the acquired scene acquisition data set, and the following describes the process of caching the scene characteristics according to the embodiment of the present application.
Based on the scene features stored in the cloud service platform, the electronic device can download scene features from the platform. Because the data volume of the full scene features is large, the electronic device may acquire only part of the scene features according to service requirements, which improves download timeliness, reduces traffic, lowers the running power consumption of the electronic device, and reduces the occupied storage space.
As an example of the present application, for each service supporting scene recognition, the cloud service platform may configure feature update configuration information for that service and issue it to the electronic device. The electronic device then determines, according to each service's feature update configuration information, how to update the scene features for that service; that is, the electronic device obtains part of the scene features from the cloud service platform according to service requirements, thereby reducing the download data volume.
In one example, the electronic device sends a feature acquisition request to the cloud service platform, where the feature acquisition request may include a city code. Correspondingly, the cloud service platform acquires a base station fence snapshot set comprising the city code from the full scene features. The base station fence snapshot set comprises scene fence identifications corresponding to the business type information. And then, according to scene fence identifications corresponding to the service type information in each obtained base station fence snapshot set, obtaining scene fence snapshots corresponding to the scene fence identifications, and obtaining a scene fence snapshot set. The cloud service platform sends the acquired data (namely the base station fence snapshot set and the scene fence snapshot set) to the electronic equipment. For the electronic device, after receiving the data sent by the cloud service platform, the data is stored in a database, for example, a local database.
In one example, the feature acquisition request may include city codes and traffic type information for the target traffic. Correspondingly, the cloud service platform acquires a base station fence snapshot set comprising the city code and the service type information from the full scene features. The base station fence snapshot set comprises scene fence identifications corresponding to the business type information. And then, according to scene fence identifications corresponding to the service type information in each obtained base station fence snapshot set, obtaining scene fence snapshots corresponding to the scene fence identifications, and obtaining a scene fence snapshot set. The cloud service platform sends the acquired data (namely the base station fence snapshot set and the scene fence snapshot set) to the electronic equipment. And the electronic equipment stores the data sent by the cloud service platform in a database after receiving the data.
In one example, the feature acquisition request may further carry, in addition to the city code and the service type information of the target service, the base station indication information of the base station to which the electronic device is connected. After acquiring the base station fence snapshot sets that include the city code and the service type information from the full scene features, the cloud service platform screens out from them the base station fence snapshot sets that include the base station indication information; the screened sets include the scene fence identifiers corresponding to each piece of service type information. The scene fence snapshots corresponding to those scene fence identifiers are then obtained, yielding the scene fence snapshot set. The cloud service platform sends the screened base station fence snapshot sets and the scene fence snapshot set to the electronic device, which stores them in a database upon receipt.
For ease of understanding, please refer to fig. 14, which is a schematic diagram illustrating the caching of scene features according to an exemplary embodiment of the present application. If the electronic device reaches a new city, that is, the scene features of that city are not yet stored on the device, the scene features of the area near the current position can be obtained from the cloud service platform in real time according to the current position of the electronic device.
As shown in fig. 14, in the area (a), a plurality of (e.g., 2048×2048) grids are expanded to the periphery around the current position of the electronic device in the raster pattern as the range to be downloaded. In one implementation, the data covered in these grids (i.e., the base station fence snapshot set and the scene fence snapshot set) are both downloaded to the electronic device. In another implementation manner, as shown in the area (b) of fig. 14, dark gray grids are screened out from the range to be downloaded according to service requirements (such as base station indication information of the base station to which the electronic device is connected), and data (i.e., a base station fence snapshot set and a scene fence snapshot set) covered in the dark gray grids are used as data in the actually downloaded range and are downloaded to the electronic device.
In an example, if, after the electronic device reaches a new position, that position is more than a preset distance from the original position, then to ensure the recall rate of services on the electronic device side, the electronic device may send both the current longitude and latitude information and the longitude and latitude information of the last location to the cloud service platform. The preset distance may be set according to actual requirements, for example, 1000 meters. Correspondingly, the cloud service platform removes the intersection of the scene features based on the previous and current longitude and latitude information and sends only the scene features within the new range to the electronic device. The electronic device side retains the intersection data and writes the newly issued scene features into the database. This saves download traffic, that is, the power consumption of online real-time downloading, and also reduces erase/write operations on the database at the electronic device side.
As shown in fig. 14 (c), the electronic device moves from the home position to the current position (here, the current position refers to a new position), and a plurality of (e.g., 2048×2048) grids are expanded around the current position of the electronic device in the grid map as a new range to be downloaded. In the area (d) shown in fig. 14, gray grids are screened out in the new range to be downloaded according to service requirements (such as base station indication information of the base station to which the electronic device is connected), and data (i.e., a base station fence snapshot set and a scene fence snapshot set) covered in the gray grids are used as data in the range to be downloaded. As shown in fig. 14 (e), there is an intersection part (hit buffer part) between the download range corresponding to the home position and the download range corresponding to the current position, and after the intersection data is removed, the data covered in the remaining gray grids (i.e., the base station fence snapshot set and the scene fence snapshot set) are used as the data in the actual download range, and are downloaded to the electronic device.
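The incremental-download idea above reduces to a set difference over grid cells; a tiny sketch follows, with the window shrunk from 2048×2048 cells to 5×5 for illustration:

```python
def grid_window(cx, cy, half=2):
    """Grid cells covered by a square download window centred on (cx, cy)."""
    return {(x, y)
            for x in range(cx - half, cx + half + 1)
            for y in range(cy - half, cy + half + 1)}

old_range = grid_window(100, 100)    # window around the original position
new_range = grid_window(103, 100)    # window around the current position
to_download = new_range - old_range  # only cells not already cached
kept_cache = new_range & old_range   # intersection retained on the device
print(len(old_range), len(to_download), len(kept_cache))
```

Only the snapshots covering `to_download` are fetched, which is exactly the traffic and flash-wear saving described above.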
The process of caching scene features according to the embodiment of the present application is described above, and the method for identifying a service scene according to the embodiment of the present application is described below. Referring to fig. 15, fig. 15 is a flowchart illustrating a service scenario recognition method according to an exemplary embodiment of the present application, the method may include:
S801, when a scene identification request of a target service is monitored, base station information of a base station which is accessed by the electronic equipment currently is acquired.
And S802, performing scene recognition based on the scene characteristic data and the base station information to determine whether the electronic equipment is located in the target scene.
The base station information includes a target operator identity, a target cell number, and a target base station number, and the scenario identification request is for requesting identification of whether the electronic device is located within a target scenario associated with a target service. The scene feature data includes at least one base station fence snapshot including an operator identity, a cell number, and a base station number having an association.
In an exemplary embodiment, when a scene identification request of a target service is monitored, base station information of the base station currently accessed by the electronic device is acquired, and whether the electronic device is located in the target scene is determined according to the base station information and one or more base station fence snapshots in the scene feature data. For example, when the target operator identifier, the target cell number, and the target base station number in the base station information are the same as the operator identifier, the cell number, and the base station number in a certain base station fence snapshot, it is determined that the electronic device is located in the target scene.
In the service scene recognition method, when a scene recognition request of a target service is monitored, base station information of the base station currently accessed by the electronic device is acquired, and scene recognition is performed based on the scene feature data and the base station information to determine whether the electronic device is located in the target scene. The scene feature data comprises at least one base station fence snapshot; the base station fence snapshot comprises an operator identifier, a cell number, and a base station number having an association relation, and the base station information comprises a target operator identifier, a target cell number, and a target base station number. Scene identification based on the scene feature data and the base station information therefore amounts to comparing the operator identifier, cell number, and base station number in the base station fence snapshot with the target operator identifier, target cell number, and target base station number in the base station information; that is, whether the electronic device is located in the target scene is determined through the comparison between the base station information and the base station fence snapshot. In this process, base station information is used and no position needs to be acquired from the cloud service platform; compared with the GPS positioning adopted in the related art, the power consumption of base station positioning is smaller. Since no comparison with data in the cloud service platform is needed, the power consumption of the electronic device is reduced and the real-time performance is improved.
As an example of the present application, scene recognition accuracy of a service may include three kinds of low, medium, and high. The scene recognition accuracy of different services is usually determined by the service itself. For example, different scene recognition accuracy may be set in advance for different services according to different demands of users. For example, the scene recognition accuracy of the regular payment service may be low accuracy, the scene recognition accuracy of the two-dimensional code service may be medium accuracy, and the scene recognition accuracy of the ticket-taking service may be high accuracy. This is merely illustrative and is not limiting.
In one example, the scene recognition accuracy of the service is low, when a scene recognition request of the target service is monitored, base station information of a base station to which the electronic device is currently connected is obtained, and whether the electronic device is located in a target scene associated with the target service is determined according to the base station information of the base station to which the electronic device is currently connected and the base station fence snapshot set. The base station information currently accessed by the electronic device may include a target operator identifier, a target cell number, and a target base station number, where the target service is a service with low scene recognition accuracy. And as can be seen from the above description, a set of base station fence snapshots is stored in the electronic device, the set of base station fence snapshots including a plurality of base station fence snapshots.
The electronic device can always perform Cell-ID positioning; when the electronic device is an Android mobile phone, the switching of the base station can be perceived/monitored through the TelephonyManager of the Android system. Illustratively, the base station information of the base station currently accessed by the electronic device may be obtained through Cell-ID positioning. Whether a base station fence snapshot matched with the base station information exists is then searched for in the base station fence snapshot set; if a base station fence snapshot matched with the base station information is found in the base station fence snapshot set, it is determined that the electronic device is currently located in the target scene associated with the target service. If no base station fence snapshot matched with the base station information is found in the base station fence snapshot set, it is determined that the electronic device is not currently located in the target scene associated with the target service.
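The low-accuracy lookup just described can be sketched as an exact three-field match against the cached snapshot set. The field names (operator, lac, cellId) follow the examples in this document; the dictionary layout and sample values are illustrative assumptions, not the actual on-device schema.

```python
# Illustrative sketch of the low-accuracy base station fence match.

def match_base_station(snapshots, operator, lac, cell_id):
    """Return True if any cached base station fence snapshot matches the
    currently connected base station on all three fields exactly."""
    for snap in snapshots:
        if (snap["operator"] == operator
                and snap["lac"] == lac
                and snap["cellId"] == cell_id):
            return True
    return False

# Hypothetical cached snapshot set (values are made up).
snapshot_set = [
    {"operator": "46000", "lac": 22547, "cellId": 181202311},
    {"operator": "46001", "lac": 9041, "cellId": 52090881},
]

# Base station info obtained via Cell-ID positioning.
in_scene = match_base_station(snapshot_set, "46000", 22547, 181202311)
```

A hit means the device is in the target scene and, for example, the payment code shortcut card can be shown; a miss means the device keeps monitoring base station switches and retries.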
For example, the scene recognition accuracy of the regular payment service is low accuracy, that is, the target service in the present embodiment is the regular payment service. The current target operator identifier, target cell number, and target base station number of the electronic device are obtained through Cell-ID positioning. If a base station fence snapshot whose operator identifier (operator), location area code (lac), and cell identity (cellId) are the same as the current target operator identifier, target cell number, and target base station number of the electronic device is found in the base station fence snapshot set, it is determined that the electronic device is currently located in the target scene associated with the regular payment service. At this time, the payment code shortcut card is displayed in the display interface of the electronic device, so that the user can use the payment code shortcut card to realize quick payment.
For another example, if no base station fence snapshot whose operator, lac, and cellId are the same as the current target operator identifier, target cell number, and target base station number of the electronic device is found in the base station fence snapshot set, it is determined that the electronic device is not currently located in the target scene associated with the regular payment service. At this time, the payment code shortcut card is not displayed on the display interface of the electronic device. The electronic device will continue Cell-ID positioning and repeat the above process to determine whether the electronic device is within the target scene associated with the regular payment service.
In this implementation manner, for a service with low scene recognition precision, as long as the base station information of the electronic device is matched with a certain base station fence snapshot in the base station fence snapshot set, or as long as the base station information of the electronic device has an intersection with the base station fence snapshot set in the low-precision feature, the electronic device can be recognized to enter a service scene with low scene recognition precision, and the time delay is low, that is, the real-time performance is high. Based on the method, the shortcut card can be immediately recommended to the user, the intelligence of the electronic equipment is improved, the use requirement of the user is met, and the user experience is improved. If the scene recognition accuracy of the service represented by the service type information included in the base station fence snapshot is low accuracy, the base station fence snapshot can be said to conform to the low-accuracy feature.
In the low-precision business scene recognition process, on one hand, the power consumption of the electronic equipment for positioning by adopting the Cell-ID is smaller than that of GPS, so that the mode of matching the base station information of the electronic equipment with the base station fence snapshot set is adopted to recognize whether the electronic equipment enters a business scene with low scene recognition precision, and the power consumption is saved. On the other hand, the cloud service platform is not required to be requested in the service scene identification process, and the power consumption is also saved.
In one example, when the scene recognition accuracy of the service is low, in addition to determining whether the electronic device is located in a target scene associated with the target service according to the base station information of the base station to which the electronic device is currently connected and the base station fence snapshot set, it may be determined whether the electronic device is located in the target scene associated with the target service according to the current location information of the electronic device and the target area. The current position information of the electronic device is used for representing the current position of the electronic device, and the current position information of the electronic device can comprise longitude information and latitude information. The target area is an area corresponding to a destination for realizing the target service, and is determined by a center point of the destination and a radius corresponding to the destination. It will be appreciated that the destination typically appears as an area in a real environment, and thus has a corresponding center point and radius.
Illustratively, a target region is acquired. For example, a set of scene fence snapshots is stored in the electronic device, the set of scene fence snapshots including a plurality of scene fence snapshots. The monitored scene identification request of the target service can carry service type information, and one or more scene fence snapshots matched with the service type information can be determined from the scene fence snapshot set. Each scene fence snapshot comprises longitude information, latitude information and a scene fence radius, and a target area corresponding to each scene fence snapshot is determined according to the longitude information, the latitude information and the scene fence radius.
Optionally, in a possible implementation manner, each scene fence snapshot may further include a POI, and the POI data may be further carried in the scene identification request. If a plurality of scene fence snapshots matched with the service type information are determined in the scene fence snapshot set, a scene fence snapshot is determined again in the plurality of scene fence snapshots determined for the first time according to the POI data and the POIs included in the scene fence snapshots. And then, determining a target area corresponding to the scene fence snapshot according to the longitude information, the latitude information and the scene fence radius which are included in the finally determined scene fence snapshot.
The current position information of the electronic device can be obtained by GPS, or can be obtained by positioning piggyback (i.e., reusing a positioning result already generated by the system or another application). Whether the current position of the electronic device is within the target area is then judged according to the current position information of the electronic device and the target area. Colloquially, it can be understood that a point is determined by the longitude information and the latitude information of the electronic device, and whether the point is within the target area is judged. If the current position of the electronic device is within the target area, it is determined that the electronic device is currently located in the target scene associated with the target service. If the current position of the electronic device is not within the target area, it is determined that the electronic device is not currently located in the target scene associated with the target service.
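The point-in-area judgment can be sketched as a great-circle distance test against the fence center and radius. The haversine formula below is a standard approach; the function names are assumptions, and a production implementation may use a platform geodesy API instead.

```python
import math

# Sketch of "is the current position inside the target area": the target
# area is a circle given by a center (longitude, latitude) and a radius
# in meters, as described above.

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, meters

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance between two lon/lat points, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_target_area(lon, lat, center_lon, center_lat, radius_m):
    return haversine_m(lon, lat, center_lon, center_lat) <= radius_m

# A point roughly 110 m east of the center (coordinates are made up)
# falls inside a 200 m fence but outside a 50 m fence.
inside = in_target_area(116.3980, 39.9087, 116.3993, 39.9087, 200)
```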
In this implementation manner, when the base station information of the electronic device does not yet match any base station fence snapshot in the base station fence snapshot set, whether the current position of the electronic device is within the target area can still be judged through the current position information of the electronic device and the target area, so as to recognize whether the electronic device enters a service scene with low scene recognition accuracy, which provides support for the cold start of low-accuracy service scene recognition.
In one example, the scene recognition accuracy of the service is medium accuracy, and when a scene recognition request of the target service is monitored, base station information of a base station to which the electronic device is currently connected is acquired, and whether the electronic device is located in a target scene associated with the target service is determined according to the base station information of the base station to which the electronic device is currently connected, the acquired current position information of the electronic device, the base station fence snapshot set and the scene fence snapshot set. It should be noted that, the target service herein refers to a service with a scene recognition accuracy of medium accuracy. The electronic device stores a base station fence snapshot set and a scene fence snapshot set, wherein the base station fence snapshot set comprises a plurality of base station fence snapshots, and the scene fence snapshot set comprises a plurality of scene fence snapshots.
The base station information of the electronic device is obtained through Cell-ID positioning, whether a base station fence snapshot matched with the base station information exists or not is searched in the base station fence snapshot set, and if the base station fence snapshot matched with the base station information is searched in the base station fence snapshot set, a service list included in the base station fence snapshot is obtained. The service list comprises service type information and scene fence identifications corresponding to the service type information, and one or more scene fence snapshots can be found in the scene fence snapshot set according to the scene fence identifications. According to the longitude information and the latitude information contained in the scene fence snapshot, a center point can be determined, and then a fence area corresponding to the scene fence snapshot is determined according to the center point and the scene fence radius contained in the scene fence snapshot. The method comprises the steps of obtaining current position information of the electronic equipment in a GPS mode, judging whether the current position of the electronic equipment is in a fence area according to the current position information of the electronic equipment and the fence area corresponding to the scene fence snapshot, and accordingly determining whether the electronic equipment is located in a target scene associated with target business.
Under the condition that one scene fence snapshot is found in the scene fence snapshot set according to the scene fence identification, determining a fence area corresponding to the scene fence snapshot according to longitude information, latitude information and scene fence radius contained in the scene fence snapshot. And acquiring the position information of the electronic equipment, determining the position of the electronic equipment based on the position information, judging whether the current position of the electronic equipment is in the fence area, and if the current position of the electronic equipment is in the fence area, determining that the electronic equipment is in a target scene associated with a target service. And if the current position of the electronic equipment is not in the fence area, determining that the electronic equipment is not in a target scene associated with the target service.
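The two-stage medium-accuracy flow above (base station match first, geofence check only afterwards) can be sketched as follows. The field names, the fence-identifier linkage between the two snapshot sets, and the use of a flat local frame with Euclidean distance are simplifying assumptions; real code would use longitude/latitude with a geodesic distance as described above.

```python
import math

def identify_medium_accuracy(bs_info, bs_snapshots, scene_snapshots,
                             position, distance_m):
    """Stage 1: exact match on (operator, lac, cellId).
    Stage 2: only if stage 1 hits, test whether the position lies inside
    a fence area referenced by the matched base station fence snapshot."""
    for bs in bs_snapshots:
        if (bs["operator"], bs["lac"], bs["cellId"]) != bs_info:
            continue
        for fence_id in bs["fenceIds"]:
            fence = scene_snapshots[fence_id]
            if distance_m(position, fence["center"]) <= fence["radius"]:
                return True
    return False

def euclid_m(a, b):
    # Stand-in for a real geodesic distance (local metric frame).
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical cached data (values are made up).
bs_snaps = [{"operator": "46000", "lac": 22547, "cellId": 181202311,
             "fenceIds": ["mall_01"]}]
scene_snaps = {"mall_01": {"center": (0.0, 0.0), "radius": 150}}

hit = identify_medium_accuracy(("46000", 22547, 181202311),
                               bs_snaps, scene_snaps, (30.0, 40.0),
                               euclid_m)  # 50 m from center, inside
```

The key power-saving property is visible in the control flow: the position (GPS) check runs only after the cheap Cell-ID match succeeds.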
Optionally, in one possible implementation manner, when it is determined that the electronic device is located in the target scene associated with the target service, a shortcut card related to the target service (the service with the scene recognition precision being the middle precision) is displayed in the display interface of the electronic device, so that the user can use the shortcut card to implement the target service. When the electronic equipment is determined not to be located in the target scene associated with the target service, the shortcut card is not displayed in the display interface of the electronic equipment. The electronic device may repeat the process of determining whether the electronic device is located within a target scene associated with the target service.
In a case where a plurality of scene fence snapshots are found in the scene fence snapshot set according to the scene fence identification, if POI data is detected, the scene fence snapshot matched with the POI data is found among the plurality of scene fence snapshots according to the POI data. For example, a scene fence snapshot that contains the same POI as the POI data is found among the plurality of scene fence snapshots. The fence area corresponding to the found scene fence snapshot is determined, and whether the current position of the electronic device is within the fence area is judged according to the acquired position information of the electronic device. In this way, the fence area of the scene fence snapshot related to the target service can be rapidly determined, and the current position of the electronic device does not need to be compared with every fence area, so that the service scene recognition speed is improved.
Under the condition that a plurality of scene fence snapshots are found in the scene fence snapshot set according to the scene fence identification, if POI data are not detected, determining a fence area corresponding to each scene fence snapshot according to longitude information, latitude information and scene fence radius contained in each scene fence snapshot. And if the current position of the electronic equipment is determined to be in any fence area according to the acquired position information of the electronic equipment, namely, the electronic equipment is determined to be positioned in a target scene associated with the target service, the applicability is wider.
In the implementation manner, for a service with the scene recognition precision being medium precision, when the base station information of the electronic equipment is matched with a certain base station fence snapshot in the base station fence snapshot set, and the current position of the electronic equipment is in a fence area corresponding to the certain scene fence snapshot (the scene fence snapshot determined by the scene fence mark in the base station fence snapshot), the electronic equipment can be recognized to enter a service scene with the scene recognition precision being medium precision, and the real-time performance is high. Based on the method, the shortcut card can be immediately recommended to the user, the intelligence of the electronic equipment is improved, the use requirement of the user is met, and the user experience is improved. And when the base station information of the electronic equipment is matched with a certain base station fence snapshot in the base station fence snapshot set, detecting whether the current position of the electronic equipment is in a fence area corresponding to a certain scene fence snapshot (the scene fence snapshot determined by the scene fence identification in the base station fence snapshot), or starting middle-precision business scene identification as long as the base station information of the electronic equipment is intersected with the base station fence snapshot set in the middle-precision feature, and effectively avoiding the waste of power consumption of the middle-precision business scene identification. If the scene recognition accuracy of the service represented by the service type information included in the base station fence snapshot is middle accuracy, the base station fence snapshot can be said to conform to the middle accuracy characteristic.
In the medium-accuracy service scene recognition process, on one hand, Cell-ID positioning is adopted in the early stage (that is, before a base station fence snapshot is matched), so GPS positioning is not needed throughout the entire process, which saves power consumption. On the other hand, the cloud service platform does not need to be requested during service scene identification, which also saves power consumption.
In one example, the scene recognition accuracy of the service is high, when a scene recognition request of the target service is monitored, base station information of a base station currently accessed by the electronic device is obtained, and whether the electronic device is located in a target scene associated with the target service is determined according to the base station information of the base station currently accessed by the electronic device, the obtained current position information of the electronic device, a WiFi list currently corresponding to the electronic device, a base station fence snapshot set and the scene fence snapshot set. The WiFi list may include at least one WiFi identification information and a WiFi intensity corresponding to each WiFi identification information, where the WiFi identification information may include WiFi physical address information and a WiFi name. The target service herein refers to a service with high scene recognition accuracy. The electronic equipment stores a base station fence snapshot set and a scene fence snapshot set, wherein the base station fence snapshot set comprises a plurality of base station fence snapshots, the scene fence snapshot set comprises a plurality of scene fence snapshots, and the scene fence snapshots comprise WiFi features.
For example, base station information of the electronic device is obtained through Cell-ID positioning, and whether a base station fence snapshot matched with the base station information exists is searched for in the base station fence snapshot set. If a base station fence snapshot matched with the base station information is found in the base station fence snapshot set, the scene fence snapshot is determined through the scene fence identifier in the base station fence snapshot. The fence area corresponding to the scene fence snapshot is determined according to the longitude information, the latitude information, and the scene fence radius contained in the scene fence snapshot. The current position information of the electronic device is obtained by GPS, and whether the current position of the electronic device is within the fence area is judged according to the current position information of the electronic device and the fence area corresponding to the scene fence snapshot. If the current position of the electronic device is within the fence area, the current WiFi list of the electronic device is acquired. Whether the WiFi list matches the WiFi features in the scene fence snapshot is then judged; if the WiFi list matches the WiFi features in the scene fence snapshot, it is determined that the electronic device is located in the target scene associated with the target service. If the WiFi list does not match the WiFi features in the scene fence snapshot, it is determined that the electronic device is not located in the target scene associated with the target service.
Judging whether the WiFi list matches the WiFi features in the scene fence snapshot may comprise: determining that the WiFi list matches the WiFi features in the scene fence snapshot when the WiFi identification information in the WiFi list, and the WiFi intensity corresponding to each piece of WiFi identification information, are the same as the WiFi identification information in the WiFi identification information list of the WiFi features and its corresponding WiFi intensities.
Determining whether the WiFi list is matched with the WiFi features in the scene fence snapshot or not can further comprise determining a matching degree threshold corresponding to the current WiFi list of the electronic device, and determining that the WiFi list is matched with the WiFi features in the scene fence snapshot when the similarity between WiFi identification information in the WiFi list and WiFi identification information in the WiFi identification information list in the WiFi features is greater than or equal to a preset similarity threshold, and the matching degree threshold corresponding to the WiFi list is greater than or equal to a target matching degree threshold in the WiFi features. The preset similarity threshold may be set to 50%, 60%, 70%, etc., and the process of determining the matching threshold corresponding to the current WiFi list may refer to the related description in the process of determining the WiFi feature, which is not described herein.
Judging whether the WiFi list is matched with the WiFi features in the scene fence snapshot or not can further comprise determining intensity matching degree corresponding to each WiFi intensity in the WiFi list. And when the similarity between the WiFi identification information in the WiFi list and the WiFi identification information in the WiFi identification information list in the WiFi feature is larger than or equal to a preset similarity threshold value, and the intensity matching degree corresponding to each WiFi intensity in the WiFi list is larger than or equal to the intensity matching degree corresponding to each WiFi intensity in the WiFi feature, determining that the WiFi list is matched with the WiFi feature in the scene fence snapshot. The process of determining the strength matching degree corresponding to each WiFi strength in the WiFi list may refer to the foregoing description in the process of determining the WiFi feature, which is not repeated herein. The three modes for judging whether the WiFi list is matched with the WiFi features in the scene fence snapshot improve the accuracy of the matching result and are favorable for improving the accuracy of service scene identification.
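The similarity-based variant (the second mode above) can be sketched as the fraction of scanned WiFi identifiers that also appear in the snapshot's WiFi identification information list. The 0.6 default threshold, the BSSID strings, and the function names are illustrative assumptions; as noted above, the preset similarity threshold is configurable (e.g., 50%, 60%, 70%).

```python
# Illustrative sketch of the similarity-based WiFi feature match.

def wifi_similarity(scanned_bssids, snapshot_bssids):
    """Fraction of currently scanned identifiers that are also present
    in the scene fence snapshot's WiFi identification list."""
    if not scanned_bssids:
        return 0.0
    hits = sum(1 for b in scanned_bssids if b in snapshot_bssids)
    return hits / len(scanned_bssids)

def wifi_match(scanned_bssids, snapshot_bssids, threshold=0.6):
    return wifi_similarity(scanned_bssids, snapshot_bssids) >= threshold

# Hypothetical scan result and snapshot feature (values are made up).
scan = ["aa:01", "aa:02", "aa:03", "bb:09"]
feature = {"aa:01", "aa:02", "aa:03", "aa:04"}

matched = wifi_match(scan, feature)  # similarity 3/4 = 0.75 >= 0.6
```

A full implementation would additionally compare per-identifier WiFi intensities (the first and third modes above); this sketch shows only the identifier-overlap criterion.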
In the implementation manner, for the service with high scene recognition precision, when the base station information of the electronic equipment is matched with a certain base station fence snapshot in the base station fence snapshot set, the current position of the electronic equipment is in a fence area corresponding to a certain scene fence snapshot (a scene fence snapshot determined by a scene fence mark in the base station fence snapshot), and the WiFi list is matched with the WiFi feature in the scene fence snapshot, the electronic equipment can be recognized to enter a service scene with high scene recognition precision, and the real-time performance is high. Based on the method, the shortcut card can be immediately recommended to the user, the intelligence of the electronic equipment is improved, the use requirement of the user is met, and the user experience is improved. And as long as the base station information of the electronic equipment and the snapshot set of the base station fence in the high-precision characteristic have an intersection, the high-precision service scene recognition is started, and the power consumption waste of the high-precision service scene recognition is effectively avoided. If the scene recognition accuracy of the service represented by the service type information included in the base station fence snapshot is high accuracy, the base station fence snapshot can be said to conform to the high-accuracy feature.
In the high-precision business scene recognition process, on one hand, cell-ID positioning is adopted in the early stage (namely, cell-ID positioning is adopted before the scene fence snapshot is matched), GPS positioning and scanning WiFi are adopted in the later stage, and power consumption is saved while the recognition precision is kept. On the other hand, the cloud service platform is not required to be requested in the service scene identification process, and the power consumption is also saved.
Optionally, in one possible implementation manner, when the scene recognition accuracy of the service is high, if the WiFi list is detected to be matched with the WiFi feature in a certain scene fence snapshot, the electronic device can be directly recognized to enter the service scene with the scene recognition accuracy being high, without matching the base station fence snapshot and the scene fence snapshot. By the implementation mode, the real-time performance of service scene identification is improved, and the power consumption required by early positioning is saved.
Optionally, in one possible implementation, when the scene recognition accuracy of the service is high accuracy, if it is detected that the WiFi list does not match the WiFi features of the target scene fence snapshot (the scene fence snapshot determined by the scene fence identifier in the matched base station fence snapshot), a WiFi scanning result may be obtained by the WiFi piggyback technology. Obtaining a WiFi scanning result by the WiFi piggyback technology refers to acquiring a WiFi scanning result already generated by the system or a third-party application program, where the WiFi scanning result may include a WiFi list. If it is detected that the WiFi list generated by the system or the third-party application program matches the WiFi features of the target scene fence snapshot, it is determined that the electronic device is located in the target scene. In this implementation, when the WiFi list and the WiFi features do not intersect, the WiFi-related data is acquired by the WiFi piggyback technology without performing a separate WiFi scan, which effectively saves power consumption.
The above describes how to identify that the electronic device enters the target scene associated with the target service; the following describes how to identify that the electronic device leaves the target scene associated with the target service.
In one example, when the scene recognition accuracy of the service is low, it may be determined that the electronic device is located within a target scene associated with the target service by the base station information of the electronic device matching a certain base station fence snapshot in the set of base station fence snapshots. After the electronic equipment enters the target scene, if the base station information of the electronic equipment is no longer matched with the base station fence snapshot, the electronic equipment can be identified to leave the target scene, and the implementation mode is high in real-time performance.
Optionally, in one possible implementation, after the electronic device enters the target scene, if its base station information no longer matches the base station fence snapshot, the location information of the electronic device is obtained, for example by GPS. Whether the electronic device has truly left the target scene is then judged from its current location and the target area: if the current position is not within the target area, it is determined that the device has truly left the target scene; if the current position is still within the target area, it is determined that the device has not yet left. This implementation effectively avoids misrecognition and improves the accuracy of service scene recognition, i.e., it accurately judges whether the electronic device has truly left the target scene associated with the target service, providing a better user experience.
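As an illustrative, non-limiting sketch of the leave-scene verification just described, the check can be expressed as follows, assuming circular target areas and a haversine great-circle distance; all function and parameter names here are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def has_truly_left(cur_lat, cur_lon, area_lat, area_lon, area_radius_m):
    """The device is judged to have truly left the target scene only if its
    current position lies outside the circular target area."""
    return haversine_m(cur_lat, cur_lon, area_lat, area_lon) > area_radius_m
```

For example, a device positioned about 1 km from the center of a 500-meter target area is judged to have left, while one about 100 meters away is not.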
In one example, when the scene recognition accuracy of the service is medium, the electronic device is determined to have entered a service scene of medium recognition accuracy when its base station information matches a certain base station fence snapshot in the base station fence snapshot set and its current position is within the fence area corresponding to a certain scene fence snapshot (the scene fence snapshot determined by the scene fence identifier in that base station fence snapshot). After the electronic device enters the service scene, if its position is detected to be no longer within the fence area, it is determined that the device has truly left the service scene. This implementation effectively avoids misrecognition and improves the accuracy of service scene recognition, i.e., it accurately judges whether the electronic device has truly left the service scene, providing a better user experience.
In one example, when the scene recognition accuracy of the service is high, the electronic device is recognized as having entered the target scene when its base station information matches a certain base station fence snapshot in the base station fence snapshot set, its current position is within the fence area corresponding to a certain scene fence snapshot (the scene fence snapshot determined by the scene fence identifier in that base station fence snapshot), and the WiFi list matches the WiFi features in the scene fence snapshot. After the electronic device enters the target scene, if the WiFi list is detected to no longer match the WiFi features in the scene fence snapshot, the device can be identified as having left the target scene. This implementation offers high real-time performance.
Optionally, in one possible implementation, after the electronic device enters the target scene, if the WiFi list is detected to no longer match the WiFi features in the scene fence snapshot, the location information of the electronic device is obtained. Whether the electronic device has truly left the target scene is then judged from its current location and the fence area: if the current position is not within the fence area, it is determined that the device has truly left the target scene; if the current position is still within the fence area, it is determined that the device has not yet left. This implementation effectively avoids misrecognition and improves the accuracy of service scene recognition, i.e., it accurately judges whether the electronic device has truly left the target scene associated with the target service, providing a better user experience.
The embodiment of the application also provides a method for predicting the time of the next positioning according to the current motion state of the user, described below. As an example of the present application, please refer to fig. 16, which is a flowchart illustrating a method for predicting the time of the next positioning according to an exemplary embodiment of the present application. The method may include:
S901, acquiring the current motion state of the user.
The current posture of the user can be analyzed through sensors in the electronic device, such as the acceleration sensor and the gyroscope sensor, combined with a network model capable of posture learning, so as to obtain the current motion state of the user. The current motion state may include a walking state, a running state, a fast-walking state, an in-vehicle state, and so on.
S902, determining the current first movement speed of the user according to the movement state.
The current first movement speed of the user is estimated from the current motion state. A correspondence between different motion states and different movement speeds may be established in advance, and the current movement speed determined from that correspondence. For example, the first movement speed may be 1 m/s when the motion state is the walking state, and 10 m/s when the motion state is the in-vehicle state. The current first movement speed may also be determined from at least two positionings: for example, obtain the time interval between two positionings and the distance between the two resulting positions, and determine the current first movement speed from the distance and the interval. This is merely illustrative and is not limiting.
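The two ways of obtaining the first movement speed described above can be sketched as follows; in the state-to-speed table, only the walking (1 m/s) and in-vehicle (10 m/s) values come from the text, and the rest are assumed:

```python
# Pre-established correspondence between motion states and movement speeds (m/s).
# Only "walking" and "in_vehicle" values are given in the text; others are assumed.
STATE_SPEED_M_PER_S = {
    "walking": 1.0,
    "fast_walking": 2.0,   # assumed value
    "running": 3.0,        # assumed value
    "in_vehicle": 10.0,
}

def speed_from_state(motion_state):
    """First way: look the speed up in the pre-established correspondence."""
    return STATE_SPEED_M_PER_S[motion_state]

def speed_from_two_fixes(distance_m, interval_s):
    """Second way: distance between two positioning fixes over their time interval."""
    return distance_m / interval_s
```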
Alternatively, in one possible implementation, the user's current first movement speed may be refreshed based on historical positionings during the user's travel. For example, obtain the time interval between a historical position and the latest position and the distance between those two positions, and refresh the current first movement speed from the distance and the interval. This ensures that the determined movement speed is more accurate, so the time of the next positioning can be determined accurately, reducing the number of positionings and saving power.
S903, determining a destination for realizing the target service according to the target service, and determining a first distance between the current position of the user and the destination.
In one example, determining the destination for realizing the target service according to the target service may include the following. The monitored scene recognition request of the target service may carry service type information and POI data, where the service type information represents the service type of the target service and the POI data represents the destination. According to the service type information and the POI data, a scene fence snapshot can be determined from the scene fence snapshot set. The scene fence snapshot includes longitude information, latitude information, and a scene fence radius, which together define a fence area. This fence area is the target area corresponding to the destination: the longitude and latitude information gives the center point of the target area, and the scene fence radius gives its radius.
If the base station information of the electronic device currently matches no base station fence snapshot in the base station fence snapshot set, that is, no snapshot matching the base station to which the electronic device is currently connected is found in the set, the current position of the user is obtained, i.e., the current location information of the user is acquired. It can be understood that while the user carries the electronic device, the position of the device is almost identical to the position of the user, so obtaining the current position of the electronic device amounts to obtaining the current position of the user.
The distance between the current position of the user and the destination center can be calculated from the latitude and longitude of the current position and the latitude and longitude of the center point corresponding to the destination; subtracting the scene fence radius (i.e., the radius of the target area corresponding to the destination) from this distance yields the first distance between the current position of the user and the destination.
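A minimal sketch of this first-distance calculation — the great-circle distance between the user and the destination center minus the scene fence radius, clamped at zero (all names are hypothetical, and the haversine formula is one common way to compute the distance from latitude/longitude):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def first_distance_m(user_lat, user_lon, dest_lat, dest_lon, scene_fence_radius_m):
    """Distance from the user to the edge of the destination's target area."""
    center_dist = haversine_m(user_lat, user_lon, dest_lat, dest_lon)
    return max(center_dist - scene_fence_radius_m, 0.0)  # 0 when already inside
```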
For ease of understanding, please refer to fig. 17, a schematic diagram of an application scenario of predicting time according to an exemplary embodiment of the present application. As shown in (a) of fig. 17, S4 represents the current location of the user; here, the base station information of the electronic device matches no base station fence snapshot in the set. The destination has a corresponding target area (fence area), within which the location information of the electronic device may match a certain scene fence snapshot. It will be appreciated that as soon as the user moves to the edge of the target area, the electronic device can be identified as entering the target scene; therefore, the scene fence radius needs to be subtracted when calculating the first distance.
Optionally, in one possible implementation, if the base station information of the electronic device currently matches a certain base station fence snapshot in the base station fence snapshot set, the latitude and longitude of the base station fence center point and the base station fence radius are obtained from that snapshot. The latitude and longitude of the base station fence center point can then represent the current position of the user: the distance between that center point and the center point corresponding to the destination is calculated from their latitude and longitude, and subtracting the scene fence radius (i.e., the radius of the target area corresponding to the destination) from this distance yields the first distance between the current position of the user and the destination.
For ease of understanding, please refer to (b) in fig. 17: the left circular area represents the base station fence corresponding to the base station fence snapshot matching the base station information of the electronic device, and S5 represents the current location of the user as determined from that snapshot. The right circular area represents the target area (fence area) corresponding to the destination, within which the location information of the electronic device may match a certain scene fence snapshot. It will be appreciated that as soon as the user moves to the edge of the target area, the electronic device can be identified as entering the target scene; likewise, the scene fence radius needs to be subtracted when calculating the first distance.
S904, predicting the time of the next positioning according to the user's current first movement speed and the first distance.
The quotient of the first distance and the current first movement speed is the time until the next positioning. For example, if the destination is the user's company, the first distance between the current position and the company is calculated to be 2000 meters, and the current first movement speed is 1 m/s, the user will reach the target area corresponding to the company after 2000 seconds, so the next positioning is predicted to occur 2000 seconds from now. The user need not be positioned by GPS during travel; instead, the latest position is acquired by a single positioning after 2000 seconds, and whether the user has entered the target scene can be judged from that latest position.
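The S904 prediction is a single quotient; a sketch using the worked example from the text (2000 m remaining at 1 m/s), with hypothetical names:

```python
def next_positioning_delay_s(remaining_distance_m, speed_m_per_s):
    """Time until the next positioning: remaining distance over movement speed."""
    return remaining_distance_m / speed_m_per_s

# Worked example from the text: 2000 m to the company at 1 m/s -> position again
# after 2000 seconds, with no GPS fixes needed in between.
delay = next_positioning_delay_s(2000.0, 1.0)
```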
It should be noted that the methods in S901 to S904 above may be applied to the recognition of service scenes with different scene recognition precisions.
In this implementation, the movement speed is estimated from the user's current motion state, and the time of the next positioning is then predicted from that speed and the distance between the user and the destination, so continuous GPS positioning during travel is unnecessary, which reduces the number of positionings and saves power.
The embodiment of the application also provides a method for refreshing the time of the next positioning. During the user's travel, the electronic device continuously performs Cell-ID positioning. If at some moment the base station information of the base station currently accessed by the electronic device matches a certain base station fence snapshot, the latitude and longitude of the base station fence center point and the base station fence radius are obtained from that snapshot. A second distance between the current position of the user and the destination is determined from the center point's latitude and longitude and the base station fence radius, and a second movement speed of the user is predicted from the user's current motion state. The second distance is calculated in the same way as the first distance, and the second movement speed is predicted in the same way as the first movement speed, so the details are not repeated here. The time of the next positioning is then updated according to the second movement speed and the second distance.
For ease of understanding, please refer to fig. 18, a schematic diagram of another application scenario of predicting time according to an exemplary embodiment of the present application. During travel, the user may match multiple base station fence snapshots, which may be snapshots the user has matched before or snapshots newly matched during travel. Each base station fence snapshot appears in fig. 18 as a base station fence, and the right-most circular area represents the target area (fence area) corresponding to the destination, within which the location information of the electronic device may match a certain scene fence snapshot.
For example, the destination is the user's company. As shown in fig. 18, the user's starting point is within the first base station fence, that is, at the starting point the electronic device matches the first base station fence snapshot. At this time, the distance between the current position and the company is calculated to be 2000 meters and the current movement speed is 1 m/s, so the user will reach the target area corresponding to the company after 2000 seconds, and the next positioning is scheduled 2000 seconds later. The user then walks into the second base station fence, i.e., the electronic device matches the second base station fence snapshot. Now the distance to the company is calculated to be 1500 meters and the current movement speed is still 1 m/s, so the user will reach the target area after 1500 seconds, and the time of the next positioning is refreshed, i.e., the next positioning is determined to occur 1500 seconds later.
If the user changes motion state during travel, the movement speed changes, and the time of the next positioning is refreshed according to the changed speed and the remaining distance. For example, when the user moves into the third base station fence, i.e., the electronic device matches the third base station fence snapshot, the distance to the company is calculated to be 1000 meters and the current movement speed is 10 m/s, so the user will reach the target area after 100 seconds, and the time of the next positioning is refreshed, i.e., the next positioning is determined to occur 100 seconds later. This continues until the user travels into the target area, at which point the electronic device is identified as having entered the target scene.
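The refresh sequence walked through above (2000 m at 1 m/s, then 1500 m at 1 m/s, then 1000 m at 10 m/s after the motion state changes) can be sketched as follows, with each newly matched base station fence contributing one (remaining distance, speed) observation; the data structure is hypothetical:

```python
def refresh_schedule(fence_observations):
    """fence_observations: successive (remaining_distance_m, speed_m_per_s)
    pairs obtained each time the device matches a base station fence snapshot.
    Returns the successively refreshed next-positioning delays, in seconds."""
    return [distance / speed for distance, speed in fence_observations]

# The example from the text: three matched fences, the speed rising to 10 m/s
# (e.g., the user boards a vehicle) at the third fence.
delays = refresh_schedule([(2000.0, 1.0), (1500.0, 1.0), (1000.0, 10.0)])
# delays -> [2000.0, 1500.0, 100.0]
```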
In this implementation, the time of the next positioning is continuously refreshed according to the matched base station fence snapshots during the user's travel, with no need for GPS positioning throughout. The accuracy of service scene recognition is maintained while the number of positionings is greatly reduced, lowering power consumption. Moreover, as scene-feature learning accumulates and the scene feature data becomes more complete, fewer and fewer GPS positionings are needed wherever the user later goes, and the overall power saved in realizing service scene recognition grows larger and larger.
The method for refreshing the time of the next positioning according to the embodiment of the present application is described above; some power-saving methods according to the embodiment of the present application are described below. It is worth noting that these power-saving methods can be applied to the recognition of service scenes with different scene recognition precisions.
In one example, the electronic device stops positioning and/or stops scanning WiFi when it detects that the user has stopped moving or is stationary. For example, the user's current posture can be analyzed through sensors in the electronic device, such as the acceleration sensor and the gyroscope sensor, combined with a network model capable of posture learning, to determine whether the user is currently stationary. When the user is determined to have stopped moving, there is temporarily no need to re-identify whether the electronic device is located within the target scene, so the device may stop positioning and/or stop scanning WiFi, which effectively saves power.
In one example, the electronic device stops positioning and/or stops scanning WiFi when the user's movement range is detected to be less than or equal to a preset range. The movement range may be represented by a step count: for example, the user's movement range is less than or equal to the preset range when the number of steps the user has currently walked is less than or equal to a preset step count (e.g., 40 steps). The step count may be obtained by a pedometer in the electronic device. When the movement range is within the preset range, there is temporarily no need to re-identify whether the electronic device is located within the target scene, so the device may stop positioning and/or stop scanning WiFi, which effectively saves power.
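A sketch of the step-count gate described above, using the 40-step threshold given as an example in the text (the function name is hypothetical):

```python
PRESET_STEP_COUNT = 40  # example threshold from the text

def should_suspend_scanning(steps_walked):
    """Suspend positioning and/or WiFi scanning while the user's movement range,
    approximated by the pedometer step count, stays within the preset range."""
    return steps_walked <= PRESET_STEP_COUNT
```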
Optionally, the embodiment of the application further provides a method for switching the positioning mode, described in detail below. Referring to fig. 19, fig. 19 is a schematic diagram illustrating switching of the positioning mode according to an exemplary embodiment of the present application.
In one example, as shown in fig. 19, the area in which the electronic device may be located is divided into an irrelevant area, a low-correlation area, and a high-correlation area. Using Cell-ID positioning is referred to as the base station scanning mode, which is used when the electronic device is in the irrelevant area. As shown in fig. 19, when service scene recognition is started, the electronic device adopts the base station scanning mode; if the base station signal does not match the low-correlation area at this time, the electronic device is determined to be currently in the irrelevant area. It can be understood that, at this time, the base station information of the base station to which the electronic device is currently connected matches no base station fence snapshot, so the electronic device is determined to be currently in the irrelevant area.
If the base station signal matches the low-correlation area but the WiFi signal does not match the high-correlation area, the electronic device is determined to be currently in the low-correlation area. It can be understood that, at this time, the base station information of the currently accessed base station matches a base station fence snapshot but the WiFi list of the electronic device does not match the WiFi features, so the device is determined to be in the low-correlation area. When the electronic device is in the low-correlation area, the network scanning mode is adopted, for example positioning by GPS. Optionally, when the electronic device is in the low-correlation area, if the base station signal no longer matches the low-correlation area, the device is determined to be currently in the irrelevant area.
When the electronic device is in the high-correlation area, the online positioning mode is adopted. For example, the base station information of the base station to which the electronic device is currently connected matches a base station fence snapshot and the WiFi list of the electronic device also matches the WiFi features, so the device is determined to be currently in the high-correlation area. Optionally, when the electronic device is in the high-correlation area, it may again be determined from the online positioning result whether the device is currently in the high-correlation area, the low-correlation area, or the irrelevant area. For example, if the online positioning result shows that the base station information of the currently connected base station matches a base station fence snapshot and the WiFi list also matches the WiFi features, the device is determined to still be in the high-correlation area. For another example, if the online positioning result shows that the base station signal matches the low-correlation area but the WiFi signal does not match the high-correlation area, the device is determined to be currently in the low-correlation area. For another example, if the online positioning result shows that the base station signal does not match the low-correlation area, the device is determined to be currently in the irrelevant area.
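The three-area classification that drives the mode switch can be sketched as a simple decision over two boolean observations (names hypothetical): whether the base station information matches some base station fence snapshot, and whether the WiFi list matches the WiFi features of a scene fence snapshot:

```python
def classify_area(base_station_matched, wifi_matched):
    """Map the two match results to the area, which selects the positioning mode."""
    if base_station_matched and wifi_matched:
        return "high_correlation"   # -> online positioning mode
    if base_station_matched:
        return "low_correlation"    # -> network scanning mode (e.g., GPS)
    return "irrelevant"             # -> base station scanning mode (Cell-ID)
```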
In this embodiment, the base station scanning mode, the network scanning mode, and the online positioning mode are switched flexibly, so that it is accurately judged whether the electronic device is in the irrelevant, low-correlation, or high-correlation area. This improves the flexibility and real-time performance of judging the area where the electronic device is located, which in turn safeguards the accuracy and real-time performance of service scene recognition.
Finally, the power-saving aspects of the service scene recognition method provided by the embodiments of the application are briefly summarized as follows:
Through verification, the power consumption required for indoor/outdoor identification is 0.1 mAh per operation, for WiFi feature matching 0.05 mAh per operation, for GPS positioning 0.05 mAh per operation, and for Cell-ID positioning 0.005 mAh per operation, while the power consumption of cellular feature matching (e.g., matching base station information against base station fence snapshots) is negligible. Correspondingly, indoor/outdoor identification has no error, WiFi feature matching has an error of 5-50 meters, GPS positioning 10-15 meters, Cell-ID positioning 100-200 meters, and cellular feature matching 400-800 meters.
The service scene recognition method provided by the application is mainly realized by combining cellular feature matching, Cell-ID positioning, GPS positioning, and WiFi feature matching. In most cases, cellular feature matching and Cell-ID positioning are primary, with GPS positioning and WiFi feature matching as auxiliary; the former require extremely little power, while the latter have small errors, so the power consumption required for service scene recognition is greatly reduced while its accuracy is ensured.
According to the embodiment of the application, scene crowdsourced data is collected so as to learn the scene features corresponding to different services. With continuous learning, the learned scene features become increasingly complete; when service scene recognition is performed based on these features, recognition can in most cases be achieved through cellular feature matching and Cell-ID positioning, reducing the number of GPS positionings and steadily lowering power consumption. Meanwhile, as the learned scene features become more complete, wherever the user later moves, whether the user has entered the service scene can be quickly identified through the scene features, improving real-time performance.
For ease of understanding, please refer to fig. 20, a schematic diagram illustrating the variation of power consumption and real-time performance according to an exemplary embodiment of the present application. As shown in (a) of fig. 20, the required power consumption was 2.5 mAh/day at cold start, i.e., before scene features were learned, and fell to 1.5 mAh/day after scene feature learning. As shown in (b) of fig. 20, the recognition latency was 30 seconds at cold start, i.e., before scene features were learned, and 3 seconds after scene feature learning.
The embodiment of the application also estimates the movement speed from the user's current motion state and then predicts the time of the next positioning from that speed and the distance between the user and the destination. Throughout this process, continuous GPS positioning is unnecessary, greatly reducing the number of positionings and lowering power consumption.
The embodiment of the application also continuously refreshes the time of the next positioning according to the matched base station fence snapshots during the user's travel, with no need for GPS positioning throughout. While the accuracy of service scene recognition is maintained, the numbers of positionings and WiFi scans are greatly reduced, lowering power consumption. Moreover, as scene-feature learning accumulates and the scene feature data becomes more complete, fewer and fewer GPS positionings are needed wherever the user later goes, and the overall power saved in realizing service scene recognition grows larger and larger.
The embodiment of the application also provides a WiFi chip that can be installed in the electronic device, whose scanning power consumption is much lower than that of existing WiFi chips. Experiments show that the scanning power consumption of the WiFi chip provided by the application is only one tenth of that of existing WiFi chips.
In one example, when the WiFi chip provided by the application scans for WiFi, it also parses data packets that would be discarded in the prior art, so more results are obtained in a single scan. The WiFi chip provided by the application can also perform multichannel parallel scanning, greatly improving the coverage of the WiFi scan result. When the same number of scan results is needed, the scanning frequency of the WiFi chip is significantly reduced, which reduces scanning power consumption.
In one example, since the power consumption required for a WiFi chip to scan the 5 GHz band is far greater than that required to scan the 2.4 GHz band, the WiFi chip provided by the application can scan only the 2.4 GHz band, greatly reducing scanning power consumption.
Examples of service scene recognition provided by the embodiments of the present application are described in detail above. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application may divide the electronic device into functional modules according to the above method examples. For example, each function may be assigned its own functional module, such as a monitoring unit, an acquisition unit, a processing unit, a display unit, and so on, or two or more functions may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that, in the embodiment of the present application, the division of modules is schematic and merely a logical functional division; other division manners may be used in actual implementation.
It should be noted that, for all relevant details of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here.
The electronic device provided in this embodiment is configured to perform the above service scenario recognition, so that the same effect as that of the above implementation method can be achieved.
In the case where an integrated unit is employed, the electronic device may further include a processing module, a storage module, and a communication module. The processing module can be used to control and manage the actions of the electronic device. The storage module can be used to support the electronic device in storing and executing program code, data, and so on. The communication module can be used to support communication between the electronic device and other devices.
Wherein the processing module may be a processor or a controller. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. A processor may also be a combination that performs computing functions, e.g., including one or more microprocessors, digital Signal Processing (DSP) and a combination of microprocessors, and the like. The memory module may be a memory. The communication module can be a radio frequency circuit, a Bluetooth chip, a WiFi chip and other equipment which interact with other electronic equipment.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 7.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor, causes the processor to execute the service scene recognition method of any of the above embodiments.
The embodiment of the application also provides a computer program product, which when running on a computer, causes the computer to execute the related steps so as to realize the business scene recognition method in the embodiment.
In addition, the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module. The apparatus may include a processor and a memory connected to each other, where the memory is used to store computer-executable instructions; when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the service scene recognition method in the above method embodiments. Optionally, the WiFi chip provided by the present application may be integrated into the chip.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
The foregoing is merely a specific implementation of the present application, and the protection scope of the present application is not limited thereto. Any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A business scenario recognition method, characterized in that it is applied to an electronic device, the method comprising:
When a scene identification request of a target service is monitored, base station information of a base station currently accessed by electronic equipment and scene identification precision of the target service are obtained, wherein the base station information comprises a target operator identifier, a target cell number and a target base station number, and the scene identification request is used for requesting to identify whether the electronic equipment is positioned in a target scene associated with the target service or not;
for the target service with high scene recognition precision, detecting that the base station information is matched with an operator identifier, a cell number and a base station number in a base station fence snapshot, wherein the base station fence snapshot comprises a scene fence identifier;
Determining a fence area corresponding to the target scene fence snapshot;
Acquiring the current position of the electronic equipment;
if the current position is detected to be in the fence area, acquiring a WiFi list of the electronic equipment;
and if the WiFi list is detected to be matched with the WiFi characteristics, determining that the electronic equipment is located in the target scene.
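The four-stage check recited in claim 1 (base station match, fence area lookup, position check, WiFi confirmation) can be sketched as the following pipeline. This is an illustrative sketch only: the function name `recognize_high_precision`, the dictionary fields (`operator`, `cell`, `station`, `scene_fence`, `wifi_feature`) and the `in_area`/`wifi_match` callables are assumptions for readability, not structures defined by the patent.

```python
def recognize_high_precision(bs_info, fence_snapshots, current_pos,
                             wifi_list, in_area, wifi_match):
    """Illustrative sketch of the claim-1 flow at high recognition precision:
    (1) match the current base station against the base station fence snapshots,
    (2) take the fence area of the associated scene fence,
    (3) check the device's current position against that area,
    (4) confirm the scene with a WiFi-feature match."""
    for snap in fence_snapshots:
        # step 1: operator identifier, cell number and base station number must all match
        if (bs_info["operator"], bs_info["cell"], bs_info["station"]) != \
           (snap["operator"], snap["cell"], snap["station"]):
            continue
        # steps 2-3: coarse radio match confirmed, now test the position fix
        if not in_area(current_pos, snap["scene_fence"]):
            continue
        # step 4: final confirmation against the WiFi feature of the fence
        return wifi_match(wifi_list, snap["wifi_feature"])
    return False
```

In this sketch each stage acts as a progressively more expensive filter, so a positioning fix and a WiFi scan are only needed once the cheap base station comparison has already succeeded.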
2. The method of claim 1, wherein when the scene recognition accuracy comprises a low accuracy, the method further comprises:
and if the base station information is matched with the operator identification, the cell number and the base station number in one base station fence snapshot, determining that the electronic equipment is positioned in the target scene.
3. The method of claim 2, wherein the method further comprises:
If the base station information is not matched with the operator identification, the cell number and the base station number in any base station fence snapshot, determining a target area according to the target service, wherein the target area is an area corresponding to a destination, and the destination is a place for realizing the target service;
Acquiring the current position of the electronic equipment;
and if the current position of the electronic equipment is detected to be in the target area, determining that the electronic equipment is positioned in the target scene.
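The low-precision branch of claims 2 and 3 can be sketched in the same style (again, every name here is an illustrative assumption): a base station fence match alone is decisive, and only when no snapshot matches does the method fall back to a coarse check against the area around the destination of the target service.

```python
def recognize_low_precision(bs_info, fence_snapshots, current_pos,
                            destination_area, in_area):
    """Illustrative sketch of the low-precision branch (claims 2-3):
    a base station fence match alone decides the scene; only when no
    snapshot matches is the coarse destination-area check used."""
    for snap in fence_snapshots:
        if (bs_info["operator"], bs_info["cell"], bs_info["station"]) == \
           (snap["operator"], snap["cell"], snap["station"]):
            return True  # claim 2: a matching snapshot is sufficient
    # claim 3: no snapshot matched, fall back to the area around the destination
    return in_area(current_pos, destination_area)
```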
4. The method of claim 1, wherein when the scene recognition accuracy comprises a medium accuracy, the method further comprises:
If the base station information is matched with an operator identifier, a cell number and a base station number in one base station fence snapshot, determining a target scene fence snapshot according to a scene fence identifier in the base station fence snapshot matched with the base station information, wherein the target scene fence snapshot comprises longitude and latitude information of a scene fence center point and a scene fence radius;
determining a fence area according to longitude and latitude information of the center point of the scene fence and the radius of the scene fence;
Acquiring the current position of the electronic equipment;
And if the current position of the electronic equipment is detected to be in the fence area, determining that the electronic equipment is positioned in the target scene.
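Claim 4 derives the fence area from the latitude and longitude of the scene fence center point and the scene fence radius. Treating the fence as a circle on the Earth's surface, the position check reduces to a great-circle (haversine) distance comparison; the sketch below assumes a radius in metres and a spherical Earth model, neither of which is specified by the patent.

```python
import math

def in_scene_fence(lat, lon, center_lat, center_lon, radius_m):
    """Haversine distance from the fence center compared against the
    fence radius (illustrative sketch; spherical Earth assumed)."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlam = math.radians(center_lon - lon)
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    return distance <= radius_m
```

For fences of a few hundred metres the spherical model is accurate to well under a metre, which is finer than the positioning fix itself.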
5. The method of claim 1, wherein the WiFi list includes at least one piece of WiFi identification information and a WiFi intensity corresponding to each piece of WiFi identification information, the WiFi feature includes a WiFi identification information list and a target matching degree threshold corresponding to the WiFi identification information list, and the determining that the electronic device is located within the target scene if the WiFi list is detected to match the WiFi feature comprises:
determining a matching degree corresponding to the WiFi list;
and if the WiFi identification information in the WiFi list is detected to match the WiFi identification information list in the WiFi feature, and the matching degree is greater than or equal to the target matching degree threshold, determining that the electronic device is located within the target scene.
6. The method of claim 5, wherein the method further comprises:
If the WiFi list is not matched with the WiFi features, acquiring a WiFi list generated by a third-party application program;
and if the WiFi list generated by the third-party application program is detected to be matched with the WiFi characteristics, determining that the electronic equipment is positioned in the target scene.
7. The method of any one of claims 1 to 6, wherein the method further comprises:
And if the WiFi list of the electronic equipment is detected to be matched with the WiFi characteristics in any scene fence snapshot, determining that the electronic equipment is positioned in the target scene.
8. The method of claim 1, wherein the method further comprises:
Acquiring a motion state of a user carrying the electronic equipment;
Predicting a first movement speed of the user according to the movement state;
Determining a destination for realizing the target service according to the target service, and determining a first distance between the current position of the user and the destination;
And predicting the time of the next positioning according to the first movement speed and the first distance.
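Claim 8 predicts the time of the next positioning from the user's predicted movement speed and the distance to the destination. A plausible scheduling rule, purely an assumption since the patent gives no formula, is to wait some fraction of the estimated time of arrival, clamped between a minimum and maximum polling interval so that a fast-approaching user is re-located soon while a stationary user does not drain the battery.

```python
def next_positioning_delay(distance_m, speed_mps,
                           min_delay_s=30.0, max_delay_s=1800.0):
    """Illustrative next-fix scheduler for claim 8: wait half the estimated
    time to reach the destination, clamped to [min_delay_s, max_delay_s].
    The bounds and the 1/2 factor are assumptions, not patent values."""
    if speed_mps <= 0:
        return max_delay_s          # stationary user: poll at the slow rate
    eta_s = distance_m / speed_mps  # time to cover the remaining distance
    return max(min_delay_s, min(eta_s / 2.0, max_delay_s))
```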
9. The method of claim 1, wherein the obtaining the current location of the electronic device comprises:
and acquiring the current position of the electronic equipment through a global satellite positioning system.
10. The method of claim 8, wherein the base station fence snapshot includes latitude and longitude information of a base station fence center point, the method further comprising:
If the base station information of the base station currently accessed by the electronic equipment is detected to be matched with the target base station fence snapshot in the moving process of the electronic equipment, determining the position of the user according to the longitude and latitude information of the base station fence center point of the target base station fence snapshot;
determining a second distance according to the position of the user and the destination;
and determining a second movement speed of the user, and updating the time of the next positioning according to the second movement speed and the second distance.
11. The method of claim 1, wherein the method further comprises:
When the electronic equipment is located in the target scene, acquiring the latest position of the electronic equipment;
and determining whether the electronic equipment leaves the target scene or not according to the latest position.
12. An electronic device comprising one or more processors, one or more memories, the memories storing one or more programs that when executed by the processors cause the electronic device to perform the method of any of claims 1-11.
13. A chip comprising a processor for calling and running a computer program from a memory, causing an electronic device on which the chip is mounted to perform the method of any one of claims 1 to 11.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 11.
CN202411180585.XA 2022-10-26 2022-10-26 Service scene identification method, electronic equipment and storage medium Active CN118945815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411180585.XA CN118945815B (en) 2022-10-26 2022-10-26 Service scene identification method, electronic equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202411180585.XA CN118945815B (en) 2022-10-26 2022-10-26 Service scene identification method, electronic equipment and storage medium
CN202211320411.XA CN116709501B (en) 2022-10-26 2022-10-26 Business scenario identification method, electronic device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202211320411.XA Division CN116709501B (en) 2022-10-26 2022-10-26 Business scenario identification method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN118945815A CN118945815A (en) 2024-11-12
CN118945815B true CN118945815B (en) 2025-06-27

Family

ID=87836254

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211320411.XA Active CN116709501B (en) 2022-10-26 2022-10-26 Business scenario identification method, electronic device and storage medium
CN202411180585.XA Active CN118945815B (en) 2022-10-26 2022-10-26 Service scene identification method, electronic equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202211320411.XA Active CN116709501B (en) 2022-10-26 2022-10-26 Business scenario identification method, electronic device and storage medium

Country Status (1)

Country Link
CN (2) CN116709501B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579739A (en) * 2024-01-16 2024-02-20 成都准度科技有限公司 Electronic equipment silencing method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110267207A (en) * 2019-06-03 2019-09-20 中国建设银行股份有限公司 Intelligent position monitoring method, device and electronic equipment
CN111698648A (en) * 2020-04-27 2020-09-22 汉海信息技术(上海)有限公司 Network positioning method and device, electronic equipment and storage medium

Family Cites Families (20)

Publication number Priority date Publication date Assignee Title
US20160142625A1 (en) * 2014-11-13 2016-05-19 Lenovo (Singapore) Pte. Ltd. Method and system for determining image composition attribute adjustments
EP3665537A4 (en) * 2017-08-11 2021-04-28 Lenovo (Beijing) Limited GEOGRAPHIC BARRIER DATA GENERATION
WO2019036898A1 (en) * 2017-08-22 2019-02-28 深圳先进技术研究院 Wearable device-based remote monitoring system, server, and remote monitoring method
CN110365721A (en) * 2018-03-26 2019-10-22 华为技术有限公司 A method, terminal device and system for triggering services based on user scene perception
CN111328021B (en) * 2018-12-14 2021-08-27 中国移动通信集团河南有限公司 Superbusiness scene early warning method and system for Internet of things prevention and control
CN112995409B (en) * 2019-12-02 2022-03-11 荣耀终端有限公司 Display method of intelligent communication strategy effective scene, mobile terminal and computer readable storage medium
CN111144232A (en) * 2019-12-09 2020-05-12 国网智能科技股份有限公司 Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment
CN113133076B (en) * 2019-12-30 2023-04-25 荣耀终端有限公司 Communication method, related equipment and communication system
CN111274910B (en) * 2020-01-16 2024-01-30 腾讯科技(深圳)有限公司 Scene interaction method, device and electronic equipment
CN112153568B (en) * 2020-08-28 2022-08-30 汉海信息技术(上海)有限公司 Wi-Fi identification and binding method, device and equipment based on service scene
CN114463898B (en) * 2021-07-30 2023-08-22 荣耀终端有限公司 Express delivery pickup reminding method and device
CN113794801B (en) * 2021-08-09 2022-09-27 荣耀终端有限公司 Method and device for processing geo-fence
CN115002668B (en) * 2021-11-11 2023-04-07 荣耀终端有限公司 Method and electronic equipment for utilizing position fingerprint
CN113905438B (en) * 2021-12-10 2022-03-22 腾讯科技(深圳)有限公司 Scene identification generation method, positioning method and device and electronic equipment
CN115065996B (en) * 2021-12-14 2023-04-07 荣耀终端有限公司 Method, terminal and communication system for generating electronic fence
CN115002849B (en) * 2021-12-14 2023-02-24 荣耀终端有限公司 Method and terminal for network switching
CN115022459B (en) * 2021-12-24 2023-05-05 荣耀终端有限公司 Method and electronic device for travel reminder
CN114598560B (en) * 2022-03-17 2023-05-30 中国联合网络通信集团有限公司 Wireless network policy issuing method and device, electronic equipment and storage medium
CN114719842B (en) * 2022-06-08 2022-08-30 深圳市乐凡信息科技有限公司 Positioning method, system, equipment and storage medium based on electronic fence
CN114880065B (en) * 2022-07-08 2022-09-27 荣耀终端有限公司 Method, device, system and storage medium for controlling card display

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110267207A (en) * 2019-06-03 2019-09-20 中国建设银行股份有限公司 Intelligent position monitoring method, device and electronic equipment
CN111698648A (en) * 2020-04-27 2020-09-22 汉海信息技术(上海)有限公司 Network positioning method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116709501B (en) 2024-09-13
CN118945815A (en) 2024-11-12
CN116709501A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN113615217B (en) Method for determining that terminal equipment is located inside geo-fence and terminal equipment
US9842282B2 (en) Method and apparatus for classifying objects and clutter removal of some three-dimensional images of the objects in a presentation
CN104457767B (en) The method and apparatus for realizing location-based service
US20180301111A1 (en) Electronic device and method for displaying electronic map in electronic device
CN111182453A (en) Positioning method, positioning device, electronic equipment and storage medium
US20110143768A1 (en) Methods and apparatus related to region-specific mobile device and infrastructure detection, analysis and display
US12299918B2 (en) Methods and systems to facilitate passive relocalization using three-dimensional maps
CN116056003B (en) Geofence triggering method and related electronic equipment
CN114466102A (en) Method for displaying application interface, electronic equipment and traffic information display system
WO2011081872A1 (en) Methods and apparatus related to region-specific mobile and infrastructure detection, analysis and display
CN118945815B (en) Service scene identification method, electronic equipment and storage medium
CN117128985B (en) Point cloud map updating method and equipment
CN116668951A (en) A method, electronic device and storage medium for generating geofence
CN116668580B (en) Scene recognition method, electronic device and readable storage medium
CN116668576B (en) Method, device, cloud management platform, system and storage medium for acquiring data
CN114266385A (en) Method, system, terminal and storage medium for selecting addresses of multiple logistics and decentralization centers of automobile parts
CN117135573B (en) Cell location updating method, server and storage medium
CN116033344B (en) Determination method, equipment and storage medium of geofence
CN116027941B (en) Service recommendation method and electronic equipment
CN114879879B (en) Method for displaying health code, electronic equipment and storage medium
CN116723460B (en) Method for creating personal geo-fence and related equipment
CN117014803B (en) Positioning method, recommending method, readable medium and electronic device
CN115526221B (en) Positioning abnormality detection and processing method and related equipment
CN116233749B (en) Message pushing method, mobile terminal and computer readable storage medium
US20240353230A1 (en) Probe based routing directions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Terminal Co.,Ltd.

Address before: 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong

Applicant before: Honor Device Co.,Ltd.

Country or region before: China

GR01 Patent grant