Disclosure of Invention
In view of this, the present application provides a positioning method for reaching an interest area, an electronic device and a readable storage medium, which can accurately determine whether the interest area set by a user has been reached, thereby improving user experience.
Some embodiments of the present application provide a method for determining arrival at an interest area. The present application is described below in terms of several aspects, whose embodiments and advantages may be referenced against one another.
In a first aspect, the present application provides a positioning method for reaching an interest area, applied to an electronic device and including the following steps:
the electronic device receives an interest area to be reached input by a user and obtains geo-fence data corresponding to the interest area from a cloud, where the ground area covered by the geo-fence is larger than the ground area covered by the interest area; the electronic device obtains positioning satellite data in real time based on a satellite positioning system, so as to obtain the current geographic position of the electronic device; and the electronic device obtains first image data of its surrounding environment and determines, based on the first image data, the geo-fence data and the current geographic position, that the current geographic position of the electronic device is within the interest area.
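For illustration only, the following minimal sketch in Python outlines the flow summarized above; all helper names (fetch_geofence, get_gnss_fix, capture_image, image_matches_poi) are hypothetical placeholders rather than an actual interface of the present application, and error handling is omitted.

# Hypothetical sketch of the arrival-detection flow; helper names are placeholders.
def has_arrived_at_poi(poi_id, cloud, terminal):
    fence = cloud.fetch_geofence(poi_id)        # geo-fence covers more ground than the POI itself
    position = terminal.get_gnss_fix()          # current geographic position from positioning satellites
    if not fence.contains(position):            # coarse check: inside the geo-fence?
        return False
    image = terminal.capture_image()            # first image data of the surrounding environment
    return terminal.image_matches_poi(image, poi_id)   # fine check: does the scene match the POI?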
According to this positioning method for reaching an interest area, the electronic device determines whether it is located within the interest area to be reached based on the geo-fence data corresponding to the interest area, the first image data and its current geographic position.
In one embodiment of the first aspect of the present application, when the electronic device determines that its current geographic location is within the geo-fenced ground area, the electronic device acquires first image data of its surroundings.
In one embodiment of the first aspect of the present application, the electronic device determining, based on the first image data, the geo-fence data and its current geographic position, that its current geographic position is within the interest area includes:
the electronic device determines that its current geographic position is within the ground area corresponding to the geo-fence data, and, if it further determines that the first image data is second image data from within the interest area, determines that its current geographic position is within the interest area, where the second image data is real image data of the real scene within the interest area.
In one embodiment of the first aspect of the present application, the electronic device determining that the first image data is second image data within the interest area includes: the electronic device obtains an image visual recognition model from the cloud and determines, through computation by the image visual recognition model, that the first image data is second image data within the interest area; the image visual recognition model is obtained by training on a plurality of pieces of second image data.
In an embodiment of the first aspect of the present application, the electronic device determining, based on the first image data, the geo-fence data and the current geographic position, that the current geographic position is within the interest area further includes: when the first image data obtained by the electronic device is picture data of an above-ground parking lot within the interest area, the electronic device determines that its current geographic position is at the above-ground parking lot within the interest area.
In one embodiment of the first aspect of the present application, the electronic device determining, based on the first image data, the geo-fence data and the current geographic position, that the current geographic position is within the interest area further includes: when the first image data obtained by the electronic device is picture data of the toll gate of an above-ground parking lot within the interest area, the electronic device determines that its current geographic position is at the toll gate of the above-ground parking lot within the interest area.
In an embodiment of the first aspect of the present application, the electronic device obtaining positioning satellite data in real time based on a satellite positioning system, so as to obtain the current geographic position of the electronic device, further includes: the electronic device obtains, based on the positioning satellite data, the number of satellites that can currently be used for positioning.
In an embodiment of the first aspect of the present application, the method further includes: the electronic device obtains first image data of its surrounding environment and determines, based on the first image data, the geo-fence data, its current geographic position and the number of satellites usable for positioning, whether its current geographic position is indoors within the interest area; when the electronic device determines that its current geographic position is within the ground area corresponding to the geo-fence data, determines that the first image data is second image data within the interest area, and determines that the number of satellites usable for positioning is zero, the electronic device determines that its current geographic position is indoors within the interest area, where the indoor places include an underground parking lot.
In an embodiment of the first aspect of the present application, the method further includes: the electronic device distributes event information indicating that the electronic device has reached the interest area to application-side devices and sends the event information to the cloud, so that the cloud distributes the event information to business-side devices.
In a second aspect, the present application further provides an electronic device, including:
a communication module, configured to receive an interest area to be reached input by a user and obtain geo-fence data corresponding to the interest area from a cloud, where the ground area covered by the geo-fence is larger than the ground area covered by the interest area;
a processing module, configured to obtain positioning satellite data in real time based on a satellite positioning system, so as to obtain the current geographic position of the electronic device;
an acquisition module, configured to acquire first image data of the surrounding environment of the electronic device;
the processing module is further configured to determine, based on the first image data, the geo-fence data and the current geographic position, that the current geographic position of the electronic device is within the interest area.
In one embodiment of the second aspect of the present application, when the processing module determines that the current geographic position of the electronic device is within the ground area covered by the geo-fence, the acquisition module acquires the first image data of the surrounding environment of the electronic device.
In an embodiment of the second aspect of the present application, the processing module is specifically configured to: determine that the current geographic position of the electronic device is within the ground area corresponding to the geo-fence data, and, when it further determines that the first image data is second image data within the interest area, determine that the current geographic position of the electronic device is within the interest area, where the second image data is real image data of the real scene within the interest area.
In an embodiment of the second aspect of the present application, the communication module is further configured to obtain an image visual recognition model from the cloud, and the processing module determines, through computation by the image visual recognition model, that the first image data is second image data within the interest area; the image visual recognition model is obtained by training on a plurality of pieces of second image data.
In an embodiment of the second aspect of the present application, the processing module is further configured to: when the first image data acquired by the acquisition module is picture data of an above-ground parking lot within the interest area, determine that the current geographic position of the electronic device is at the above-ground parking lot within the interest area.
In one embodiment of the second aspect of the present application, the processing module is further configured to: when the first image data acquired by the acquisition module is picture data of the toll gate of an above-ground parking lot within the interest area, determine that the current geographic position of the electronic device is at the toll gate of the above-ground parking lot within the interest area.
In an embodiment of the second aspect of the present application, the processing module is further configured to: obtain, based on the positioning satellite data, the number of satellites that can currently be used for positioning.
In one embodiment of the second aspect of the present application, the acquisition module acquires first image data of the surrounding environment, and the processing module determines, based on the first image data, the geo-fence data, the current geographic position of the electronic device and the number of satellites usable for positioning, whether the current geographic position of the electronic device is indoors within the interest area; when the processing module determines that the current geographic position of the electronic device is within the ground area corresponding to the geo-fence data, determines that the first image data is second image data within the interest area, and determines that the number of satellites usable for positioning is zero, the processing module determines that the current geographic position of the electronic device is indoors within the interest area, where the indoor places include an underground parking lot.
In one embodiment of the second aspect of the present application, the electronic device is further configured to distribute event information indicating that it has reached the interest area to application-side devices and to send the event information to the cloud, so that the cloud distributes the event information to business-side devices.
In a third aspect, the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, causes the processor to execute the method of the first aspect.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The method for reaching the interest area of the present application is described below with reference to a specific scenario.
First, the prior art is described. Fig. 1 shows a scene diagram of communication between an electronic device and a cloud server. As shown in fig. 1, the scene includes an electronic device and a cloud server 110, where the electronic device is a terminal device 120 installed on a vehicle. A user may open a map and select a destination to go to, i.e., a Point of Interest (POI), such as a shopping mall or a supermarket, through an application of the terminal device 120. After the user selects the POI, the terminal device 120 may obtain geo-fence data from the cloud and use the ground area corresponding to the geo-fence data as the ground area corresponding to the POI. However, the geo-fence generally covers a larger ground area than the POI does. That is, in the prior art, when the terminal device 120 arrives within the ground area covered by the geo-fence, the terminal device 120 assumes that the destination has been reached and prompts the user that the selected POI has been reached. However, because the satellite navigation system has an error of about 10 meters, the terminal device may arrive within the ground area covered by the geo-fence without actually having reached the POI area.
Referring to fig. 2 in conjunction with fig. 1, fig. 2 shows a map schematic of a geo-fence and a POI. As shown in fig. 2, the map includes a POI area 210 and a geo-fence 220, and the geo-fence 220 covers an area larger than the area covered by the POI area 210. When the terminal device 120 arrives within the error area 230, it still considers that the POI area 210 has been reached even though the POI area 210 has not actually been reached, which degrades the user experience.
In order to solve the above problem, the present application provides a positioning method for reaching an interest area. Referring to fig. 3, fig. 3 shows an architecture diagram of communication between the electronic device and the cloud of the present application. The architecture includes a terminal device 310 and a cloud server 320, and the terminal device 310 includes a camera module 311 and a navigation module 312. The navigation module 312 runs a satellite navigation system, such as GPS. The camera module 311 may acquire images of the surroundings of the terminal device. When the user selects a POI area to be reached, the terminal device 310 acquires geo-fence data for the POI area from the cloud server 320. The terminal device 310 obtains satellite positioning data through its internal navigation system and locates its current position in real time. When the terminal device 310 reaches the area covered by the geo-fence, the terminal device 310 may acquire image data of its surroundings, obtain an image visual recognition model for the POI area from the cloud server 320, and calculate through the image visual recognition model whether the image data shows a scene within the POI area; alternatively, it may compare the image data with previously stored images, and if they match, determine that the set POI area has been reached. The following embodiments are described mainly with reference to determining whether the acquired image data shows the actual scene of the POI; for details, reference may be made to the description of the embodiments below.
The terminal device in this application may be an in-vehicle terminal installed in a car, or may be another electronic device that has a camera function or is connected to a device with a camera function, such as a mobile phone, a computer or a tablet used by the user, which is not limited herein.
According to the positioning method for reaching an interest area of the present application, image data of the real environment is combined with geo-fence positioning, so that whether the electronic device has reached the interest area can be determined more accurately, thereby improving user experience.
Taking an in-vehicle terminal as an example of the electronic device, and referring to fig. 4a, fig. 4a shows a flowchart of a positioning method for reaching an interest area. As shown in fig. 4a, the method includes the following steps:
s410, the vehicle terminal receives the POI address input by the user.
Specifically, the user may select a POI area to go to, such as a mall or a supermarket, through a preset application program, such as a navigation map application.
S420, the in-vehicle terminal acquires the geo-fence data from the cloud in real time.
The in-vehicle terminal can acquire the geo-fence data for the POI from the cloud server in real time so as to stay synchronized with the cloud server.
S430, the in-vehicle terminal acquires positioning satellite data based on a satellite positioning system, so as to obtain its current geographic position. The specific positioning manner of the satellite positioning system is the same as in the prior art and is not described in detail here.
S440, the in-vehicle terminal acquires image data of the surrounding environment.
In the present application, acquiring an image of the surrounding environment may be done in either of two ways.
In the first way, the in-vehicle terminal acquires images of the surrounding environment in real time and continuously calculates the matching degree.
In the second way, after the in-vehicle terminal reaches a designated position, it acquires data of the surrounding environment and calculates the matching degree.
In the embodiment of the present application, the in-vehicle terminal starts to acquire image data of the surrounding environment only when it has entered the geo-fence of the POI, which reduces the amount of computation and makes the result more accurate.
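As an illustrative sketch of the coarse check that gates image acquisition, the Python code below tests whether the current satellite fix lies inside a polygonal geo-fence using the standard ray-casting method; the fence vertices, the flat-coordinate approximation and the camera interface are assumptions for illustration, not details fixed by the present application.

def inside_geofence(lat, lon, fence):
    # Ray-casting point-in-polygon test; `fence` is a list of (lat, lon) vertices.
    inside = False
    n = len(fence)
    for i in range(n):
        lat1, lon1 = fence[i]
        lat2, lon2 = fence[(i + 1) % n]
        # Toggle on each polygon edge crossed by a ray cast from the test point.
        if (lon1 > lon) != (lon2 > lon):
            crossing_lat = (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1
            if lat < crossing_lat:
                inside = not inside
    return inside

# Capture an image only after the fix enters the fence, to keep the visual workload low.
if inside_geofence(current_lat, current_lon, poi_fence):
    frame = camera.capture()   # hypothetical camera interface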
S450, the in-vehicle terminal determines whether it has reached the interest area based on the image data, the geo-fence data and its current geographic position.
In an embodiment of the present application, the in-vehicle terminal may first determine, from its current geographic position and the geo-fence data, whether the vehicle has reached the ground area covered by the geo-fence corresponding to the POI. Further, when the vehicle has reached the ground area covered by the geo-fence corresponding to the POI, the in-vehicle terminal may acquire an image of the surrounding environment through a camera of the vehicle, input the image into an image visual recognition model, such as a MobileNet stored locally in advance, calculate the matching degree between the acquired image and the real scene within the POI area according to the model, and, if the image matches the real scene, determine that the in-vehicle terminal has reached the POI.
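A minimal sketch of such a check using PyTorch and a MobileNetV2 classifier is shown below; the checkpoint file name, the two-class layout (inside POI / not inside POI) and the decision threshold are assumptions for illustration, not details fixed by the present application.

import torch
from torchvision import models, transforms
from PIL import Image

# Assumed: a MobileNetV2 fine-tuned on second image data (real POI scenes), exported by the
# cloud as "poi_mobilenet.pth", with class 1 meaning "scene inside the POI".
model = models.mobilenet_v2(num_classes=2)
model.load_state_dict(torch.load("poi_mobilenet.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_matches_poi(image_path, threshold=0.5):
    # Returns True if the captured frame is classified as a scene inside the POI.
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_inside = torch.softmax(model(x), dim=1)[0, 1].item()
    return prob_inside >= threshold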
S460, the in-vehicle terminal distributes the event information indicating that it has reached the interest area to application-side devices and sends the event information to the cloud, so that the cloud distributes the event information to business-side devices.
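The event distribution in S460 could, for example, be a simple push of a structured event to the application side and to the cloud; the endpoints, field names and transport (HTTP via the requests library) in the sketch below are purely illustrative assumptions.

import time
import requests  # assumed to be available on the terminal

def publish_arrival_event(poi_id, vehicle_id, app_endpoint, cloud_endpoint):
    # Notify the application-side device directly and the cloud, which relays to business-side devices.
    event = {
        "type": "poi_arrival",
        "poi_id": poi_id,
        "vehicle_id": vehicle_id,
        "timestamp": int(time.time()),
    }
    requests.post(app_endpoint, json=event, timeout=5)
    requests.post(cloud_endpoint, json=event, timeout=5)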
The image visual recognition model in the present application can be obtained from the cloud server. The cloud server trains the image visual recognition model on a large amount of image data (second image data) captured in real POI scenes.
In an embodiment of the present application, a 1-to-1 or 1-to-N comparison may also be performed between an image acquired by the in-vehicle terminal and pictures stored in advance, so as to determine whether the acquired image matches an image of the real scene of the POI.
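As a sketch of such a 1-to-N comparison, the snippet below compares a feature embedding of the captured frame against embeddings of pre-stored reference pictures using cosine similarity; how the embeddings are produced (e.g., by the recognition network's feature extractor) and the similarity threshold are assumptions for illustration.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_reference_set(query_embedding, reference_embeddings, threshold=0.85):
    # 1:N comparison: the frame matches the POI if it is close enough to any stored reference scene.
    return any(cosine_similarity(query_embedding, ref) >= threshold
               for ref in reference_embeddings)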
According to the method, the POI area to be reached by the user can be more accurately positioned by combining the image data with the geo-fence positioning.
In one embodiment of the present application, when the vehicle arrives indoors within the interest area, the number of satellites that the in-vehicle terminal can currently use for positioning is obtained from the positioning satellite data. In this way, it can be determined more accurately not only that the user has reached the interest area, but also that the user has entered an indoor part of the interest area. The number of satellites can be determined from how the positioning satellite data is received; for example, when the vehicle is indoors and the satellite system cannot detect the vehicle, the number of satellites is reported as 0. When the number of satellites is 0, the vehicle is indoors. In this way, it can be further determined that the vehicle has not only reached the POI area but is also located indoors, so that the place reached can be positioned accurately. The indoor place in the present application may be, for example, an underground parking lot within the interest area, which is not limited herein.
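For illustration, if the receiver exposes standard NMEA sentences, the satellites-in-use count could be read from a GGA sentence as sketched below; a production in-vehicle positioning stack may instead report this count through its own interface, so this parsing is only an assumption.

def satellites_in_use(nmea_sentence):
    # Example GGA sentence:
    # "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47" -> 8 satellites.
    fields = nmea_sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    # Field 7 holds the number of satellites in use; empty means none (e.g., indoors).
    return int(fields[7]) if fields[7] else 0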
According to the method and device of the present application, the in-vehicle terminal can accurately determine that it has reached the interest area, so that corresponding services can be recommended to the user upon arrival. For example, when the user goes to a supermarket and the in-vehicle terminal accurately determines that the supermarket has been reached, a server of the supermarket receives the arrival notification and can recommend to the user a map of the supermarket, or discount information, new-product information, hot-selling products and the like, thereby improving the user experience.
The positioning method for reaching the interest area is further described below with reference to fig. 4a and fig. 4b.
Referring to fig. 4b, fig. 4b shows an architecture diagram of a system for reaching an interest area, which includes a cloud server 401, an in-vehicle terminal 402, a camera 403 and a GPS sensor 404. The cloud server 401 includes an information synchronization module and an event sending module, the in-vehicle terminal 402 includes a processing module and a distribution module, and the in-vehicle terminal, the camera 403 and the GPS sensor 404 are disposed on the vehicle. When the user sets the vehicle to go to a shopping mall, the cloud server 401 synchronizes the geo-fence of the shopping mall to the in-vehicle terminal through the information synchronization module according to the position of the shopping mall set by the user. The GPS sensor 404 acquires positioning satellite data in real time and sends it to the in-vehicle terminal 402; the processing module of the in-vehicle terminal 402 calculates the current position of the vehicle from the positioning satellite data and determines whether the vehicle has entered the geo-fence. After determining that the vehicle has entered the geo-fence, the processing module acquires image data from the camera 403 and calculates, through the image visual recognition model, whether the vehicle has reached the POI area. The processing module can also calculate the number of satellites from the positioning satellite data; when the number of satellites is 0, the vehicle has arrived indoors at the shopping mall. In this way, whether the user has arrived at the POI area can be positioned accurately. In addition, after the vehicle arrives at the shopping mall, the event distribution module of the in-vehicle terminal 402 sends the arrival information to the cloud server 401 and the application server 405, and the cloud server 401 sends the information to the corresponding business server 406, so that the merchant can provide corresponding applications or services to the user, thereby improving the user experience.
In the above embodiments of the present application, the positioning process is completed by a system composed of an in-vehicle terminal, a camera and a GPS sensor. In other embodiments of the present application, the camera, the GPS sensor, the processing module and the like may also be integrated into the same electronic device, for example a mobile phone or a computer, which is not limited herein.
The method for locating arrival at the interest area according to the embodiments of the present application is described below with reference to several scenarios within a POI.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating the in-vehicle terminal confirming arrival at the interest area. As shown in fig. 5, the destination selected by the user is the "good living supermarket"; the area covered by the supermarket is the interest area 510, and the surrounding area is the corresponding geo-fence 520. The part of the geo-fence that does not overlap the interest area is the error area 530, i.e., an area where the vehicle has arrived within the geo-fence but has not reached the area where the POI is located. As shown in fig. 5, the "underground parking lot", the "above-ground parking lot" and the "parking lot toll gate" all belong to the area covered by the POI. When the vehicle arrives at any of these three positions, it has reached the POI area.
Several scenarios for the vehicle to reach the POI zone are described below with reference to fig. 5.
Scenario 1: the vehicle is located at the underground parking lot of the supermarket.
The in-vehicle terminal adopts the following calculation. The detection result of whether the vehicle has entered the supermarket geo-fence is recorded as L: when L is true, the vehicle has entered the supermarket geo-fence; when L is false, it has not. The detection result of the number of GPS positioning satellites is recorded as G: when G is true, the number of positioning satellites is 0; when G is false, the number of positioning satellites is not 0. The result of the deep-learning-based visual indoor detection is recorded as C: when C is true, the scene is recognized as indoor; when C is false, the scene is recognized as outdoor. The result of the fusion algorithm is M = L AND G AND C. That is, when L, G and C are all true, the fusion algorithm outputs true; if any one of them is false, the fusion algorithm outputs false. Because the visual detection is computationally expensive, the image data acquired by the camera is read and the deep-learning-based visual detection C is performed once only when both the geo-fence detection L and the GPS satellite-number detection G are true, so that the overall computation of the fusion algorithm is reduced by reducing the number of times C is computed.
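A direct transcription of this fusion rule into Python is sketched below; the indoor detector is passed as a callable so that, as described above, the expensive visual check C runs only when L and G already hold. The function names are illustrative placeholders.

def underground_parking_detected(inside_fence, satellite_count, looks_indoor):
    # Scenario 1 fusion: M = L AND G AND C.
    # L: vehicle is inside the supermarket geo-fence.
    # G: number of usable positioning satellites is 0.
    # C: deep-learning model recognizes the scene as indoor.
    L = inside_fence
    G = (satellite_count == 0)
    if not (L and G):
        return False            # skip the expensive visual check when L or G is false
    C = looks_indoor()          # run the visual indoor detection only when needed
    return C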
Scenario 2: the vehicle is located at the above-ground parking lot of the supermarket.
The in-vehicle terminal records the geo-fence detection result as L: when L is true, the vehicle has entered the geo-fence; when L is false, it has not. The deep-learning-based visual detection result for the above-ground parking lot or the parking lot toll gate is recorded as C: when C is true, the scene is visually recognized as the above-ground parking lot or the parking lot toll gate; when C is false, it is not. The fusion algorithm result is M = L AND C. That is, when both L and C are true, the fusion algorithm outputs true; if either of them is false, the fusion algorithm outputs false. Because the visual detection is computationally expensive, the camera data is read and the deep-learning-based visual detection C is performed once only when the geo-fence detection L is true, so that the overall computation of the fusion algorithm is reduced by reducing the number of times C is computed.
Scenario 3: the vehicle arrives at the supermarket.
The two scenarios above are evaluated separately, and when either of them is true, the in-vehicle terminal considers that it has arrived at the supermarket. For the determination process of the two scenarios, reference may be made to the descriptions above, which are not repeated here.
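Continuing the sketch from Scenario 1, the combined decision could look as follows; underground_parking_detected is the illustrative function above, and looks_like_lot_or_gate stands for the Scenario 2 visual detector, both hypothetical names.

def arrived_at_supermarket(inside_fence, satellite_count, looks_indoor, looks_like_lot_or_gate):
    # Scenario 3: arrival is declared when either scenario holds.
    scene1 = underground_parking_detected(inside_fence, satellite_count, looks_indoor)
    scene2 = inside_fence and looks_like_lot_or_gate()   # Scenario 2 fusion: M = L AND C
    return scene1 or scene2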
According to the positioning method for reaching an interest area of the present application, images of the real scene are combined with satellite positioning and the geo-fencing technique, and a fusion algorithm is used so that the advantages of the three approaches complement one another, which yields better detection and positioning accuracy while reducing the false alarm rate. The accuracy of the method can reach 98%, far higher than the 81% accuracy of the traditional geo-fencing technique.
Based on the above description, an electronic device of the present application, which is configured to execute the above method embodiments, is described in detail below. Fig. 6 shows a schematic structural diagram of the electronic device. As shown in fig. 6, the electronic device includes:
a communication module 610, configured to receive an interest area to be reached input by a user and obtain geo-fence data corresponding to the interest area from a cloud, where the ground area covered by the geo-fence is larger than the ground area covered by the interest area;
a processing module 620, configured to obtain positioning satellite data in real time based on a satellite positioning system, so as to obtain the current geographic position of the electronic device;
an acquisition module 630, configured to acquire first image data of the surrounding environment of the electronic device;
the processing module 620 is further configured to determine, based on the first image data, the geo-fence data and the current geographic position, that the current geographic position of the electronic device is within the interest area.
According to an embodiment of the present application, when the processing module 620 determines that the current geographic position of the electronic device is within the ground area covered by the geo-fence, the acquisition module 630 acquires first image data of the surrounding environment of the electronic device.
According to an embodiment of the present application, the processing module 620 is specifically configured to: determine that the current geographic position of the electronic device is within the ground area corresponding to the geo-fence data, and, when the processing module 620 further determines that the first image data is second image data within the interest area, determine that the current geographic position of the electronic device is within the interest area, where the second image data is real image data of the real scene within the interest area.
According to an embodiment of the present application, the communication module 610 is further configured to obtain an image visual recognition model from the cloud, and the processing module 620 determines, through computation by the image visual recognition model, that the first image data is second image data within the interest area; the image visual recognition model is obtained by training on a plurality of pieces of second image data.
According to an embodiment of the present application, the processing module 620 is further configured to: when the first image data acquired by the acquisition module 630 is picture data of an above-ground parking lot within the interest area, determine that the current geographic position of the electronic device is at the above-ground parking lot within the interest area.
According to an embodiment of the present application, the processing module 620 is further configured to: when the first image data acquired by the acquisition module 630 is picture data of the toll gate of an above-ground parking lot within the interest area, determine that the current geographic position of the electronic device is at the toll gate of the above-ground parking lot within the interest area.
According to an embodiment of the present application, the processing module 620 is further configured to: obtain, based on the positioning satellite data, the number of satellites that can currently be used for positioning.
According to an embodiment of the present application, the acquisition module 630 acquires first image data of the surrounding environment, and the processing module 620 determines, based on the first image data, the geo-fence data, the current geographic position of the electronic device and the number of satellites usable for positioning, whether the current geographic position of the electronic device is indoors within the interest area. When the processing module 620 determines that the current geographic position of the electronic device is within the ground area corresponding to the geo-fence data, determines that the first image data is second image data within the interest area, and determines that the number of satellites usable for positioning is zero, the processing module 620 determines that the current geographic position of the electronic device is indoors within the interest area, where the indoor places include an underground parking lot. By adopting the fusion algorithm, better detection accuracy is obtained while the false alarm rate is reduced.
According to an embodiment of the present application, the electronic device is further configured to distribute event information indicating that it has reached the interest area to application-side devices and to send the event information to the cloud, so that the cloud distributes the event information to business-side devices.
The functions of the modules of the electronic device of the present application have been described in detail in the foregoing embodiments; reference may be made specifically to the positioning method of the foregoing embodiments, and details are not repeated here.
According to the electronic device provided by the embodiments of the present application, image data and the geo-fencing technique are combined, so that whether a user has arrived at a specified place can be determined more accurately. In addition, the check on the number of satellites is also combined, so as to further determine whether the user is indoors within the POI. With this more accurate positioning, when the user arrives within the POI service area, the server of the merchant in the POI can push marketing activity information and the like to the user, which is convenient for the user and at the same time directs customer traffic to the merchant more precisely.
The present application further provides an electronic device, including:
a memory, configured to store instructions for execution by one or more processors of the device; and
a processor, configured to perform the method of the embodiment shown in fig. 4a above.
The present application also provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, causes the processor to perform the method of the above-mentioned embodiment shown in fig. 4a.
Referring now to FIG. 7, shown is a block diagram of an apparatus 1200 in accordance with one embodiment of the present application. The device 1200 may include one or more processors 1201 coupled to a controller hub 1203. For at least one embodiment, the controller hub 1203 communicates with the processor 1201 via a multi-drop bus such as a Front Side Bus (FSB), a point-to-point interface such as a QuickPath Interconnect (QPI), or a similar connection 1206. The processor 1201 executes instructions that control data processing operations of a general type. In one embodiment, the controller hub 1203 includes, but is not limited to, a Graphics Memory Controller Hub (GMCH) (not shown) and an Input/Output Hub (IOH) (which may be on separate chips) (not shown), where the GMCH includes memory and graphics controllers and is coupled to the IOH.
The device 1200 may also include a coprocessor 1202 and a memory 1204 coupled to the controller hub 1203. Alternatively, one or both of the memory and the GMCH may be integrated within the processor (as described herein), with the memory 1204 and the coprocessor 1202 directly coupled to the processor 1201, and the controller hub 1203 in a single chip with the IOH. The memory 1204 may be, for example, a Dynamic Random Access Memory (DRAM), a Phase Change Memory (PCM), or a combination of the two. In one embodiment, the coprocessor 1202 is a special-purpose processor, such as, for example, a high-throughput Many Integrated Core (MIC) processor, a network or communication processor, a compression engine, a graphics processor, a General Purpose Graphics Processing Unit (GPGPU), an embedded processor, or the like. The optional nature of the coprocessor 1202 is represented in FIG. 7 by dashed lines.
The memory 1204, as a computer-readable storage medium, may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. For example, the memory 1204 may include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives.
In one embodiment, the device 1200 may further include a network interface controller (NIC) 1206. The network interface 1206 may include a transceiver to provide a radio interface for the device 1200 to communicate with any other suitable devices (e.g., a front-end module, an antenna, etc.). In various embodiments, the network interface 1206 may be integrated with other components of the device 1200. The network interface 1206 may implement the functions of the communication module in the above-described embodiments.
The device 1200 may further include an input/output (I/O) device 1205. The I/O device 1205 may include: a user interface designed to enable a user to interact with the device 1200; a peripheral component interface designed to enable peripheral components to interact with the device 1200; and/or sensors configured to determine environmental conditions and/or location information associated with the device 1200.
It is noted that fig. 7 is merely exemplary. That is, although fig. 7 shows that the device 1200 includes multiple components, such as the processor 1201, the controller hub 1203 and the memory 1204, in practical applications a device using the methods of the present application may include only some of the components of the device 1200, for example only the processor 1201 and the NIC 1206. The optional nature of components in fig. 7 is shown by dashed lines.
According to some embodiments of the present application, the memory 1204, serving as a computer-readable storage medium, stores instructions which, when executed on a computer, enable the device 1200 to perform the positioning method according to the above embodiments; reference may be made specifically to the method shown in fig. 4a in the above embodiments, which is not described again here.
Referring now to fig. 8, shown is a block diagram of an SoC (System on Chip) 1300 in accordance with an embodiment of the present application. In fig. 8, like parts have the same reference numerals. In addition, the dashed boxes are optional features of more advanced SoCs. In fig. 8, the SoC 1300 includes: an interconnect unit 1350 coupled to the application processor 1310; a system agent unit 1380; a bus controller unit 1390; an integrated memory controller unit 1340; a set of one or more coprocessors 1320, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a Static Random Access Memory (SRAM) unit 1330; and a Direct Memory Access (DMA) unit 1360. In one embodiment, the coprocessor 1320 includes a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.
The Static Random Access Memory (SRAM) unit 1330 may include one or more computer-readable media for storing data and/or instructions. A computer-readable storage medium may store instructions, in particular temporary and permanent copies of the instructions. The instructions may include instructions that, when executed by at least one unit in the processor, cause the SoC 1300 to execute the positioning method according to the foregoing embodiments; reference may be made specifically to the method shown in fig. 4a in the foregoing embodiments, which is not described again here.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in this application are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the apparatuses in the present application, each unit/module is a logical unit/module, and physically, one logical unit/module may be one physical unit/module, or may be a part of one physical unit/module, and may also be implemented by a combination of multiple physical units/modules, where the physical implementation manner of the logical unit/module itself is not the most important, and the combination of the functions implemented by the logical unit/module is the key to solve the technical problem provided by the present application. Furthermore, in order to highlight the innovative part of the present application, the above-mentioned device embodiments of the present application do not introduce units/modules which are not so closely related to solve the technical problems presented in the present application, which does not indicate that no other units/modules exist in the above-mentioned device embodiments.
It is noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.