Disclosure of Invention
The invention overcomes the defects of the prior art and provides a data transmission improvement method for a self-learning intelligent scale, aiming to solve the problems that, in the aspect of data transmission, the self-learning intelligent scale in the prior art has a loose data packet structure, excessive redundant information, a lack of effective error control and correction mechanisms, insufficient application of multiplexing communication technology, and a lack of effective performance monitoring and optimization means.
In order to achieve the above purpose, the technical solution adopted by the invention is a data transmission improvement method for a self-learning intelligent scale, comprising the following steps:
starting self-learning by a plurality of main scales, and recording and storing self-learning data into a local memory of the corresponding main scales;
the self-learning main scale compresses the self-learning data packet;
The self-learning main scales send the compressed self-learning data packets to a communication channel, and the self-learning data are transmitted among the self-learning main scales to aggregate the self-learning data;
The self-learning main scale sends the aggregated self-learning data packet to the central system, triggering the central system to distribute the aggregated self-learning data packet to the main scales which did not perform self-learning and to all the auxiliary scales;
Wherein all the main scales, the auxiliary scales and the central system apply error detection and error correction mechanisms, and the communication channel is planned by time division multiplexing and frequency division multiplexing methods.
In a preferred embodiment of the present invention, the steps of collecting and storing the self-learning data include:
The main scale starts self-learning, recognizes commodities and measures their weights through built-in sensors and algorithms, and records its self-learning data into its local memory;
the central system detects that the main scale has started self-learning and sends a data collection instruction to a plurality of auxiliary scales; the auxiliary scales communicate with the corresponding main scales and upload their commodity-related data to the corresponding connected main scales;
and after receiving the commodity-related data uploaded by the auxiliary scales, the main scale integrates it with its own self-learning data and stores the result in its local memory.
In a preferred embodiment of the present invention, the steps of the error detection mechanism include:
Generating a first check code for the compressed self-learning data packet by using a cyclic redundancy check algorithm at a port for transmitting the self-learning data, and adding the first check code to the end of the self-learning data packet;
at the port for receiving the self-learning data, recalculating a second check code of the received self-learning data packet by using a cyclic redundancy check algorithm, and comparing the second check code with the first check code;
when the second check code does not match the first check code, an error has occurred during transmission of the self-learning data packet.
In a preferred embodiment of the present invention, when the error detection mechanism detects that an error occurs in the self-learning data packet during transmission, the error correction mechanism is triggered;
at the port for transmitting the self-learning data, adding redundant information to the self-learning data packet by using the Reed-Solomon code;
When the second check code does not match the first check code, a Reed-Solomon decoder is used to correct data transmission errors in the self-learning data packet at the port receiving the self-learning data.
In a preferred embodiment of the present invention, the codeword length of the Reed-Solomon code is set to n and the number of check symbols to 2t, where n = k + 2t, k is the number of spliced self-learning data symbols, 2t is the number of errors that the Reed-Solomon code can detect, and t is the number of errors that can be corrected;
Over the finite field GF(2^m), a generator polynomial g(x) = (x - α)(x - α^2)···(x - α^{2t}) is constructed, where α is a primitive element of the finite field and x is a formal variable used to express the polynomial terms; the spliced self-learning data set corresponds to a data polynomial D(x), the check symbol polynomial is R(x) = (x^{2t}·D(x)) mod g(x), the corresponding check symbols of the self-learning data are calculated from R(x), and the check symbols are spliced onto the self-learning data set to form the encoded self-learning data codeword.
In a preferred embodiment of the present invention, the step of planning the communication channel includes:
Determining the total bandwidth of the communication channel and dividing the total bandwidth into a plurality of sub-bands, wherein each sub-band is used for transmitting one signal to realize frequency division multiplexing, the number of sub-bands is larger than the number of communication channels, and each sub-band is allocated to at most one communication channel;
And determining a complete communication period, dividing the communication period within each sub-band into a plurality of time slots, and allocating each time slot to a main scale or an auxiliary scale for data transmission to realize time division multiplexing.
In a preferred embodiment of the present invention, a time slot interval is set between adjacent time slots, and the time slot interval is between 20 and 60 microseconds.
In a preferred embodiment of the present invention, the self-learning data transmission step between the self-learning main scales comprises:
each self-learning main scale loads the compressed self-learning data packet onto the corresponding sub-band within its allocated time slot and sends it over the communication channel;
the self-learning main scale performs error detection when receiving a self-learning data packet from a communication channel;
When the data are detected to be error-free, the self-learning main scale collates and verifies the received self-learning data packets together with its own self-learning data packet to form an aggregated self-learning data packet;
The aggregated self-learning data packet is updated into the local memory of the self-learning main scale.
In a preferred embodiment of the present invention, the self-learning data packet may be parsed, split and transmitted, and the specific steps include:
For each commodity ID, a split self-learning data packet is formed which comprises all the pictures, weights and time stamps corresponding to that commodity ID, sorted in order according to the time stamps;
header information is added to each split self-learning data packet, wherein the header information comprises a serial number, the commodity ID and the number of pictures;
in different time slots, the self-learning main scale sends the split data packets over the communication channel;
when the other self-learning main scales receive the split self-learning data packets, they sort them according to the attached serial numbers until all the related serial numbers have been received, and then perform error detection;
after error detection, the header information is parsed, and the data in the local storage are checked and updated according to the commodity ID and the number of pictures;
when data for the same commodity ID already exist in the local storage, the number of pictures is the same and the picture content is consistent, the data for that commodity ID do not need to be updated;
otherwise, only the missing self-learning data are updated;
and the updated self-learning data are aggregated into an aggregated self-learning data packet.
In a preferred embodiment of the present invention, the step of the central system updating the aggregated self-learning data packet comprises:
the self-learning main scale sends the aggregated self-learning data packet to the central system, and the central system performs error detection;
when the aggregated self-learning data packet has been transmitted correctly, the central system distributes it to the main scales which did not perform self-learning and to all the auxiliary scales;
and after the main scales which did not perform self-learning and all the auxiliary scales receive the self-learning data from the central system, they update the self-learning data in their local memories.
The invention overcomes the defects existing in the background art and has the following beneficial effects:
(1) The invention collects and stores self-learning data through the main scales and the auxiliary scales, compresses the self-learning data packets to reduce the amount of transmitted data, applies error detection and correction mechanisms to all the main scales, auxiliary scales and the central system to ensure the integrity and accuracy of the data, and optimizes the communication channel by combining time division multiplexing and frequency division multiplexing, so that a plurality of intelligent scales can perform efficient data transmission with the central system simultaneously. Finally, the main scale sends the aggregated self-learning data packet to the central system, and the central system distributes the data to all the main scales and auxiliary scales, realizing real-time updating of the data.
(2) The invention directly optimizes the data transmission process by combining time division multiplexing with frequency division multiplexing and by transmitting the self-learning data packets in split form, so that network resources can be allocated more flexibly. Different small packets can be transmitted in different time slots, making full use of the space-time resources of the network. The self-learning data packets are split according to commodity IDs before being sent, and each split data packet contains the data related to one specific commodity ID. This reduces the amount of data in a single transmission, avoids transmission delays or failures caused by oversized data packets, and further improves data transmission efficiency; a plurality of small packets can be processed simultaneously without waiting for the whole large data packet to be completely received before processing, which reduces the computation required for data merging, and only the missing data need to be updated instead of parsing all the data, thereby avoiding unnecessary repeated data transmission and storage.
(3) The invention combines the time division multiplexing and frequency division multiplexing technologies: the total bandwidth of the communication channel is divided into a plurality of sub-bands, and each sub-band independently transmits one signal, avoiding mutual interference between signals; within each sub-band, the communication period is divided into a plurality of time slots by time division multiplexing, and the time slots are allocated to different main scales and auxiliary scales for data transmission, so that a plurality of intelligent scales can communicate simultaneously. The combination of the two technologies makes maximum use of the communication resources, improves the utilization of the communication channel, and makes data transmission more efficient and orderly.
(4) The invention transmits the compressed self-learning data packets among the main scales and distributes the data through the central system. Each main scale transmits its compressed self-learning data packet to the other main scales within its allocated time slot, and error detection and data aggregation are carried out among the main scales; through this data exchange and verification, errors and inconsistencies in the data can be corrected before the data are synchronized to the auxiliary scales. The aggregated self-learning data packet is then sent to the central system, which, after error detection, distributes the data to all the main scales and auxiliary scales, realizing real-time updating and synchronization of the data, reducing the transmission of redundant data, and improving data transmission efficiency and accuracy. The unified distribution by the central system ensures the consistency and timeliness of the data on all intelligent scales and provides accurate and timely data support for store management.
(5) The invention integrates error detection and error correction mechanisms: a cyclic redundancy check algorithm is used to generate check codes and to perform error detection on the transmitted data packets, so that erroneous data packets are found and marked in time; redundant information is added to the data packets using the Reed-Solomon code, and when an error is detected, a Reed-Solomon decoder corrects it at the receiving end without retransmitting the data. This improves the reliability of data transmission; the double guarantee of the error detection and correction mechanisms ensures high integrity and accuracy during data transmission and avoids systematic errors or erroneous decisions caused by data transmission errors.
Detailed Description
The embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort fall within the scope of the invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Summary of the application:
the planning method of the communication channel is divided into two types, namely time division multiplexing and frequency division multiplexing;
Time division multiplexing is a technique in which time is divided into a plurality of time slots, and each time slot is assigned to a different signal source for transmission. In time division multiplexing, the time domain is divided into periodically repeating small segments; each segment has a fixed length and is used to transmit one sub-channel, which improves channel utilization and allows a plurality of digital signals to be transmitted sequentially in time over one communication line. However, if the data transmission rates of all signal sources are close to the maximum transmission rate of the channel, the efficiency of time division multiplexing may be reduced;
Frequency division multiplexing is a technique in which the total bandwidth of a communication channel is divided into a plurality of sub-bands, each sub-band transmitting one signal; the center frequencies of the sub-bands do not overlap, and an isolation bandwidth of a certain width is reserved between sub-channels to prevent mutual interference between signals. The sub-channels operate in parallel, so transmission delay does not need to be considered when signals are transmitted, but more spectrum resources are occupied, the equipment cost increases with the number of input channels, and the equipment is not easy to miniaturize, which is unfavorable for intelligent scales.
In the data transmission improvement method for the self-learning intelligent scale, combining the time division multiplexing and frequency division multiplexing technologies enables simultaneous data transmission among a plurality of main scales and efficient data transmission with the central system, ensuring the timeliness and reliability of data transmission.
As shown in figs. 1 and 2, a data transmission improvement method for a self-learning intelligent scale includes the following steps:
starting self-learning by a plurality of main scales, and recording and storing self-learning data into a local memory of the corresponding main scales;
the self-learning main scale compresses the self-learning data packet;
The self-learning main scales send the compressed self-learning data packets to a communication channel, and the self-learning data are transmitted among the self-learning main scales to aggregate the self-learning data;
The self-learning main scale sends the aggregated self-learning data packet to the central system, triggering the central system to distribute the aggregated self-learning data packet to the main scales which did not perform self-learning and to all the auxiliary scales;
Wherein all the main scales, the auxiliary scales and the central system apply error detection and error correction mechanisms, and the communication channel is planned by time division multiplexing and frequency division multiplexing methods.
The main scale is an intelligent scale that has higher-level functions or plays a leading role in the network; the auxiliary scale is a subordinate scale in the network that does not have all the higher-level functions of the main scale and is used for executing basic commodity identification and weighing tasks.
The main scales, the auxiliary scales and the central system together form a distributed weighing system, in which the main scales undertake more coordination and management responsibilities and the auxiliary scales are mainly responsible for executing basic tasks; this distributed weighing system can effectively improve the efficiency of data transmission and the overall performance of the system.
Specifically, the self-learning data comprise commodity identification results, weights, time stamps and commodity pictures, and each commodity is assigned a commodity ID;
Specifically, the main scale identifies commodities and measures their weights through built-in sensors and algorithms. The built-in sensors comprise a camera and a weight sensor and are used for identifying commodities in combination with an algorithm; specifically, the original image captured by the camera is preprocessed, a convolutional neural network deep learning model is used to extract commodity features from the preprocessed image so that the commodity features represent the key information in the commodity image, and a target detection algorithm is used to identify the commodities in the image.
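By way of illustration only, the following Python sketch shows one possible form of such a recognition pipeline using an off-the-shelf detection model from the torchvision library; the specific model, the score threshold and the use of torchvision are assumptions made for this example rather than requirements of the invention, and in practice the model would be trained or fine-tuned on the store's commodity classes so that the predicted labels map to commodity IDs.

```python
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Assumed off-the-shelf detector: a CNN backbone extracts image features and a
# detection head localizes and classifies objects (commodities) in the image.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def recognize_commodities(image_path: str, score_threshold: float = 0.8):
    """Preprocess the camera image and return detected object labels and boxes."""
    image = Image.open(image_path).convert("RGB")   # preprocessing: decode + RGB
    tensor = F.to_tensor(image)                     # scale pixel values to [0, 1]
    with torch.no_grad():
        prediction = model([tensor])[0]
    detections = []
    for box, label, score in zip(prediction["boxes"],
                                 prediction["labels"],
                                 prediction["scores"]):
        if score >= score_threshold:
            detections.append({"label": int(label),
                               "box": box.tolist(),
                               "score": float(score)})
    return detections
```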
Specifically, the steps of collecting and storing the self-learning data include:
The main scale starts self-learning, recognizes commodities and measures their weights through the built-in sensors and algorithms, and records its self-learning data into its local memory;
the central system detects that the main scale has started self-learning and sends a data collection instruction to a plurality of auxiliary scales; the auxiliary scales communicate with the corresponding main scales and upload their commodity-related data to the corresponding connected main scales;
and after receiving the commodity-related data uploaded by the auxiliary scales, the main scale integrates it with its own self-learning data and stores the result in its local memory.
Specifically, the integrated self-learning data are arranged into a self-learning data packet, the format of which is commodity pictures, commodity ID, weight and time stamp; only key data elements are retained in the self-learning data packet so as to reduce redundant information;
Specifically, an efficient compression algorithm such as ZIP or GZIP is used to compress the self-learning data packets so as to reduce the amount of data transmitted, relieve network congestion, and lower bandwidth requirements.
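By way of illustration only, the following Python sketch shows how a self-learning data packet in the above format could be assembled and GZIP-compressed before transmission; the field names and the JSON serialization are assumptions made for this example and are not prescribed by the invention.

```python
import base64
import gzip
import json

def build_packet(commodity_id, pictures, weight, timestamp):
    """Assemble a self-learning data packet keeping only the key fields
    (commodity pictures, commodity ID, weight, time stamp)."""
    return {
        "pictures": [base64.b64encode(p).decode("ascii") for p in pictures],
        "commodity_id": commodity_id,
        "weight": weight,        # grams, illustrative unit
        "timestamp": timestamp,  # e.g. Unix epoch seconds
    }

def compress_packet(packet):
    """Serialize the packet and GZIP-compress it to reduce the transmitted data."""
    raw = json.dumps(packet).encode("utf-8")
    return gzip.compress(raw)

def decompress_packet(blob):
    """Inverse operation performed at the receiving end."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))

# Example: one commodity record with a dummy picture payload
packet = build_packet("SKU-001", [b"\x89PNG..."], 512.3, 1717000000)
blob = compress_packet(packet)
assert decompress_packet(blob)["commodity_id"] == "SKU-001"
```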
Error detection and error correction are important techniques in communication systems for ensuring that data remain intact and accurate during transmission. During data transmission, errors or losses may occur due to problems such as signal interference, equipment failure or transmission medium issues; in order to detect and correct these errors, the communication system requires error control and correction mechanisms.
Error detection refers to checking, in some way, whether the received self-learning data contain errors; error correction refers to automatically correcting the errors after they are detected, without requesting retransmission of the data.
As shown in fig. 3, the steps of the error detection mechanism specifically include:
Generating a first check code for the compressed self-learning data packet by using a cyclic redundancy check algorithm at a port for transmitting the self-learning data, and adding the first check code to the end of the self-learning data packet;
at the port for receiving the self-learning data, recalculating a second check code of the received self-learning data packet by using a cyclic redundancy check algorithm, and comparing the second check code with the first check code;
when the second check code does not match the first check code, an error has occurred during transmission of the self-learning data packet.
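As a minimal illustrative sketch, assuming CRC-32 as the cyclic redundancy check algorithm (the invention does not fix a particular CRC polynomial), the sending port appends the first check code to the end of the compressed packet and the receiving port recalculates a second check code and compares the two:

```python
import struct
import zlib

def attach_crc(compressed_packet: bytes) -> bytes:
    """Sender side: compute the first check code over the compressed
    self-learning data packet and append it to the end of the packet."""
    first_check = zlib.crc32(compressed_packet) & 0xFFFFFFFF
    return compressed_packet + struct.pack(">I", first_check)

def verify_crc(frame: bytes) -> bool:
    """Receiver side: recompute the second check code over the payload and
    compare it with the first check code carried at the end of the frame."""
    payload, first_check = frame[:-4], struct.unpack(">I", frame[-4:])[0]
    second_check = zlib.crc32(payload) & 0xFFFFFFFF
    return second_check == first_check  # False => error occurred in transmission

frame = attach_crc(b"compressed self-learning data")
assert verify_crc(frame)
corrupted = frame[:5] + b"\x00" + frame[6:]  # corrupt a single byte
assert not verify_crc(corrupted)
```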
As shown in fig. 4, specifically, when the error detection mechanism detects that an error occurs in the self-learning data packet during transmission, the error correction mechanism is triggered;
at the port for transmitting the self-learning data, adding redundant information to the self-learning data packet by using the Reed-Solomon code;
When the second check code does not match the first check code, a Reed-Solomon decoder is used to correct data transmission errors in the self-learning data packet at the port receiving the self-learning data.
Reed-Solomon coding is a technique widely used in digital communication and storage systems to enhance the error resilience of data by adding redundant data.
The codeword length n and the number of check symbols 2t of the Reed-Solomon code are determined, where n = k + 2t, k is the number of spliced self-learning data symbols, 2t is the number of errors that the Reed-Solomon code can detect, and t is the number of errors that can be corrected;
Over the finite field GF(2^m), a generator polynomial g(x) = (x - α)(x - α^2)···(x - α^{2t}) is constructed, where α is a primitive element of the finite field and x is a formal variable used to express the polynomial terms; the spliced self-learning data set corresponds to a data polynomial D(x), the check symbol polynomial is R(x) = (x^{2t}·D(x)) mod g(x), the corresponding check symbols of the self-learning data are calculated from R(x), and the check symbols are spliced onto the self-learning data set to form the encoded self-learning data codeword.
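By way of illustration only, the following Python sketch checks the relationship n = k + 2t for the commonly used parameter choice RS(255, 223) over GF(2^8) and constructs the generator polynomial g(x); the field size, the primitive polynomial 0x11D and α = 2 are conventional assumptions made for this example, and a practical system would normally rely on an existing Reed-Solomon library for encoding and decoding.

```python
PRIM = 0x11D  # assumed primitive polynomial of GF(2^8): x^8 + x^4 + x^3 + x^2 + 1

def gf_mult(a: int, b: int) -> int:
    """Multiply two elements of GF(2^8), reducing by the primitive polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= PRIM
    return result

def generator_poly(two_t: int) -> list:
    """g(x) = (x - alpha)(x - alpha^2)...(x - alpha^{2t}) with alpha = 2;
    coefficients are returned from the constant term upward."""
    g = [1]
    alpha_pow = 1
    for _ in range(two_t):
        alpha_pow = gf_mult(alpha_pow, 2)             # next root alpha^i
        expanded = [0] * (len(g) + 1)
        for j, coeff in enumerate(g):
            expanded[j] ^= gf_mult(coeff, alpha_pow)  # alpha^i * g(x)
            expanded[j + 1] ^= coeff                  # x * g(x)
        g = expanded
    return g

# Assumed example parameters: RS(255, 223), which corrects t = 16 symbol errors
m, two_t = 8, 32
n = 2 ** m - 1          # codeword length
k = n - two_t           # spliced self-learning data symbols
t = two_t // 2
assert n == k + two_t
print(f"RS({n}, {k}) over GF(2^{m}) corrects up to {t} symbol errors")
print(f"deg g(x) = {len(generator_poly(two_t)) - 1}")  # equals 2t = 32
```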
In the data transmission between the intelligent scales and the central system, applying the error control and correction mechanism can significantly improve the reliability of data transmission; by detecting and correcting errors that occur during transmission, the received data can be ensured to be complete and accurate, thereby avoiding system faults or erroneous decisions caused by data errors.
Time division multiplexing divides time into a plurality of time slots, each of which is allocated to a different signal source for transmission; frequency division multiplexing divides the total bandwidth of the communication channel into a plurality of sub-bands, each of which transmits one signal. Combining the two technologies enables a plurality of intelligent scales to perform efficient data transmission with the central system simultaneously, ensuring the timeliness and reliability of data transmission.
Specifically, the step of planning the communication channel includes:
Determining the total bandwidth of a communication channel, dividing the total bandwidth into a plurality of sub-bands, and transmitting a signal by each sub-band to realize frequency division multiplexing;
The number of sub-bands is larger than the number of communication channels, and each sub-band is allocated to at most one communication channel, so that the sub-bands of different communication channels do not overlap and mutual interference is avoided; the sub-bands of the communication channels used for information transmission between the intelligent scales are allocated according to the type of intelligent scale, the people flow in the area where the scale is located, and the data transmission requirements. Assuming that the total bandwidth of the system is B Hz, the total bandwidth B is divided into N frequency sub-bands, and the width of each sub-band is B/N;
Since the people flow in an area changes over time, the redundant sub-band allocation provides flexibility: sub-bands can be freely recombined and split, realizing dynamic allocation of the frequency bands and making adjustment convenient.
A complete communication period is determined; within each sub-band the communication period is divided into a plurality of time slots, and each time slot is allocated to a main scale or an auxiliary scale for data transmission to realize time division multiplexing;
According to the data volume and data transmission requirements of the intelligent scales, including the main scales and the auxiliary scales, the length of the corresponding time slot is set so that every scale has enough time to send its data; a complete communication period T is set, and within each sub-band the communication period is divided into M time slots;
Specifically, a time slot interval is set between adjacent time slots to avoid conflicts on the channel; the time slot interval is between 20 and 60 microseconds, and the time slots together with the time slot intervals form a complete communication period.
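As a minimal illustrative sketch (all numeric values, identifiers and the round-robin assignment policy are assumptions made only for this example), the channel plan described above can be expressed as follows: the total bandwidth B is divided into N sub-bands of width B/N, and each communication period T within a sub-band is divided into M time slots separated by a time slot interval in the 20-60 microsecond range.

```python
from dataclasses import dataclass

@dataclass
class SlotAssignment:
    sub_band: int        # index of the frequency sub-band (FDM)
    slot: int            # index of the time slot within the period (TDM)
    start_us: float      # slot start offset within the communication period
    scale_id: str        # main or auxiliary scale assigned to this slot

def plan_channel(total_bandwidth_hz, n_sub_bands, period_us, n_slots,
                 guard_us, scale_ids):
    """Divide the bandwidth into sub-bands and each period into time slots
    separated by a guard interval, then assign scales round-robin."""
    sub_band_width = total_bandwidth_hz / n_sub_bands
    slot_len = period_us / n_slots - guard_us  # usable transmission time per slot
    assignments = []
    for i, scale_id in enumerate(scale_ids):
        band = i % n_sub_bands
        slot = i // n_sub_bands
        if slot >= n_slots:
            raise ValueError("not enough slots for all scales in one period")
        start = slot * (slot_len + guard_us)
        assignments.append(SlotAssignment(band, slot, start, scale_id))
    return sub_band_width, slot_len, assignments

# Assumed example: 2 MHz total bandwidth, 4 sub-bands, 10 ms period,
# 8 slots per sub-band, 40 microsecond slot interval (within 20-60 us)
width, slot_len, plan = plan_channel(2_000_000, 4, 10_000, 8, 40,
                                     ["main-1", "main-2", "aux-1", "aux-2", "aux-3"])
print(f"sub-band width = {width:.0f} Hz, usable slot length = {slot_len:.0f} us")
for assignment in plan:
    print(assignment)
```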
Specifically, the self-learning data transmission step between the self-learning main scales comprises the following steps:
each self-learning main scale loads the compressed self-learning data packet onto the corresponding sub-band within its allocated time slot and sends it over the communication channel;
the self-learning main scale performs error detection when receiving a self-learning data packet from a communication channel;
When the data are detected to be error-free, the self-learning main scale collates and verifies the received self-learning data packets together with its own self-learning data packet to form an aggregated self-learning data packet;
the aggregated self-learning data packet is updated into the local memory of the self-learning main scale;
However, the self-learning data packet contains a large amount of data, such as commodity pictures, recognition results, weights and time stamps. If the data packet is too large, direct transmission may strain the network bandwidth and reduce transmission efficiency, and this problem cannot be solved by combining time division multiplexing and frequency division multiplexing alone. Facing this situation:
further, the parsing, splitting and transmission of the self-learning data packet comprise the following specific steps:
For each commodity ID, a split self-learning data packet is formed which comprises all the pictures, weights and time stamps corresponding to that commodity ID, sorted in order according to the time stamps;
header information is added to each split self-learning data packet, wherein the header information comprises a serial number, the commodity ID and the number of pictures;
splitting the self-learning data packet helps to prevent transmission failures caused by oversized data packets and avoids the packet-sticking and packet-splitting problems caused by network delay or packet loss;
the self-learning main scale sends the split data packets over the communication channel in different time slots; when the other self-learning main scales receive the split self-learning data packets, they sort them according to the attached serial numbers until all the related serial numbers have been received, and then perform error detection;
after error detection, the header information is parsed, and the data in the local storage are checked and updated according to the commodity ID and the number of pictures;
when data for the same commodity ID already exist in the local storage, the number of pictures is the same and the picture content is consistent, the data for that commodity ID do not need to be updated;
otherwise, only the missing self-learning data are updated;
and the updated self-learning data are aggregated into an aggregated self-learning data packet.
Splitting the self-learning data packet allows the receiving end to process several small packets simultaneously instead of waiting for the whole large data packet to be completely received before processing it, which reduces the computation required for data merging; only the missing data of a given commodity need to be updated, rather than fully parsing and updating all the data, thereby avoiding unnecessary repeated data transmission and storage.
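By way of illustration only, the following Python sketch shows one way the splitting and reassembly described above could be organized: each split packet carries header information (serial number, commodity ID, number of pictures), the receiving scale reorders the packets by serial number, and only missing commodity data are updated in the local store; all identifiers and data structures are assumptions made for this example.

```python
def split_by_commodity(self_learning_data):
    """Split the self-learning data into one packet per commodity ID.
    `self_learning_data` maps commodity_id -> list of (timestamp, picture, weight)."""
    packets = []
    for serial, (commodity_id, records) in enumerate(sorted(self_learning_data.items())):
        records = sorted(records)  # order the records by time stamp
        packets.append({
            "header": {"serial": serial,
                       "commodity_id": commodity_id,
                       "picture_count": len(records)},
            "records": records,
        })
    return packets

def merge_packets(received, local_store):
    """Reorder received packets by serial number, then update only the
    commodity IDs whose data are missing or incomplete in the local store."""
    for packet in sorted(received, key=lambda p: p["header"]["serial"]):
        cid = packet["header"]["commodity_id"]
        existing = local_store.get(cid, [])
        if (len(existing) == packet["header"]["picture_count"]
                and existing == packet["records"]):
            continue  # same picture count and consistent content: no update needed
        # otherwise update only the missing records
        missing = [r for r in packet["records"] if r not in existing]
        local_store[cid] = existing + missing
    return local_store

data = {"SKU-001": [(1, b"img-a", 512.3), (2, b"img-b", 511.9)],
        "SKU-002": [(3, b"img-c", 80.0)]}
packets = split_by_commodity(data)
store = merge_packets(packets, {"SKU-001": [(1, b"img-a", 512.3)]})
assert len(store["SKU-001"]) == 2 and "SKU-002" in store
```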
Clock synchronization between the intelligent scales and the central system uses synchronization signals to control the start and end of data transmission.
The self-learning data are first aggregated among the self-learning main scales, which reduces data redundancy and helps ensure the consistency and accuracy of the data across the different main scales.
Through the data exchange and check verification among the self-learning main scales, potential errors or inconsistencies can be found and corrected before the data are synchronized to the auxiliary scales.
Specifically, the step of the central system updating the aggregated self-learning data packet comprises:
The self-learning main scale sends the aggregated self-learning data packet to the central system, and the central system performs error detection;
when the aggregated self-learning data packet has been transmitted correctly, the central system distributes it to the main scales which did not perform self-learning and to all the auxiliary scales;
and after the main scales which did not perform self-learning and all the auxiliary scales receive the self-learning data from the central system, they update the self-learning data in their local memories.
In light of the above description of the preferred embodiments according to the present invention, various changes and modifications can be made by those skilled in the art without departing from the scope of the technical idea of the present invention. The technical scope of the present invention is not limited to the contents of the description, but must be determined according to the scope of the claims.