Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
The interface between the DDR host and the memory module carries 64 bits of data plus 16 bits of Error Correction Code (ECC). This is a two-channel system in which the 64-bit data bus is divided into two sub-channels, each sub-channel carrying 32 bits of data and 8 bits of ECC; each sub-channel also includes a command/address interface.
FIG. 1 shows a computer system 900 that includes a Central Processing Unit (CPU) 910, a memory controller (MC) 920, and a DDR5 Load Reduced DIMM (DDR5 LRDIMM or D5-LRDIMM) that meets the JEDEC standard. The D5-LRDIMM includes two sub-channels, one of which is shown at 100 in FIG. 1. As shown in FIG. 1, the sub-channel 100 includes 5 x4 DRAM chipsets 141-145 (containing 20 x4 DRAM chips in total), 5 Data Buffers (DB) 111-115, and a registering clock driver (RCD) 130. The RCD 130 is shared by the two sub-channels (the other sub-channel is not shown in FIG. 1); the command/address bus 120 connects the MC 920 and the RCD 130; the first set of data buses 101-105 runs between the external host interface of the sub-channel 100 and the DBs 111-115; the second set of data buses 121-125 runs between the DBs 111-115 and the x4 DRAM chipsets 141-145; the two sets of data buses operate at the same data rate.
DDR5 high bandwidth DIMMs (DDR5 HBDIMM or D5-HBDIMM) are defined as DDR5 DIMMs that achieve double the data rate by interleaving slower, lower-cost DDR4 DRAM chips or DDR5 DRAM chips; that is, they can provide twice the bandwidth of a conventional DDR5 DIMM while using low-cost DDR4 or DDR5 DRAM chips. For an x4 D5-HBDIMM, the DIMM may contain 36 DRAM chips in total (18 per sub-channel), and for an x8 D5-HBDIMM, 40 DRAM chips in total (20 per sub-channel).
FIG. 2 shows a computer system 930 that includes a CPU 940, an MC 950, and a D5-HBDIMM that meets the JEDEC standard. The D5-HBDIMM includes two sub-channels, one of which, sub-channel 200, is shown in FIG. 2. As shown in FIG. 2, sub-channel 200 includes 5 DRAM chipsets 241-245 (containing 20 or 18 DRAM chips in total), 5 High Bandwidth Data Buffers (HBDB) 211-215, and a high bandwidth registering clock driver (HBRCD) 230. The HBRCD 230 is shared by the two sub-channels; the command/address bus 220 connects the MC 950 and the HBRCD 230; the first set of data buses 201-205 runs between the external host interface of sub-channel 200 and the HBDBs 211-215; the second set of data buses 221-225 runs between the HBDBs 211-215 and the DRAM chipsets 241-245. The second set of data buses 221-225 operates at half the data rate of the first set of data buses 201-205; for example, the data rate of the first set of data buses 201-205 is 6400 MT/s and the data rate of the second set of data buses 221-225 is 3200 MT/s.
FIG. 3 is a schematic diagram of the structure of a sub-channel (first sub-channel) of a D5-HBDIMM according to an embodiment of the present invention. As shown in FIG. 3, the first sub-channel 300 includes a first set of data buffers, a registering clock driver, and a first set of DRAM chips. In some embodiments, the first set of data buffers includes 5 High Bandwidth Data Buffers (HBDB) 311-315, the registering clock driver is a high bandwidth registering clock driver (HBRCD) 330, and the first set of DRAM chips includes 5 DRAM chipsets 341-345 containing 20 x8 DRAM chips or 18 x4 DRAM chips in total. The D5-HBDIMM also includes a second sub-channel (not shown in FIG. 3), which may have the same structure as the first sub-channel; correspondingly, the second sub-channel includes a second set of data buffers, the registering clock driver, and a second set of DRAM chips. The registering clock driver is shared by the first sub-channel and the second sub-channel. For details of the second sub-channel, reference may be made to the first sub-channel, and they will not be repeated here.
The command/address bus 320 connects the MC (not shown in FIG. 3) and the high bandwidth registering clock driver 330; the first set of data buses 301-305 runs between the external host interface of the first sub-channel 300 and the high bandwidth data buffers 311-315; the second set of data buses 321-325 runs between the high bandwidth data buffers 311-315 and the DRAM chipsets 341-345. The second set of data buses 321-325 operates at half the data rate of the first set of data buses 301-305; for example, the data rate of the first set of data buses 301-305 is 6400 MT/s and the data rate of the second set of data buses 321-325 is 3200 MT/s.
It should be appreciated that the memory system (or computer system) may include a CPU, an MC, and a D5-HBDIMM having the first and second sub-channels described above. A DIMM (Dual In-line Memory Module) is a module that includes a plurality of Random Access Memory (RAM) chips on a small circuit board and stores the programs and data that the CPU needs to execute. The CPU manages, reads, and writes the DIMM through the MC. The RCD performs signal conditioning and mediates between the host and the DRAM.
FIG. 4 is a schematic diagram of the MC writing data to the sub-channel of the D5-HBDIMM shown in FIG. 3 in an x4 configuration. As shown in FIG. 4, the first sub-channel 300 includes 5 x4 DRAM chipsets 341-345, specifically a first x4 DRAM chipset 341, a second x4 DRAM chipset 342, a third x4 DRAM chipset 343, a fourth x4 DRAM chipset 344, and a fifth x4 DRAM chipset 345. Each of the first to fourth x4 DRAM chipsets 341-344 includes 4 x4 DRAM chips, the fifth x4 DRAM chipset 345 includes 2 x4 DRAM chips, and the x4 DRAM chipsets 341-345 collectively include 18 x4 DRAM chips. The first set of data buses 301-305 specifically includes a first host side data bus 301, a second host side data bus 302, a third host side data bus 303, a fourth host side data bus 304, and a fifth host side data bus 305. The second set of data buses 321-325 specifically includes a first memory side data bus 321, a second memory side data bus 322, a third memory side data bus 323, a fourth memory side data bus 324, and a fifth memory side data bus 325.
The first host side data bus 301 connects the external host interface and the first high bandwidth data buffer 311, and the first memory side data bus 321 connects the first high bandwidth data buffer 311 and the first x4 DRAM chipset 341. The second host side data bus 302 connects the external host interface and the second high bandwidth data buffer 312, and the second memory side data bus 322 connects the second high bandwidth data buffer 312 and the second x4 DRAM chipset 342. The third host side data bus 303 connects the external host interface and the third high bandwidth data buffer 313, and the third memory side data bus 323 connects the third high bandwidth data buffer 313 and the third x4 DRAM chipset 343. The fourth host side data bus 304 connects the external host interface and the fourth high bandwidth data buffer 314, and the fourth memory side data bus 324 connects the fourth high bandwidth data buffer 314 and the fourth x4 DRAM chipset 344. The fifth host side data bus 305 connects the external host interface and the fifth high bandwidth data buffer 315, and the fifth memory side data bus 325 connects the fifth high bandwidth data buffer 315 and the fifth x4 DRAM chipset 345.
In an x4 DIMM, the 64-bit data bus and 8-bit ECC are divided into two sub-channels, with 32 bits of data and 4 bits of ECC per clock edge for each sub-channel. In an embodiment of the present invention, four consecutive bits on the data bus are defined as a nibble. On the rising clock edge, the external host interface of the first sub-channel 300 carries 8 data nibbles (N0, N1, ..., N7) and one ECC nibble (ECC0); on the falling clock edge, the external host interface of the first sub-channel 300 carries 8 data nibbles (N8, N9, ..., N15) and one ECC nibble (ECC1).
Taking the first high bandwidth data buffer 311 as an example, it fetches data nibbles N0 and N1 from the first host side data bus 301 at a rate of 6400 MT/s on the rising edge of clock DCK, fetches data nibbles N8 and N9 from the first host side data bus 301 on the falling edge of clock DCK, and, after a fixed delay time Tpdm, sends N0, N1, N8, and N9 together over the first memory side data bus 321 at a rate of 3200 MT/s to the 4 x4 DRAM chips within the first x4 DRAM chipset 341, each DRAM chip receiving one data nibble. In some embodiments, Tpdm is between 1.1 ns + tCK/4 and 1.62 ns + tCK/4, where tCK is the period of clock DCK.
Similarly, the second high bandwidth data buffer 312 fetches data nibbles N2 and N3 on the rising edge of clock DCK, fetches data nibbles N10 and N11 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N2, N3, N10, and N11 together to the 4 x4 DRAM chips within the second x4 DRAM chipset 342, each DRAM chip receiving one data nibble. The third high bandwidth data buffer 313 fetches data nibbles N4 and N5 on the rising edge of clock DCK, fetches data nibbles N12 and N13 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N4, N5, N12, and N13 together to the 4 x4 DRAM chips within the third x4 DRAM chipset 343, each DRAM chip receiving one data nibble. The fourth high bandwidth data buffer 314 fetches data nibbles N6 and N7 on the rising edge of clock DCK, fetches data nibbles N14 and N15 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N6, N7, N14, and N15 together to the 4 x4 DRAM chips within the fourth x4 DRAM chipset 344, each DRAM chip receiving one data nibble.
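The x4 buffer behavior just described can be summarized with a small software model. The sketch below is illustrative only; the class name X4HighBandwidthDataBuffer and its method names are hypothetical, and the model captures only the regrouping of nibbles, not the electrical or timing behavior of an actual HBDB.

```python
# Minimal illustrative model of one x4 high bandwidth data buffer (HBDB).
# A sketch under stated assumptions, not the actual device logic: it shows
# how two nibbles captured on the rising edge and two nibbles captured on
# the falling edge are regrouped and fanned out, one nibble per x4 DRAM
# chip, at half the host data rate after the fixed delay Tpdm.

from dataclasses import dataclass, field
from typing import List

NIBBLE_MASK = 0xF  # a nibble is 4 consecutive bits on the data bus


@dataclass
class X4HighBandwidthDataBuffer:
    """Hypothetical model of one HBDB serving one x4 DRAM chipset (4 chips)."""
    rising_nibbles: List[int] = field(default_factory=list)
    falling_nibbles: List[int] = field(default_factory=list)

    def capture_rising(self, n_a: int, n_b: int) -> None:
        # e.g. HBDB 311 captures N0 and N1 at 6400 MT/s on the rising edge of DCK
        self.rising_nibbles = [n_a & NIBBLE_MASK, n_b & NIBBLE_MASK]

    def capture_falling(self, n_a: int, n_b: int) -> None:
        # e.g. HBDB 311 captures N8 and N9 on the falling edge of DCK
        self.falling_nibbles = [n_a & NIBBLE_MASK, n_b & NIBBLE_MASK]

    def forward_to_chipset(self) -> List[int]:
        # After the fixed delay Tpdm, all four nibbles are driven on the
        # memory side data bus at half rate (e.g. 3200 MT/s), one nibble
        # per x4 DRAM chip in the chipset.
        return self.rising_nibbles + self.falling_nibbles


if __name__ == "__main__":
    hbdb_311 = X4HighBandwidthDataBuffer()
    hbdb_311.capture_rising(0x3, 0xA)     # N0, N1
    hbdb_311.capture_falling(0x7, 0xC)    # N8, N9
    print(hbdb_311.forward_to_chipset())  # -> [3, 10, 7, 12], one nibble per chip
```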
In the embodiment of the present invention, each nibble is defined as an ECC symbol, and the MC generates two ECC nibbles by computing a Reed-Solomon code over a Galois Field (GF) for 16 consecutive data nibbles to support the chipkill correction algorithm. Specifically, an RS(18, 16) code is generated in GF(2^4) for the 16 consecutive data nibbles, producing the two ECC nibbles ECC0 and ECC1.
The fifth high bandwidth data buffer 315 fetches the nibble ECC0 from the fifth host side data bus 305 at a rate of 6400 MT/s on the rising edge of clock DCK, fetches the nibble ECC1 from the fifth host side data bus 305 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends ECC0 and ECC1 together over the fifth memory side data bus 325 at a rate of 3200 MT/s to the 2 x4 DRAM chips within the fifth x4 DRAM chipset 345, each DRAM chip receiving one ECC nibble.
In some embodiments, the 18 x4 DRAM chips within the chipsets 341-345 are DDR4 DRAM chips. In some embodiments, the 18 x4 DRAM chips within the DRAM chipsets 341-345 are DDR5 DRAM chips.
In the x4 DIMM configuration, the data bus width of each DRAM chip is 4 bits, each access to a channel activates 18 DRAM chips, and error detection results in only 12.5% memory overhead, since there are 16 data chips per channel and 2 chips provide chipkill reliability. Compared with the prior art, the memory overhead generated by error detection is reduced, the number of DRAM chips is reduced, and chipkill reliability is provided.
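The overhead figure follows from simple counting. The snippet below is only an illustrative calculation; the conventional baseline of 8 data chips plus 2 ECC chips per x4 sub-channel is an assumption introduced here for comparison and is not stated above.

```python
# Back-of-the-envelope ECC overhead check (illustrative only).
data_chips = 16   # x4 data chips per sub-channel in the described configuration
ecc_chips = 2     # x4 chips holding the two ECC nibbles

overhead = ecc_chips / data_chips
print(f"ECC overhead: {overhead:.1%}")                    # 12.5%

# Assumed conventional baseline for comparison: 8 data + 2 ECC chips per
# x4 sub-channel would give 2 / 8 = 25% overhead.
baseline = 2 / 8
print(f"Assumed conventional overhead: {baseline:.1%}")   # 25.0%
```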
FIG. 5 is a schematic diagram of the MC writing data to the sub-channel of the D5-HBDIMM shown in FIG. 3 in an x8 configuration. As shown in FIG. 5, the first sub-channel 300 includes 5 x8 DRAM chipsets 341-345, specifically a first x8 DRAM chipset 341, a second x8 DRAM chipset 342, a third x8 DRAM chipset 343, a fourth x8 DRAM chipset 344, and a fifth x8 DRAM chipset 345. Each of the first to fifth x8 DRAM chipsets 341-345 includes 4 x8 DRAM chips, and the x8 DRAM chipsets 341-345 include 20 x8 DRAM chips in total.
In an x8 DIMM, the 64-bit data bus and 16-bit ECC are divided into two sub-channels, with 32 bits of data and 8 bits of ECC per clock edge for each sub-channel. On the rising clock edge, the external host interface of the first sub-channel 300 carries 8 data nibbles (N0, N1, ..., N7) and one ECC byte (ECC0); on the falling clock edge, the external host interface of the first sub-channel 300 carries 8 data nibbles (N8, N9, ..., N15) and one ECC byte (ECC1).
Taking the first high bandwidth data buffer 311 as an example, it fetches data nibbles N0 and N1 from the first host side data bus 301 at a rate of 6400 MT/s on the rising edge of clock DCK, fetches data nibbles N8 and N9 from the first host side data bus 301 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N0 and N1 together to one x8 DRAM chip in the first x8 DRAM chipset 341 and sends N8 and N9 together to another x8 DRAM chip in the first x8 DRAM chipset 341 over the first memory side data bus 321.
Similarly, the second high bandwidth data buffer 312 fetches data nibbles N2 and N3 on the rising edge of clock DCK, fetches data nibbles N10 and N11 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N2 and N3 together to one x8 DRAM chip within the second x8 DRAM chipset 342 and sends N10 and N11 together to another x8 DRAM chip within the second x8 DRAM chipset 342. The third high bandwidth data buffer 313 fetches data nibbles N4 and N5 on the rising edge of clock DCK, fetches data nibbles N12 and N13 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N4 and N5 together to one x8 DRAM chip within the third x8 DRAM chipset 343 and sends N12 and N13 together to another x8 DRAM chip within the third x8 DRAM chipset 343. The fourth high bandwidth data buffer 314 fetches data nibbles N6 and N7 on the rising edge of clock DCK, fetches data nibbles N14 and N15 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends N6 and N7 together to one x8 DRAM chip within the fourth x8 DRAM chipset 344 and sends N14 and N15 together to another x8 DRAM chip within the fourth x8 DRAM chipset 344.
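A corresponding sketch for the x8 case highlights the routing difference: the two nibbles captured on the same clock edge are packed together and sent to a single x8 chip. The function name route_x8 and the chip labels are hypothetical, and the high/low nibble ordering inside each byte is an assumption that follows the {N1,N0} notation of the encoding scheme described later.

```python
# Illustrative-only model of x8 routing in one high bandwidth data buffer:
# unlike the x4 case (one nibble per chip), each pair of nibbles captured on
# the same clock edge is sent together to a single x8 DRAM chip.

from typing import Dict, Tuple


def route_x8(rising: Tuple[int, int], falling: Tuple[int, int]) -> Dict[str, int]:
    """Combine each nibble pair into one byte and assign it to one x8 chip."""
    def to_byte(low_nibble: int, high_nibble: int) -> int:
        return ((high_nibble & 0xF) << 4) | (low_nibble & 0xF)

    return {
        "x8_chip_A": to_byte(*rising),    # e.g. N0 and N1 -> one x8 chip
        "x8_chip_B": to_byte(*falling),   # e.g. N8 and N9 -> another x8 chip
    }


if __name__ == "__main__":
    # -> {'x8_chip_A': 163, 'x8_chip_B': 199}, i.e. 0xA3 and 0xC7
    print(route_x8((0x3, 0xA), (0x7, 0xC)))
```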
In the embodiment of the present invention, each byte is defined as an ECC symbol, and the MC generates two ECC bytes by computing a Reed-Solomon code over a Galois field for 8 consecutive data bytes to support the chipkill correction algorithm. Specifically, an RS(10, 8) code is generated in GF(2^8) for the 8 consecutive data bytes, producing the two ECC bytes ECC0 and ECC1.
The fifth high bandwidth data buffer 315 fetches the byte ECC0 from the fifth host side data bus 305 at a rate of 6400 MT/s on the rising edge of clock DCK, fetches the byte ECC1 from the fifth host side data bus 305 on the falling edge of clock DCK, and, after the fixed delay time Tpdm, sends ECC0 and ECC1 together over the fifth memory side data bus 325 at a rate of 3200 MT/s to 2 x8 DRAM chips within the fifth x8 DRAM chipset 345, each DRAM chip receiving one ECC byte.
In some embodiments, the x8 DRAM chipsets 341-345 comprise 20 x8 DDR4 DRAM chips. In some embodiments, the x8 DRAM chipsets 341-345 comprise 20 x8 DDR5 DRAM chips.
In the x8 DIMM configuration, the data bus width of each DRAM chip is 8 bits, only 10 DRAM chips need to be activated for each access to a channel, and the chipkill function can be realized by selecting an appropriate ECC code and storing the two pieces of ECC data in two x8 DRAM chips, respectively. Therefore, compared with the prior art, the invention provides chipkill reliability with lower power consumption.
FIG. 6 is a schematic diagram of the structure of a Reed-Solomon code comprising k original data symbols of m bits each and 2t parity symbols of m bits each, where n = k + 2t is the total number of code symbols and t is the number of symbol errors that can be corrected. In the x4 D5-HBDIMM, m = 4, k = 16, t = 1, and n = 18. In the x8 D5-HBDIMM, m = 8, k = 8, t = 1, and n = 10. Since each DRAM chip holds exactly one m-bit symbol per transfer, a code with t = 1 can correct the loss of any one symbol, and therefore any single DRAM chip failure in the D5-HBDIMM of the present embodiment can be corrected, thereby implementing the chipkill function.
In some embodiments, in the x4 D5-HBDIMM, the two ECC nibbles ECC0 and ECC1 may be generated as two 4-bit parity symbols using the following encoding scheme:
ECC0 = N0 + N1 + ... + N15;
ECC1 = α[1]*N0 + α[2]*N1 + ... + α[16]*N15.
where α[i] is the corresponding coefficient over the Galois field GF(2^4).
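The x4 parity computation can be illustrated with a short routine. The sketch below implements the two equations above under assumptions made only so that the example runs: GF(2^4) arithmetic with the primitive polynomial x^4 + x + 1, and α[i] taken as the i-th power of the primitive element. The actual coefficients and field representation used by a memory controller may differ.

```python
# Hedged sketch of the x4 parity generation described above: ECC0 is the
# GF(2^4) sum (bitwise XOR) of the 16 data nibbles, and ECC1 is a weighted
# GF(2^4) sum.  The coefficient choice alpha[i] = a^i with the primitive
# polynomial x^4 + x + 1 is an assumption for illustration.

from typing import List, Tuple

GF16_PRIM_POLY = 0b10011  # x^4 + x + 1 (assumed primitive polynomial)


def gf16_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^4) with reduction by x^4 + x + 1."""
    result = 0
    for _ in range(4):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:              # degree reached 4, reduce
            a ^= GF16_PRIM_POLY
    return result & 0xF


# alpha[i] = a^i, where a = x (binary 0b0010) is taken as the primitive element.
ALPHA: List[int] = [1]
for _ in range(16):
    ALPHA.append(gf16_mul(ALPHA[-1], 0b0010))


def encode_x4(nibbles: List[int]) -> Tuple[int, int]:
    """Compute (ECC0, ECC1) for 16 data nibbles N0..N15 per the scheme above."""
    assert len(nibbles) == 16
    ecc0 = 0
    ecc1 = 0
    for i, n in enumerate(nibbles):
        ecc0 ^= n & 0xF                          # ECC0 = N0 + N1 + ... + N15
        ecc1 ^= gf16_mul(ALPHA[i + 1], n & 0xF)  # ECC1 = a[1]*N0 + ... + a[16]*N15
    return ecc0, ecc1


if __name__ == "__main__":
    data = [i & 0xF for i in range(16)]  # example nibbles N0..N15
    print(encode_x4(data))               # ECC0 is 0 here (XOR of 0..15 is 0)
```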
In some embodiments, in the x8 D5-HBDIMM, the two ECC bytes ECC0 and ECC1 may be generated as two 8-bit parity symbols using the following encoding scheme:
ECC0 = {N1,N0} + {N3,N2} + ... + {N15,N14};
ECC1 = α[1]*{N1,N0} + α[2]*{N3,N2} + ... + α[8]*{N15,N14}.
where α[i] is the corresponding coefficient over the Galois field GF(2^8).
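A companion sketch for the x8 scheme packs nibble pairs into bytes {N1,N0}, ..., {N15,N14} and applies the same two-equation structure over GF(2^8). The primitive polynomial 0x11D and the choice α[i] = α^i are again assumptions made only for illustration.

```python
# Hedged sketch of the x8 parity generation: bytes {N1,N0}, ..., {N15,N14}
# are summed in GF(2^8) for ECC0 and weighted by alpha[1..8] for ECC1.
# The primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) and the choice
# alpha[i] = a^i are assumptions so that the example is runnable.

from typing import List, Tuple

GF256_PRIM_POLY = 0x11D  # assumed primitive polynomial for GF(2^8)


def gf256_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^8) with reduction by 0x11D."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= GF256_PRIM_POLY
    return result & 0xFF


ALPHA8: List[int] = [1]
for _ in range(8):
    ALPHA8.append(gf256_mul(ALPHA8[-1], 0x02))  # alpha[i] = a^i, a = x


def encode_x8(nibbles: List[int]) -> Tuple[int, int]:
    """Compute (ECC0, ECC1) bytes for 16 data nibbles N0..N15 as described."""
    assert len(nibbles) == 16
    # Pack nibble pairs into bytes: {N1,N0}, {N3,N2}, ..., {N15,N14}
    data_bytes = [((nibbles[2 * j + 1] & 0xF) << 4) | (nibbles[2 * j] & 0xF)
                  for j in range(8)]
    ecc0 = 0
    ecc1 = 0
    for j, byte in enumerate(data_bytes):
        ecc0 ^= byte                            # ECC0 = sum of the 8 data bytes
        ecc1 ^= gf256_mul(ALPHA8[j + 1], byte)  # ECC1 = a[1]*{N1,N0} + ...
    return ecc0, ecc1


if __name__ == "__main__":
    print(encode_x8([i & 0xF for i in range(16)]))
```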
FIG. 7 is a schematic flow chart of the implementation of chipkill in the D5-HBDIMM according to an embodiment of the present invention. As shown in FIG. 7, the method for implementing chipkill in the D5-HBDIMM includes the following steps (an illustrative software sketch of the decision flow is given after the steps listed below):
Step S1, initializing the D5-HBDIMM;
Step S2, reading the configuration of the D5-HBDIMM;
Step S3, judging, according to the read configuration, whether the D5-HBDIMM supports the chipkill function;
Step S4-1, when the D5-HBDIMM supports the chipkill function, further judging whether the D5-HBDIMM is an x4 DIMM;
Step S4-2, when the D5-HBDIMM does not support the chipkill function, using Hamming ECC encoding; the memory controller is then ready to access the DIMM, and the flow proceeds to step S6;
Step S5-1, when the D5-HBDIMM is an x4 DIMM, combining every 4 consecutive received data bits into one nibble having the same width as the DRAM data bus, generating an RS(18, 16) code in GF(2^4) for 16 consecutive data nibbles to support the chipkill correction algorithm, and generating two ECC nibbles; the memory controller is then ready to access the DIMM, and the flow proceeds to step S6;
Step S5-2, when the D5-HBDIMM is not an x4 DIMM, combining every two consecutive data nibbles into one byte having the same width as the DRAM data bus, generating an RS(10, 8) code in GF(2^8) for 8 consecutive data bytes to support the chipkill correction algorithm, and generating two one-byte ECCs; the memory controller is then ready to access the DIMM, and the flow proceeds to step S6;
Step S6, the memory controller accesses the D5-HBDIMM.
When the D5-HBDIMM supports the chipkill function, step S6 further includes the following steps:
transmitting first data at a first rate to a first sub-channel of the D5-HBDIMM, the first sub-channel including a first set of data buffers and a first set of DRAM chips;
transmitting second data at the first rate to a second sub-channel of the D5-HBDIMM, the second sub-channel including a second set of data buffers and a second set of DRAM chips;
receiving, by the first set of data buffers and the second set of data buffers, the first data and the second data, respectively, and storing the first data and the second data to the first set of DRAM chips and the second set of DRAM chips, respectively, at a second rate, wherein the first set of DRAM chips and the second set of DRAM chips each comprise a plurality of DRAM chipsets;
transmitting a first ECC code to the first sub-channel of the D5-HBDIMM at the first rate;
transmitting a second ECC code to the second sub-channel of the D5-HBDIMM at the first rate; and
receiving, by the first set of data buffers and the second set of data buffers, the first ECC code and the second ECC code, respectively, and storing the first ECC code and the second ECC code to the first set of DRAM chips and the second set of DRAM chips, respectively, at the second rate.
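The decision portion of the flow of FIG. 7 (steps S3 through S5-2) can be summarized as follows. The sketch is illustrative only; DimmConfig, its fields, and the helper functions are hypothetical names that simply mirror the branching described above.

```python
# Hedged sketch of the configuration flow of FIG. 7.  Helper names
# (DimmConfig, select_ecc_scheme, bring_up) and the config fields are
# hypothetical; the sketch only mirrors the branching described above:
# chipkill-capable x4 DIMMs use RS(18,16) over GF(2^4), chipkill-capable
# x8 DIMMs use RS(10,8) over GF(2^8), others fall back to Hamming ECC.

from dataclasses import dataclass


@dataclass
class DimmConfig:
    supports_chipkill: bool
    device_width: int  # 4 for an x4 DIMM, 8 for an x8 DIMM


def select_ecc_scheme(cfg: DimmConfig) -> str:
    """Steps S3 to S5: choose the ECC scheme from the DIMM configuration."""
    if not cfg.supports_chipkill:
        return "Hamming ECC"                 # step S4-2
    if cfg.device_width == 4:
        return "RS(18,16) over GF(2^4)"      # step S5-1: 16 data nibbles + 2 ECC nibbles
    return "RS(10,8) over GF(2^8)"           # step S5-2: 8 data bytes + 2 ECC bytes


def bring_up(cfg: DimmConfig) -> str:
    # Steps S1/S2 (initialization and configuration read) are assumed done
    # by the caller; step S6 (normal access) would follow scheme selection.
    scheme = select_ecc_scheme(cfg)
    return f"memory controller ready, ECC scheme = {scheme}"


if __name__ == "__main__":
    print(bring_up(DimmConfig(supports_chipkill=True, device_width=4)))
    print(bring_up(DimmConfig(supports_chipkill=True, device_width=8)))
    print(bring_up(DimmConfig(supports_chipkill=False, device_width=8)))
```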
As described above, the second sub-channel has the same structure as the first sub-channel; the process of receiving and storing data is described in detail below using the first sub-channel as an example. The second sub-channel follows the same procedure as the first sub-channel and is not described again here.
The first set of data buffers obtains the first data and the first ECC code from a plurality of host side data buses at the first rate, and stores the first data and the first ECC code to a plurality of DRAM chipsets at the second rate, wherein the host side data buses, the data buffers, the memory side data buses, and the DRAM chipsets are in one-to-one correspondence.
In some implementations, the first data includes a first portion and a second portion, and the first ECC code includes first ECC data and second ECC data. The first sub-channel obtains the first portion of the first data and the first ECC data from the plurality of host side data buses on a rising edge of the clock, and obtains the second portion of the first data and the second ECC data from the plurality of host side data buses on a falling edge of the clock. In some embodiments, after obtaining the first data and the first ECC code, the first sub-channel stores them to the plurality of DRAM chipsets over the plurality of memory side data buses at the second rate after a fixed delay time Tpdm. In some embodiments, Tpdm is between 1.1 ns + tCK/4 and 1.62 ns + tCK/4, where tCK is the period of clock DCK.
In some embodiments, the second rate is half the first rate; for example, the first rate is 6400 MT/s and the second rate is 3200 MT/s.
In some embodiments, the D5-HBDIMM is an x4 DIMM and the DRAM chips are x4 DRAM chips; the first set of DRAM chips comprises a plurality of DRAM chipsets, one DRAM chipset having 2 DRAM chips and each of the other DRAM chipsets having 4 DRAM chips. The first ECC code is stored in the chipset having 2 DRAM chips. In some embodiments, the first data comprises a plurality of data nibbles, the first ECC data and the second ECC data are each one nibble in length, and each of the data nibbles, the first ECC data, and the second ECC data is stored in a respective DRAM chip of the first set of DRAM chips. That is, each data nibble, the first ECC data, and the second ECC data each occupy one DRAM chip. In some embodiments, the first data comprises 16 data nibbles, the first set of DRAM chips comprises 18 DRAM chips, and the 16 data nibbles, the first ECC data, and the second ECC data are stored in the 18 DRAM chips of the first set of DRAM chips, respectively; that is, 18 DRAM chips need to be activated per access to the sub-channel.
In some embodiments, the D5-HBDIMM is an x8 DIMM and the DRAM chips are x8 DRAM chips; the first set of DRAM chips comprises a plurality of DRAM chipsets each having 4 DRAM chips. The first ECC code is stored in one of the DRAM chipsets. In some embodiments, the first data comprises a plurality of data nibbles, the first ECC data and the second ECC data are each one byte in length, and every two of the data nibbles, as well as the first ECC data and the second ECC data, are stored in respective DRAM chips of the first set of DRAM chips. That is, every two data nibbles occupy one DRAM chip, and the first ECC data and the second ECC data each occupy one DRAM chip. In some embodiments, the first data comprises 16 data nibbles, the first set of DRAM chips comprises 20 DRAM chips, and the 16 data nibbles (two per chip), the first ECC data, and the second ECC data are stored in 10 DRAM chips of the first set of DRAM chips.
That is, each sub-channel of the x8 DIMM has two more DRAM chips than that of the x4 DIMM, but only 10 of the DRAM chips need to be activated per access to the sub-channel. By generating an appropriate ECC code and storing its data in two x8 DRAM chips, chipkill error detection and correction can be realized with high reliability, and because fewer chips are activated per access, the power consumption can be significantly reduced.
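The activation counts compared above can be tallied directly from the symbol layout. The snippet below is a simple illustrative tally, not device behavior, and assumes one nibble per x4 chip and one byte (two nibbles) per x8 chip plus the two ECC symbols per access.

```python
# Illustrative tally of DRAM chips activated per sub-channel access,
# derived from the symbol layout described above (assumption: one nibble
# per x4 chip, one byte per x8 chip, plus two ECC symbols per access).

DATA_NIBBLES_PER_ACCESS = 16
ECC_SYMBOLS = 2  # ECC0 and ECC1


def chips_activated(device_width: int) -> int:
    if device_width == 4:
        # x4: every data nibble and every ECC nibble lands on its own chip
        return DATA_NIBBLES_PER_ACCESS + ECC_SYMBOLS       # 18
    if device_width == 8:
        # x8: nibbles pair into bytes, one byte per chip, plus two ECC bytes
        return DATA_NIBBLES_PER_ACCESS // 2 + ECC_SYMBOLS  # 10
    raise ValueError("unsupported device width")


if __name__ == "__main__":
    print("x4:", chips_activated(4))  # 18 chips activated per access
    print("x8:", chips_activated(8))  # 10 chips activated per access
```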
The method can execute the chipkill algorithm according to the needs of the user of the computer system, determine whether the D5-HBDIMM has the required number of ECC chips to execute the chipkill algorithm on the DRAM chips, and switch between the chipkill algorithm and Hamming ECC, thereby being suitable for more application scenarios.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending upon the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the method embodiments described above may be implemented by a program instructing associated hardware; when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that various changes and substitutions are possible within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.