Disclosure of Invention
Based on the above technical problems, the invention aims to configure a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network, to take the pulse rearrangement depth residual neural network in which the pulse rearrangement residual modules are configured as a first pulse rearrangement model, and to process target data with the first pulse rearrangement model.
The first aspect of the invention provides a data processing method based on a pulse rearrangement depth residual neural network, which comprises the following steps:
Configuring a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network;
taking the pulse rearrangement depth residual neural network in which the pulse rearrangement residual modules are configured as a first pulse rearrangement model;
training the first pulse rearrangement model;
and inputting the target data into a trained first pulse rearrangement model for processing to obtain a target processing result.
In some embodiments of the present invention, the first pulse rearrangement model is further provided with a first convolution layer, a pooling layer and a full connection layer, and the inputting the target data into the trained first pulse rearrangement model for processing comprises:
inputting target data into the first convolution layer for downsampling;
Inputting the down-sampled target data into the plurality of immediately adjacent pulse rearrangement residual modules to perform pulse rearrangement residual processing to obtain a first processing result;
and sequentially inputting the first processing result into the pooling layer and the full-connection layer to obtain a second processing result.
In some embodiments of the present invention, the obtaining the target processing result includes:
Acquiring a category corresponding to the target data according to the second processing result;
obtaining a regression sequence and/or a regression single vector based on the second processing result;
And taking the category, the regression sequence and/or the regression single vector corresponding to the target data as a target processing result.
In some embodiments of the present invention, the pulse rearrangement residual module is sequentially configured with a pulse rearrangement layer, a second convolution layer, a normalization layer, a pulse neuron layer and a pulse inverse rearrangement layer, and the inputting of the down-sampled target data into the plurality of immediately adjacent pulse rearrangement residual modules to perform pulse rearrangement residual processing to obtain a first processing result includes:
inputting the down-sampled target data into a pulse rearrangement layer to obtain a pulse rearrangement result;
and processing the pulse rearrangement result sequentially through a second convolution layer, a normalization layer, a pulse neuron layer and a pulse inverse rearrangement layer to obtain a first processing result.
In some embodiments of the present invention, the down-sampled target data is input into the pulse rearrangement layer to obtain a pulse rearrangement result, where the formula is:

Y[n, z, y, x] = X[n, z·r² + r·(y % r) + (x % r), ⌊y/r⌋, ⌊x/r⌋]

wherein Y represents the rearrangement operation performed by the pulse rearrangement layer, X represents the down-sampled target data, n, z, y, x denote in turn the batch index, channel index, height and width of the down-sampled target data, M represents the total number of channels, r represents the rearrangement coefficient, % represents the remainder operation, and ⌊·⌋ represents rounding down.
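For illustration, the rearrangement described above can be sketched in pure Python. The index mapping below is one plausible channel-to-space arrangement consistent with the symbols n, z, y, x, the coefficient r, and the remainder/floor operations mentioned; the function name and the exact bit of channel ordering are assumptions, not taken from the original text:

```python
def pulse_rearrange(X, r):
    """X: nested list of shape [N][M][H][W]; returns Y of shape [N][M/r^2][rH][rW]."""
    N, M, H, W = len(X), len(X[0]), len(X[0][0]), len(X[0][0][0])
    assert M % (r * r) == 0, "channel count must be divisible by r^2"
    Mo, Ho, Wo = M // (r * r), H * r, W * r
    Y = [[[[0] * Wo for _ in range(Ho)] for _ in range(Mo)] for _ in range(N)]
    for n in range(N):
        for z in range(Mo):
            for y in range(Ho):
                for x in range(Wo):
                    # Y[n,z,y,x] = X[n, z*r^2 + r*(y % r) + (x % r), y//r, x//r]
                    Y[n][z][y][x] = X[n][z * r * r + r * (y % r) + (x % r)][y // r][x // r]
    return Y
```

With r = 2 and a [1, 4, 1, 1] input whose channel c holds the value c, the four channels fold into one 2×2 spatial block.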
In some embodiments of the invention, a first one of the plurality of immediately adjacent pulse rearrangement residual modules is further configured with a downsampling function.
In some embodiments of the present invention, before the target data is input into the trained first pulse rearrangement model for processing, the method further includes:
Acquiring data to be input;
If the data to be input is a single number, repeating the single number a preset number of times to obtain a sequence of a preset length; if the data to be input is a sequence of length T formed by a plurality of numbers, wherein T is greater than 1, the sequence does not need to be repeated;
and taking the sequence of the preset length or the sequence of length T as the target data.
A second aspect of the present invention provides a data processing apparatus based on a pulse-reorder depth residual neural network, the apparatus comprising:
The configuration module is used for configuring a plurality of adjacent pulse rearrangement residual modules in the pulse rearrangement depth residual neural network;
the rearrangement module is used for taking the pulse rearrangement depth residual neural network in which the pulse rearrangement residual modules are configured as a first pulse rearrangement model;
the training module is used for training the first pulse rearrangement model;
and the processing module is used for inputting the target data into the trained first pulse rearrangement model for processing to obtain a target processing result.
A third aspect of the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, wherein the processor executes the computer program to implement the steps of:
Configuring a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network;
taking the pulse rearrangement depth residual neural network in which the pulse rearrangement residual modules are configured as a first pulse rearrangement model;
training the first pulse rearrangement model;
and inputting the target data into a trained first pulse rearrangement model for processing to obtain a target processing result.
A fourth aspect of the present invention provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Configuring a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network;
taking the pulse rearrangement depth residual neural network in which the pulse rearrangement residual modules are configured as a first pulse rearrangement model;
training the first pulse rearrangement model;
and inputting the target data into a trained first pulse rearrangement model for processing to obtain a target processing result.
The technical scheme provided by the embodiment of the application has at least the following technical effects or advantages:
The application configures a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network, takes the network in which the modules are configured as a first pulse rearrangement model, trains the first pulse rearrangement model, and inputs target data into the trained model for processing to obtain a target processing result. The rearrangement processing greatly reduces the number of parameters, lowers the risk of overfitting, and reduces storage and computation costs, thereby improving the efficiency of data processing. Meanwhile, the method can obtain the category corresponding to the target data as well as a regression sequence or a regression single vector, and is therefore suitable for various application scenarios.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the application. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present application. It will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the application.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise. Furthermore, it will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The figures are not drawn to scale, wherein certain details may be exaggerated and certain details may be omitted for clarity of presentation. The shapes of the various regions, layers and relative sizes, positional relationships between them shown in the drawings are merely exemplary, may in practice deviate due to manufacturing tolerances or technical limitations, and one skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions as actually required.
Several examples are given below in connection with the description of fig. 1-6 to describe exemplary embodiments according to the present application. It should be noted that the following application scenarios are only shown for facilitating understanding of the spirit and principles of the present application, and embodiments of the present application are not limited in this respect. Rather, embodiments of the application may be applied to any scenario where applicable.
Example 1:
the embodiment provides a data processing method based on a pulse rearrangement depth residual neural network, as shown in fig. 1, the method includes:
S1, configuring a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network;
S2, taking the pulse rearrangement depth residual neural network in which the pulse rearrangement residual modules are configured as a first pulse rearrangement model;
S3, training the first pulse rearrangement model;
S4, inputting the target data into the trained first pulse rearrangement model for processing to obtain a target processing result.
In a specific implementation manner, referring to fig. 2, the first pulse rearrangement model is further provided with a first convolution layer, a pooling layer and a full connection layer, and inputting target data into the trained first pulse rearrangement model for processing includes inputting the target data into the first convolution layer for downsampling, inputting the downsampled target data into a plurality of immediately adjacent pulse rearrangement residual modules for pulse rearrangement residual processing to obtain a first processing result, and sequentially inputting the first processing result into the pooling layer and the full connection layer to obtain a second processing result.
In a specific implementation, obtaining the target processing result includes obtaining the category corresponding to the target data according to the second processing result, obtaining a regression sequence and/or a regression single vector based on the second processing result, and taking the category, the regression sequence and/or the regression single vector corresponding to the target data as the target processing result.
In a specific implementation, the pulse rearrangement residual module is sequentially provided with a pulse rearrangement layer, a second convolution layer, a normalization layer, a pulse neuron layer and a pulse inverse rearrangement layer. Inputting the down-sampled target data into the plurality of immediately adjacent pulse rearrangement residual modules to perform pulse rearrangement residual processing to obtain a first processing result includes inputting the down-sampled target data into the pulse rearrangement layer to obtain a pulse rearrangement result, and sequentially processing the pulse rearrangement result through the second convolution layer, the normalization layer, the pulse neuron layer and the pulse inverse rearrangement layer to obtain the first processing result.
In a specific implementation, the down-sampled target data is input into the pulse rearrangement layer to obtain a pulse rearrangement result, which is represented by formula (1):

Y[n, z, y, x] = X[n, z·r² + r·(y % r) + (x % r), ⌊y/r⌋, ⌊x/r⌋] (1)

wherein Y represents the rearrangement operation performed by the pulse rearrangement layer, X represents the down-sampled target data, n, z, y, x denote in turn the batch index, channel index, height and width of the down-sampled target data, M represents the total number of channels, r represents the rearrangement coefficient, % represents the remainder operation, and ⌊·⌋ represents rounding down.
In a specific implementation, a first one of the plurality of immediately adjacent pulse rearrangement residual modules is further configured with a downsampling function.
In a specific implementation, before the target data is input into the trained first pulse rearrangement model to obtain the target processing result, the method further includes acquiring the data to be input; if the data to be input is a single number, repeating the single number a preset number of times to obtain a sequence of a preset length; if the data to be input is a sequence of length T formed by a plurality of numbers, wherein T is greater than 1, no repetition is needed; and taking the sequence of the preset length or the sequence of length T as the target data.
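The preprocessing described above (repeating a single number to a preset length, passing a sequence of length T > 1 through unchanged) can be sketched as follows; the function and parameter names are illustrative assumptions:

```python
def to_target_sequence(data, preset_length=4):
    """Turn the data to be input into target data for the model."""
    if isinstance(data, (int, float)):      # a single number:
        return [data] * preset_length       # repeat it to the preset length
    return list(data)                       # a sequence of length T > 1: no repetition
```

A single number 3.0 thus becomes [3.0, 3.0, 3.0, 3.0], while [1, 2, 3] is used as-is.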
Example 2:
The embodiment provides a data processing method based on a pulse rearrangement depth residual neural network, and the steps included in the method are described in detail below.
The first step is to configure a plurality of closely adjacent pulse rearrangement residual modules in a depth residual neural network based on pulse rearrangement.
In a specific implementation, as shown in fig. 2, the pulse rearrangement-based depth residual neural network is provided with a first convolution layer, a pooling layer and a full connection layer, wherein the pooling layer pools to a 1×1 size, and full connection is then performed through the full connection layer. The pulse rearrangement residual module (referred to as a pulse rearrangement element-by-element residual block in fig. 2) is sequentially configured with a pulse rearrangement layer, a second convolution layer, a normalization layer, a pulse neuron layer, and a pulse inverse rearrangement layer, which can be regarded as a stack. There is no substantial difference in structure between the first and second convolution layers; the names only distinguish convolutions at different locations. A plurality of pulse rearrangement residual modules arranged next to one another form one stage, and the whole pulse rearrangement-based depth residual neural network can be configured with i (i > 0) stages. The structure in which the plurality of pulse rearrangement residual modules are disposed in close proximity can be expressed as:
O[t] = g(f_n(f_{n−1}(…f_1(S[t])…)), S[t]) (2)

Wherein S[t] is the input of the whole pulse rearrangement residual module at time step t, and f_i is the i-th {pulse rearrangement–convolution–normalization–pulse neuron layer–pulse inverse rearrangement} stack, where n is the total number of stacks (different from the n in the fourth step). In formula (2), the output obtained after the action of the n stacks is combined with the original input S[t] through a connection function g to obtain the output O[t]. The complete form of the connection function is s_o = g(s_a, s_b), essentially an element-by-element logic function. For distinction, each g is denoted by converting the binary string formed by the values of s_o into a decimal number. For example, in the case shown in Table 1, s_o takes the values 1, 0, 1, 1 for the four input combinations, which converts to the decimal value 11, so the corresponding g is denoted g_11.

Table 1 Correspondence table of an element-by-element logic function (example g_11)

(s_a, s_b) = (0, 0): s_o = 1
(s_a, s_b) = (0, 1): s_o = 0
(s_a, s_b) = (1, 0): s_o = 1
(s_a, s_b) = (1, 1): s_o = 1

According to the truth table formed by (s_a, s_b, s_o), 16 logic functions are possible, and the corresponding g is {g_0, g_1, …, g_15}. It should be noted that in the training of the third step, the gradient of g may be defined by a numerical gradient; for example, for g_11 in Table 1, the partial derivative ∂s_o/∂s_a may be approximated by the finite difference (g_11(s_a + Δ, s_b) − g_11(s_a, s_b))/Δ.
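The family {g_0, …, g_15} of element-by-element logic functions can be sketched in Python. The bit order below (most significant bit corresponding to input (0, 0)) is an assumption chosen to match the g_11 example; the helper name is illustrative:

```python
def make_g(k):
    """Element-wise logic function g_k: the 4 bits of k (MSB first) give
    s_o for the inputs (s_a, s_b) = (0,0), (0,1), (1,0), (1,1)."""
    assert 0 <= k <= 15
    bits = [(k >> (3 - i)) & 1 for i in range(4)]  # MSB-first truth table
    return lambda sa, sb: bits[2 * sa + sb]

g11 = make_g(11)  # truth table 1, 0, 1, 1 -> binary 1011 = decimal 11
```

Here g11 reproduces the example truth table: it outputs 1, 0, 1, 1 for the four input combinations in order.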
And secondly, taking the pulse rearrangement depth residual neural network based on which the pulse rearrangement residual block is configured as a first pulse rearrangement model.
And thirdly, training the first pulse rearrangement model.
During specific training, the parameters are the learning rate η, the network f and its parameters θ, a training set D containing N data, a loss function L, and the total number of training epochs E.
[1] Let e = 1
[2] Let i = 1
[3] Extract (X, Y) = D_i from the data set to obtain an input sequence of length T
[4] Let t = 1
[5] Input X[t] into the network f to obtain the output at time t
[6] Let t = t + 1
[7] If t > T, go to [8], otherwise return to [5]
[8] Calculate the loss over the T time steps
[9] Back-propagate, perform gradient descent, and update the parameters θ
[10] Let i = i + 1
[11] If i > N, go to [12], otherwise return to [3]
[12] Let e = e + 1
[13] If e > E, exit; otherwise return to [2]
For data classification tasks, when the true class is j, Y[t][j] = 1 for any t, and Y[t][k] = 0 for any k ≠ j. The loss function may be the mean square error L = (1/T)·Σ_t ||O[t] − Y[t]||², the cross entropy, or another distance-based loss. For a regression task, when the regression target is a sequence, the loss function may directly be L = (1/T)·Σ_t l(O[t], Y[t]); when the regression target is a single element Y, the loss function needs to be modified according to the type of output. When the average over all time steps is used as the regression result, the loss function may be L = l((1/T)·Σ_t O[t], Y); when the output at the last time step, or the membrane potential of the last layer of pulse neurons at the last time step, is used as the regression result, the loss function may be L = l(O[T], Y).
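The training procedure [1]-[13] can be sketched as a runnable skeleton. A toy scalar model O[t] = w·X[t] with a mean-square-error loss and a hand-computed gradient stands in for the spiking network; all names (train, lr, epochs) are illustrative assumptions, not part of the original method:

```python
def train(dataset, w=0.0, lr=0.1, epochs=20):
    """dataset: non-empty list of (X, Y) pairs of equal-length sequences."""
    for e in range(epochs):                    # [1], [12]-[13]: epoch loop
        for X, Y in dataset:                   # [2]-[3], [10]-[11]: data loop
            T = len(X)
            O = [w * X[t] for t in range(T)]   # [4]-[7]: run the T time steps
            loss = sum((O[t] - Y[t]) ** 2 for t in range(T)) / T          # [8]
            grad = sum(2 * (O[t] - Y[t]) * X[t] for t in range(T)) / T
            w -= lr * grad                     # [9]: gradient descent update
    return w, loss
```

On the pair X = [1, 2], Y = [2, 4] the weight converges toward w = 2.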
And fourthly, inputting the target data into a trained first pulse rearrangement model for processing, and obtaining a target processing result.
In one implementation, inputting the target data into the trained first pulse rearrangement model includes inputting the target data into the first convolution layer for downsampling, inputting the down-sampled target data into the plurality of immediately adjacent pulse rearrangement residual modules to perform pulse rearrangement residual processing to obtain a first processing result, and sequentially inputting the first processing result into the pooling layer and the full connection layer to obtain a second processing result. The pulse rearrangement residual module is sequentially provided with a pulse rearrangement layer, a second convolution layer, a normalization layer, a pulse neuron layer and a pulse inverse rearrangement layer, and the down-sampled target data is input into the plurality of immediately adjacent pulse rearrangement residual modules to perform pulse rearrangement residual processing to obtain the first processing result.
In a specific implementation, the down-sampled target data is input into the pulse rearrangement layer to obtain a pulse rearrangement result, which is represented by formula (1):

Y[n, z, y, x] = X[n, z·r² + r·(y % r) + (x % r), ⌊y/r⌋, ⌊x/r⌋] (1)

wherein Y represents the rearrangement operation performed by the pulse rearrangement layer, X represents the down-sampled target data, n, z, y, x denote in turn the batch index, channel index, height and width of the down-sampled target data, M represents the total number of channels, r represents the rearrangement coefficient, % represents the remainder operation, and ⌊·⌋ represents rounding down.
Pulse rearrangement refers to sequentially splitting data on the channel dimension into the width and height dimensions, while pulse inverse rearrangement is its inverse; both are special dimensional transformations. For example, given an input pulse matrix of size [N, M, H, W] (where N is the batch size, M is the number of channels, H is the height, and W is the width) and a rearrangement coefficient r² (where r is a positive integer), pulse rearrangement reshapes it into the shape [N, M/r², rH, rW]. Pulse inverse rearrangement works in the opposite direction and reshapes the [N, M, H, W] shape into [N, M·r², H/r, W/r]. Referring to fig. 3, the target data is a 4-channel pulse matrix consisting of four 2×2 pulse matrices a, b, c and d, with the shape [4, 2, 2]; after pulse rearrangement, a large 4×4 pulse matrix with the shape [1, 4, 4] is obtained. Splitting the large [1, 4, 4] pulse matrix back into 4 channels yields the [4, 2, 2] 4-channel pulse matrix, namely pulse inverse rearrangement. As a transformable embodiment, if the target data is an image, the target data is subjected to pixel rearrangement by the pulse rearrangement residual module, then to convolution, normalization and pulse neuron layer processing, then to pixel inverse rearrangement, and finally to pooling and full connection. In addition, if the data to be input is a single number, the single number is repeated a preset number of times to obtain a sequence of a preset length; if the data to be input is a sequence of length T formed by a plurality of numbers, wherein T is greater than 1, no repetition is needed; the sequence of the preset length or the sequence of length T is used as the target data.
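The round trip between pulse rearrangement ([N, M, H, W] → [N, M/r², rH, rW]) and pulse inverse rearrangement can be illustrated in pure Python. The concrete index mapping is an assumption consistent with the stated shapes; the function names are illustrative:

```python
def rearrange(X, r):
    """Channel-to-space: [N, M, H, W] -> [N, M/r^2, rH, rW]."""
    N, M, H, W = len(X), len(X[0]), len(X[0][0]), len(X[0][0][0])
    return [[[[X[n][z * r * r + r * (y % r) + x % r][y // r][x // r]
               for x in range(W * r)]
              for y in range(H * r)]
             for z in range(M // (r * r))]
            for n in range(N)]

def inverse_rearrange(Y, r):
    """Space-to-channel: [N, M', H', W'] -> [N, M'*r^2, H'/r, W'/r]."""
    N, Mo, Ho, Wo = len(Y), len(Y[0]), len(Y[0][0]), len(Y[0][0][0])
    return [[[[Y[n][m // (r * r)][h * r + (m % (r * r)) // r][w * r + m % r]
               for w in range(Wo // r)]
              for h in range(Ho // r)]
             for m in range(Mo * r * r)]
            for n in range(N)]
```

Applying rearrange to a [1, 4, 2, 2] input with r = 2 yields a [1, 1, 4, 4] matrix, and inverse_rearrange recovers the original exactly, mirroring the a/b/c/d example above.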
The function of the normalization layer is to perform batch normalization or layer normalization. If batch normalization is used, its parameters can be merged into the convolution layer, which reduces the number of network parameters and speeds up computation. Specifically, let the convolution weight be W_conv with its bias set to 0 (the batch normalization carries its own bias), the batch normalization weight be W_bn, the bias term be B_bn, the statistical mean of the data be X_m, and the variance be X_v; the weight and bias of the merged convolution are respectively:

W′ = W_conv · W_bn / √(X_v + ε),  B′ = B_bn − W_bn · X_m / √(X_v + ε),

where ε is a small constant added for numerical stability.
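Treating the weights as per-channel scalars, the merging of batch normalization into the convolution can be sketched as follows (the function name and the ε default are assumptions; this is the standard folding identity, not code from the original):

```python
import math

def fold_bn(w_conv, w_bn, b_bn, x_mean, x_var, eps=1e-5):
    """Fold batch norm (scale w_bn, bias b_bn, running mean x_mean,
    variance x_var) into a per-channel conv weight w_conv with zero bias,
    so that BN(conv(x)) == w_new * x + b_new."""
    scale = w_bn / math.sqrt(x_var + eps)
    return w_conv * scale, b_bn - x_mean * scale
```

For example, with w_conv = 2, w_bn = 3, b_bn = 1, mean 0.5, variance 4 (and ε = 0), BN(conv(x)) = 3·(2x − 0.5)/2 + 1 = 3x + 0.25, which the folded pair (3, 0.25) reproduces for every x.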
the impulse neuron layer refers to a layer composed of impulse neurons. The behavior of the impulse neuron layer can be described using 3 equations of charge, discharge, reset. Equation (3) represents the first equation as a charge equation:
H[t]=f(V[t-1],X[t]) (3)
Wherein X[t] is the input at time t. To avoid confusion, H[t] denotes the voltage after charging and V[t] denotes the voltage after discharging. f represents the charge equation; different neurons have different charge equations. The charge equation is obtained by discretizing a continuous-time differential equation. For example, the subthreshold dynamics of an LIF neuron, described by a continuous-time differential equation, is represented by formula (4):

τ · dV(t)/dt = −(V(t) − V_rest) + X(t) (4)

Discretizing gives the subthreshold discrete-time difference equation, namely the charge equation, represented by formula (5):

H[t] = V[t−1] + (1/τ) · (−(V[t−1] − V_rest) + X[t]) (5)

Where V_rest is the resting potential and τ is the membrane time constant. The second equation is the discharge equation, represented by formula (6):
S[t] = Θ(H[t] − V_th) (6)
S[t] is the pulse released by the neuron, and Θ(x) is the Heaviside step function, which outputs 1 if and only if x ≥ 0 and outputs 0 otherwise. The discharge equation indicates that pulse 1 is released when the voltage after charging exceeds the threshold V_th; otherwise 0 is output. The third equation is the reset equation, represented by formula (7):

V[t] = H[t] · (1 − S[t]) + V_reset · S[t] (Hard reset)
V[t] = H[t] − V_th · S[t] (Soft reset) (7)

Wherein V_reset represents the reset potential. Hard reset means that the voltage is reset directly to V_reset after the neuron releases a pulse. Soft reset means that the voltage is decreased by V_th after the neuron releases a pulse.
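One time step of an LIF neuron combining the charge, discharge, and reset equations can be sketched as follows; the function name and default parameter values are illustrative assumptions:

```python
def lif_step(v_prev, x_t, tau=2.0, v_rest=0.0, v_th=1.0,
             v_reset=0.0, hard_reset=True):
    """One LIF time step: charge (5), discharge (6), reset (7)."""
    h_t = v_prev + (-(v_prev - v_rest) + x_t) / tau   # charge, eq. (5)
    s_t = 1 if h_t - v_th >= 0 else 0                 # discharge, eq. (6)
    if hard_reset:
        v_t = v_reset if s_t else h_t                 # hard reset, eq. (7)
    else:
        v_t = h_t - v_th * s_t                        # soft reset, eq. (7)
    return s_t, v_t
```

With τ = 2, V_rest = 0 and V_th = 1: an input of 3 charges the neuron to 1.5, which exceeds the threshold, so a pulse is emitted and the hard reset returns the voltage to 0; an input of 0.5 charges it only to 0.25 and no pulse is emitted.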
Note that the derivative of Θ(x) in the discharge equation is infinite at x = 0 and 0 at x ≠ 0. Using such a derivative directly for gradient descent would render the network untrainable. To solve this problem, gradient substitution is used: Θ(x) is still used in the forward propagation, ensuring that the pulse neuron outputs a pulse, while the derivative σ′(x) of a substitution function σ(x) is used in the backward propagation. σ(x) is typically chosen as a continuous function with range (0, 1), such as the common sigmoid function.
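The gradient-substitution idea above can be sketched as a pair of functions, the step for the forward pass and a sigmoid derivative for the backward pass; the slope parameter alpha is an assumed hyperparameter, not from the original text:

```python
import math

def heaviside(x):
    """Forward pass: the Heaviside step, emitting a spike when x >= 0."""
    return 1.0 if x >= 0 else 0.0

def surrogate_grad(x, alpha=4.0):
    """Backward pass: derivative of the sigmoid sigma(x) = 1/(1+exp(-alpha*x)),
    used in place of the (degenerate) derivative of heaviside."""
    s = 1.0 / (1.0 + math.exp(-alpha * x))
    return alpha * s * (1.0 - s)
```

The surrogate is finite and positive everywhere (peaking at x = 0), which is what makes backpropagation through the spiking nonlinearity possible.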
In a specific implementation manner, referring to fig. 2 again, the obtaining the target processing result includes obtaining a category corresponding to the target data according to the second processing result, obtaining a regression sequence and/or a regression single vector based on the second processing result, and taking the category corresponding to the target data, the regression sequence and/or the regression single vector as the target processing result. Wherein, as shown in fig. 2, the output of the last moment of the model or the membrane potential of the last moment of the last layer of impulse neurons can be used as a single vector of regression. The method is applicable to various application scenes.
Compared with a common convolution without pulse rearrangement and inverse rearrangement, the present application reduces the number of parameters from M_in·M_out·K_h·K_w to M_in·M_out·K_h·K_w / r⁴, which greatly reduces the parameter count, lowers the risk of overfitting, and also reduces storage and computation costs.
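A back-of-envelope check of this saving: after rearrangement with coefficient r, the convolution sees M_in/r² input and M_out/r² output channels, so its weight count drops by a factor of r⁴ (an assumption consistent with the shapes given above; the numbers below are illustrative):

```python
def conv_params(m_in, m_out, kh, kw):
    """Weight count of a conv layer with m_in->m_out channels, kernel kh x kw."""
    return m_in * m_out * kh * kw

m_in, m_out, kh, kw, r = 64, 64, 3, 3, 2
plain = conv_params(m_in, m_out, kh, kw)                     # ordinary convolution
shuffled = conv_params(m_in // r**2, m_out // r**2, kh, kw)  # after rearrangement
```

With 64 channels and a 3×3 kernel, 36864 weights shrink to 2304, a 16-fold (r⁴) reduction.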
Example 3:
the embodiment provides a data processing device based on a pulse rearrangement depth residual neural network, as shown in fig. 4, the device includes:
A configuration module 401, configured to configure a plurality of immediately adjacent pulse rearrangement residual modules in a pulse rearrangement depth residual neural network;
A rearrangement module 402, configured to use the pulse rearrangement depth residual neural network after configuring the pulse rearrangement residual block as a first pulse rearrangement model;
A training module 403, configured to train the first pulse rearrangement model;
and the processing module 404 is configured to input the target data into the trained first pulse rearrangement model for processing, so as to obtain a target processing result.
It is also emphasized that the system provided in embodiments of the present application may acquire and process relevant data based on artificial intelligence techniques. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results. Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, robotics, biometric recognition, speech processing, natural language processing, and machine learning/deep learning.
Reference is now made to fig. 5, which is a schematic illustration of a computer device, according to some embodiments of the application. As shown in fig. 5, the computer device 2 includes a processor 200, a memory 201, a bus 202 and a communication interface 203, where the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202, and a computer program capable of running on the processor 200 is stored in the memory 201, and when the processor 200 runs the computer program, the data processing method based on the pulse rearrangement depth residual error neural network provided by any one of the foregoing embodiments of the present application is executed.
The memory 201 may include a high-speed random access memory (RAM: Random Access Memory), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the system network element and at least one other network element is implemented through at least one communication interface 203 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, etc. may be used.
Bus 202 may be an ISA bus, a PCI bus, an EISA bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. The memory 201 is configured to store a program, and the processor 200 executes the program after receiving an execution instruction, and the data processing method based on the pulse rearrangement depth residual error neural network disclosed in any embodiment of the present application may be applied to the processor 200 or implemented by the processor 200.
The processor 200 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 200 or by instructions in the form of software. The processor 200 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc., or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and, in combination with its hardware, performs the steps of the above method.
The embodiment of the present application further provides a computer readable storage medium corresponding to the data processing method based on a pulse rearrangement depth residual error neural network provided in the foregoing embodiment, referring to fig. 6, the computer readable storage medium shown in fig. 6 is an optical disc 30, on which a computer program (i.e. a program product) is stored, where the computer program, when executed by a processor, performs the data processing method based on a pulse rearrangement depth residual error neural network provided in any of the foregoing embodiments.
In addition, examples of the computer readable storage medium may include, but are not limited to, a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, or other types of optical and magnetic storage media, which will not be described in detail herein.
The computer readable storage medium provided by the above embodiment of the present application is based on the same inventive concept as the data processing method based on a pulse rearrangement depth residual neural network provided by the embodiments of the present application, and has the same beneficial effects as the method adopted, run, or implemented by the application program it stores.
An embodiment of the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the data processing method based on the pulse rearrangement depth residual neural network provided by any of the foregoing embodiments, the method comprising: configuring a plurality of immediately adjacent pulse rearrangement residual modules in the pulse rearrangement depth residual neural network; taking the pulse rearrangement depth residual neural network in which the pulse rearrangement residual blocks are configured as a first pulse rearrangement model; training the first pulse rearrangement model; and inputting target data into the trained first pulse rearrangement model for processing to obtain a target processing result.
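The processing pipeline recited above (downsampling front end, a stack of immediately adjacent pulse rearrangement residual modules, pooling, and a fully connected head producing a class) can be sketched as follows. This is a minimal illustrative NumPy sketch only, not the patented implementation: the pulse rearrangement residual module is approximated here by a residual block with a hard-threshold (Heaviside) "firing" nonlinearity, the downsampling convolution is replaced by a dense projection for brevity, and all class names, layer sizes, and parameters (`SpikeResidualBlock`, `PulseRearrangeNet`, `n_blocks`, etc.) are hypothetical.

```python
import numpy as np

def spike(x, threshold=0.0):
    # Heaviside "firing" nonlinearity: emit a pulse (1) where input exceeds threshold.
    return (x > threshold).astype(np.float64)

class SpikeResidualBlock:
    """Stand-in for one pulse rearrangement residual module (hypothetical):
    two weight layers with spiking activations and an identity shortcut."""
    def __init__(self, dim, rng):
        self.w1 = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.w2 = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def forward(self, x):
        h = spike(x @ self.w1)
        h = h @ self.w2
        return spike(h + x)  # identity shortcut, then spiking output

class PulseRearrangeNet:
    """Sketch of the 'first pulse rearrangement model': a downsampling front
    end, several immediately adjacent residual blocks, a pooling layer, and
    a fully connected classification head."""
    def __init__(self, in_dim, hidden, n_classes, n_blocks=3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.standard_normal((in_dim, hidden)) / np.sqrt(in_dim)
        self.blocks = [SpikeResidualBlock(hidden, rng) for _ in range(n_blocks)]
        self.w_fc = rng.standard_normal((hidden, n_classes)) / np.sqrt(hidden)

    def forward(self, x):
        # Dense projection standing in for the first (downsampling) convolution layer.
        h = spike(x @ self.w_in)
        for blk in self.blocks:       # immediately adjacent residual modules
            h = blk.forward(h)        # -> "first processing result"
        h = h.mean(axis=0, keepdims=True)  # pooling layer
        return h @ self.w_fc               # fully connected -> "second processing result"

    def classify(self, x):
        # Target processing result: the class index with the highest score.
        return int(np.argmax(self.forward(x)))

net = PulseRearrangeNet(in_dim=16, hidden=8, n_classes=4)
target = np.random.default_rng(1).standard_normal((5, 16))
print(net.classify(target))  # a class index in [0, 4)
```

Training (the third recited step) is omitted; in practice a spiking model of this shape would be trained with surrogate-gradient backpropagation, which the sketch does not attempt to reproduce.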
It should be noted that the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general-purpose devices may also be used with the teachings herein, and the structure required to construct such devices is apparent from the description above. In addition, the present application is not directed to any particular programming language. It will be appreciated that the teachings of the present application described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present application. In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from those of the embodiments. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Any combination of all features disclosed in this specification, and of all processes or units of any method or apparatus so disclosed, may be employed, except for combinations in which at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
Various component embodiments of the application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components of the apparatus according to an embodiment of the present application may be implemented in practice using a microprocessor or a digital signal processor (DSP). The present application may also be implemented as an apparatus or device program for performing part or all of the methods described herein. A program implementing the present application may be stored on a computer readable medium, or may be in the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.