Disclosure of Invention
In view of the above, the present invention aims to provide a method, an apparatus, a device and a medium for accessing a hardware chip, which can reduce the difficulty of developing hardware chip access and improve the development efficiency and flexibility of hardware chip access. The specific scheme is as follows:
In a first aspect, the present application discloses a hardware chip access method, including:
Determining a kernel header file based on a platform operator library issued by a deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library;
Determining a configuration file for configuring the implementation modes of each operator in the kernel header file, determining a target template for carrying out standardization processing on the access modes of hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template;
and accessing the front-end kernel function to a back-end operator interface, and interfacing with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface to realize the access of the hardware chips.
Optionally, the parameter information of each operator in the kernel header file includes a parameter type of each operator, a parameter name of each operator, and a first parameter order of each operator.
Optionally, the configuration file includes operator names of the operators in the kernel header file, interface names of the back-end operator interfaces, and target parameters for adjusting the parameter transfer order, where back-end operator interfaces with different interface names correspond to hardware operator libraries of different hardware chip manufacturers, and the parameter transfer order is the order in which the front-end kernel function transfers parameters to the back-end operator interfaces.
Optionally, the generating the front-end kernel function through the kernel header file, the configuration file and the target template includes:
and reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function.
Optionally, the front-end kernel function includes an operator function declaration statement, and the reading, by the target template, the kernel header file and the configuration file so as to automatically generate the front-end kernel function includes:
and reading the parameter types of the operators, the parameter names of the operators and the first parameter order of the operators in the kernel header file through a first statement template in the target template so as to automatically generate the operator function declaration statement in the front-end kernel function.
Optionally, the target parameters include a first target parameter for adjusting an input parameter transfer order and a second target parameter for adjusting an output parameter transfer order, where the input parameter transfer order is the order in which the front-end kernel function transfers input parameters to the back-end operator interface, and the output parameter transfer order is the order in which the front-end kernel function transfers output parameters to the back-end operator interface.
Optionally, the front-end kernel function includes a memory allocation statement, and the reading, by the target template, the kernel header file and the configuration file so as to automatically generate the front-end kernel function includes:
and reading the second target parameter in the configuration file through a second statement template in the target template so as to automatically generate the memory allocation statement in the front-end kernel function according to the second target parameter.
Optionally, the front-end kernel function includes an interface call statement, and the reading, by the target template, the kernel header file and the configuration file so as to automatically generate the front-end kernel function includes:
and reading the interface name of the back-end operator interface in the configuration file through a third statement template in the target template so as to automatically generate the interface call statement in the front-end kernel function.
Optionally, the hardware chip access method further includes:
judging, for any operator, whether the operator needs target operation processing;
if the operator needs the target operation processing, adding a corresponding operation processing identifier for the operator in the configuration file.
Optionally, the determining whether the operator needs the target operation processing includes:
judging whether the operator needs the target operation processing by searching a pre-configured target database, wherein the target database includes the operators that need the target operation processing;
correspondingly, the adding, if the operator needs the target operation processing, a corresponding operation processing identifier for the operator in the configuration file includes:
if the operator is found in the target database, determining that the operator needs the target operation processing, and adding a corresponding operation processing identifier for the operator in the configuration file.
Optionally, the hardware chip access method further includes:
and performing interface customization based on the hardware operator libraries of different hardware chip manufacturers to obtain the back-end operator interface.
Optionally, the hardware chip access method further includes:
and generating the operator names of the operators in the configuration file in a customized manner.
In a second aspect, the present application discloses a hardware chip access device, including:
a header file determining module, configured to determine a kernel header file based on a platform operator library issued by a deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library;
a configuration file and template determining module, configured to determine a configuration file for configuring the implementation modes of the operators in the kernel header file, and to determine a target template for carrying out standardization processing on the access modes of the hardware operator libraries of different hardware chip manufacturers;
a front-end kernel function generating module, configured to generate a front-end kernel function through the kernel header file, the configuration file and the target template;
and a hardware chip access module, configured to access the front-end kernel function to a back-end operator interface, and to interface with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface so as to realize the access of the hardware chip.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
And a processor for executing the computer program to implement the hardware chip access method disclosed above.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the hardware chip access method disclosed above.
The hardware chip access method includes: determining a kernel header file based on a platform operator library issued by a deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library; determining a configuration file for configuring the implementation modes of each operator in the kernel header file; determining a target template for carrying out standardization processing on the access modes of hardware operator libraries of different hardware chip manufacturers; generating a front-end kernel function through the kernel header file, the configuration file and the target template; and accessing the front-end kernel function to a back-end operator interface, and interfacing with the hardware operator libraries of the different hardware chip manufacturers through the back-end operator interface so as to achieve hardware chip access. In summary, the application decouples hardware chip access into a front end and a back end: the front end is connected with the platform operator library of the deep learning platform through the front-end kernel function, and the back end is connected with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface. Because the front-end kernel function only needs to interface with the deep learning platform, different hardware chip manufacturers can reuse one set of front-end kernel functions, so that a developer only needs to configure the related files to automatically generate the related kernel functions.
Furthermore, the front-end kernel function is connected to the back-end operator interface, and the back-end operator interface interfaces with the hardware operator libraries of different hardware chip manufacturers. Since there is only one way to connect the front-end kernel function to the back-end operator interface, different kernel-function connection modes are not needed to adapt to different hardware chips, which reduces development difficulty and improves access flexibility.
Detailed Description
The following is a clear and complete description of the embodiments of the present invention with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The traditional hardware chip access mode has the following defects. First, kernel functions need to be developed one by one by engineers based on the hardware operator libraries of different hardware chip manufacturers, and each kernel function must be written manually. Because different kernel functions for the same type of hardware are largely similar, developers spend a great deal of time on repeated work, so the development period is long, working efficiency is low, and errors occur easily. Second, different kernel-function access modes must be adopted to adapt to different hardware, so the development difficulty is high.
Therefore, the embodiment of the application provides a hardware chip access scheme, which can reduce the development difficulty of hardware chip access and improve the development efficiency and the flexibility of hardware chip access.
The embodiment of the application discloses a hardware chip access method, which is shown in fig. 1 and comprises the following steps:
and step S11, determining a kernel header file based on a platform operator library issued by the deep learning platform, wherein the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library.
In this embodiment, first, a kernel header file is determined based on a platform operator library issued by a deep learning platform, where the kernel header file includes parameter information of each operator. For example, the parameter information of each operator includes a parameter type of each operator, a parameter name of each operator, and a first parameter order of each operator. It should be noted that the deep learning platform includes, but is not limited to, the PaddlePaddle deep learning framework, and the like.
It can be understood that each operator represents an operation mode. The parameter type is the type of a parameter participating in the operation, including an integer type, a floating-point type, a character-string type, and the like; the parameter name is the name of a parameter participating in the operation, for example, parameter A, parameter B, and the like; and the first parameter order is the sequence of the parameters participating in the operation. For example, add(A, B) indicates that parameter A performs an addition operation with parameter B, and the first parameter order is that parameter A appears first and parameter B appears second.
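To make the notions of parameter type, parameter name, and first parameter order concrete, the following minimal Python sketch models an operator's parameter information; all names here (`Param`, `OperatorInfo`, the `add` fields) are illustrative assumptions, not the actual kernel header file format.

```python
from dataclasses import dataclass

@dataclass
class Param:
    name: str   # parameter name, e.g. "A"
    dtype: str  # parameter type, e.g. "float", "int", "string"

@dataclass
class OperatorInfo:
    op_name: str
    params: list  # list order encodes the first parameter order

# add(A, B): parameter A appears first, parameter B second
add_op = OperatorInfo("add", [Param("A", "float"), Param("B", "float")])
first_order = [p.name for p in add_op.params]
print(first_order)  # -> ['A', 'B']
```

The list order alone is enough to reproduce the "first parameter order" described above.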
Step S12, determining a configuration file for configuring the implementation modes of the operators in the kernel header file, determining a target template for normalizing the access modes of hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template.
It should be noted that the kernel header file alone cannot complete the automatic generation of the front-end kernel function, so this embodiment supplements it with the configuration file, which supplies the remaining parameters required for automatic generation.
Thus, in this embodiment, a configuration file for configuring the implementation of each operator in the kernel header file is determined. Specifically, the configuration file includes the operator names of the operators in the kernel header file, the interface names of the back-end operator interfaces, and target parameters for adjusting the parameter transfer order, wherein back-end operator interfaces with different interface names correspond to the hardware operator libraries of different hardware chip manufacturers, and the parameter transfer order is the order in which the front-end kernel function transfers parameters to the back-end operator interface.
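As a hedged illustration of what one configuration file entry might hold, the sketch below models the three pieces named in the text (operator name, interface name, target parameters) as a Python dictionary; every key and value here is an assumption, not the actual file format.

```python
# Hypothetical configuration entry for one operator.
config = {
    "add": {
        # operator name customized for the front-end kernel function
        "op_name": "custom_add",
        # interface name: selects a back-end operator interface, and thereby
        # the hardware operator library of one hardware chip manufacturer
        "backend_interface": "vendor_x_add",
        # target parameters adjusting the parameter transfer order
        "input_order": [0, 1],
        "output_order": [0],
    }
}

# Hypothetical interface-name -> vendor-library table used for matching.
vendor_libraries = {"vendor_x_add": "libvendor_x_ops"}
iface = config["add"]["backend_interface"]
print(vendor_libraries[iface])  # -> libvendor_x_ops
```

Looking up the interface name is what lets the scheme match the corresponding hardware operator library quickly and accurately.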
The operator names of the operators in the configuration file can be generated in a customized manner; that is, they can be defined by the programmer, which facilitates development and code reading and writing. Furthermore, the deep learning platform can automatically connect to the corresponding operator in the platform operator library according to the parameter information in the kernel header file, without the programmer interfacing manually.
The back-end operator interfaces with different interface names correspond to the hardware operator libraries of different hardware chip manufacturers, so that the embodiment can match the corresponding hardware operator libraries according to the interface names in the configuration file, and the access of the hardware chips can be realized rapidly and accurately.
The parameter orders of the front-end kernel function and the back-end operator interface may not correspond exactly, so this embodiment defines, in the configuration file, target parameters for adjusting the parameter transfer order. The order in which the front-end kernel function transfers parameters to the back-end operator interface can be adjusted through the target parameters, which solves the parameter-transfer errors that would be caused by the mismatched parameter orders.
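The adjustment performed by a target parameter can be sketched as a simple index-based reordering of the arguments before they are passed to the back-end operator interface; the function name and order values below are hypothetical.

```python
def reorder(args, order):
    """Reorder front-end arguments into the order the back-end operator
    interface expects; `order` plays the role of a target parameter."""
    return [args[i] for i in order]

# The front end holds (A, B, alpha) but the back end expects (alpha, A, B).
front_end_args = ["A", "B", "alpha"]
print(reorder(front_end_args, [2, 0, 1]))  # -> ['alpha', 'A', 'B']
```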
Furthermore, in order to normalize the access modes of the hardware operator libraries of different hardware chip manufacturers, this embodiment generates a target template in which a plurality of statement templates are specified, different statement templates being used to generate different contents in the front-end kernel function. In a specific embodiment, the target template may be a Python file.
Finally, this embodiment generates the front-end kernel function through the kernel header file, the configuration file and the target template. As a specific implementation of generating the front-end kernel function, this embodiment reads the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function. The content contained in the front-end kernel function is explained below.
In some embodiments, the parameter type of each operator, the parameter name of each operator and the first parameter order of each operator in the kernel header file are read through a first statement template in the target template, so as to automatically generate the operator function declaration statement in the front-end kernel function.
That is, in this embodiment, the parameter information of each operator in the kernel header file is read through the first statement template in the target template, and the operator function declaration statement in the front-end kernel function is automatically generated.
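Since the target template may be a Python file, the first statement template can be pictured as a small string template filled from the kernel header file's parameter information; every identifier and the emitted C-style declaration below are illustrative assumptions, not the actual template syntax.

```python
from string import Template

# Hypothetical first statement template: fills an operator function
# declaration from the parameter information in the kernel header file.
decl_template = Template("void ${op}(${params});")

# Parameter information as it might be parsed from the kernel header file:
# (type, name) pairs in first-parameter order.
header_info = {"op": "add",
               "params": [("const float*", "A"), ("const float*", "B")]}

param_list = ", ".join(f"{t} {n}" for t, n in header_info["params"])
declaration = decl_template.substitute(op=header_info["op"], params=param_list)
print(declaration)  # -> void add(const float* A, const float* B);
```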
In some embodiments, the target parameters include a first target parameter for adjusting the input parameter transfer order and a second target parameter for adjusting the output parameter transfer order, wherein the input parameter transfer order is the order in which the front-end kernel function transfers input parameters to the back-end operator interface, and the output parameter transfer order is the order in which the front-end kernel function transfers output parameters to the back-end operator interface. Further, the front-end kernel function includes memory allocation statements: the second target parameter in the configuration file is read through a second statement template in the target template so as to automatically generate the memory allocation statements in the front-end kernel function.
That is, this embodiment can adjust not only the order in which input parameters are transferred but also the order in which output parameters are transferred. Since an individual operator may have a plurality of output parameters, a plurality of pieces of memory need to be allocated; therefore, this embodiment determines the number of output parameters according to the second target parameter and determines the number of memory allocations based on that number. It can be understood that the code statements describing memory allocation are the memory allocation statements in the front-end kernel function.
In some embodiments, the front-end kernel function includes an interface call statement: the interface name of the back-end operator interface in the configuration file is read through a third statement template in the target template so as to automatically generate the interface call statement in the front-end kernel function.
That is, since the back-end operator interfaces with different interface names correspond to the hardware operator libraries of different hardware chip manufacturers, the present embodiment reads the interface names of the back-end operator interfaces in the configuration file through the third statement template in the target template so as to match the corresponding hardware operator libraries.
In addition, the front-end kernel function further includes a copyright statement, header file inclusions, a namespace definition, front-end and back-end stream unification, input and output parameter transfer, and a registration function. The copyright statement, header file inclusions, namespace definition, and front-end and back-end stream unification are the same for every operator, while the input and output parameter transfer and the registration function can be obtained by reading the configuration file. The input and output parameter transfer passes the packed input parameters and the packed output parameters to the back-end operator interface, and the registration function registers the operator name customized by the front-end kernel function.
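The sections listed above can be pictured as one assembly step: the sketch below strings together a copyright notice, a header inclusion, a namespace, per-output memory allocation statements, an interface call statement, and a registration statement into one generated kernel. The emitted C++-like text and every identifier are illustrative assumptions, not the actual generated code.

```python
def generate_kernel(op, iface, n_outputs):
    """Assemble a hypothetical front-end kernel function from its sections."""
    lines = [
        "// Copyright notice (identical for every operator)",
        '#include "kernel.h"',
        "namespace frontend {",
        f"void {op}_kernel(Context* ctx) {{",
    ]
    # one memory allocation statement per output parameter (from the
    # second target parameter in the configuration file)
    for i in range(n_outputs):
        lines.append(f"  auto* out{i} = ctx->Alloc(/*output*/ {i});")
    # interface call statement (interface name from the configuration file)
    lines.append(f"  {iface}(ctx);")
    lines.append("}")
    # registration function announcing the customized operator name
    lines.append(f'REGISTER_KERNEL("{op}", {op}_kernel);')
    lines.append("}  // namespace frontend")
    return "\n".join(lines)

print(generate_kernel("add", "vendor_x_add", 1))
```

With two output parameters, two allocation statements would be emitted, mirroring the per-output memory allocation described above.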
Step S13, accessing the front-end kernel function to a back-end operator interface, and interfacing with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface to realize the access of the hardware chips.
In this embodiment, the front-end kernel function is accessed to a back-end operator interface, and the hardware operator libraries of different hardware chip manufacturers are interfaced through the back-end operator interface so as to implement the access of the hardware chip. It is noted that the back-end operator interface can be obtained by performing interface customization on the hardware operator libraries of different hardware chip manufacturers.
It should be noted that the hardware chip access facing the deep learning platform in this embodiment may be applied to a face recognition scenario. In that case, a kernel header file is determined based on the platform operator library issued by the face-recognition-oriented deep learning platform, with the parameter information of each operator in the kernel header file consistent with the parameter information in that platform operator library; a configuration file for configuring the implementation modes of each operator in the kernel header file and a target template for carrying out standardization processing on the access modes of the hardware operator libraries of different hardware chip manufacturers are determined; a front-end kernel function is generated through the kernel header file, the configuration file and the target template; and the front-end kernel function is accessed to a back-end operator interface, which interfaces with the hardware operator libraries of different hardware chip manufacturers, so as to realize the access of the hardware chips for face recognition.
In addition, the technical scheme in this embodiment can be applied to application scenarios such as voice recognition and unmanned driving, in addition to face recognition.
The embodiment of the application discloses a specific hardware chip access method, and compared with the previous embodiment, the technical scheme of the embodiment is further described and optimized. Referring to fig. 2, the method specifically includes:
Step S21, judging, for any operator, whether the operator needs target operation processing.
In this embodiment, the operator parameters in the platform operator library of the deep learning platform may not match the back-end operator interface, so some operators need target operation processing to make the operator parameters in the platform operator library match the back-end operator interface.
Therefore, in this embodiment, a target database is generated in advance from the operators that need target operation processing, so that whether any operator needs target operation processing can be determined by searching the pre-configured target database. The target operation processing includes absolute value operations, power operations, exponential operations, and the like.
Step S22, if the operator needs the target operation processing, adding a corresponding operation processing identifier for the operator in the configuration file.
It can be understood that, if the operator is found in the target database, it is determined that the operator needs target operation processing, and a corresponding operation processing identifier is added for the operator in the configuration file. A corresponding operation processing scheme can then be generated based on the operation processing identifier, which solves the mismatch between the operator parameters in the platform operator library of the deep learning platform and the back-end operator interface.
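Steps S21 and S22 can be sketched as a lookup against a pre-configured target database followed by annotating the operator's configuration entry; the database contents and key names below are hypothetical.

```python
# Hypothetical pre-configured target database: operators that need target
# operation processing, mapped to the kind of processing required.
target_database = {"abs": "absolute_value", "pow": "power", "exp": "exponential"}

def annotate(config, op_name):
    """Step S21: search the target database; Step S22: if found, add an
    operation processing identifier to the operator's configuration entry."""
    if op_name in target_database:
        config.setdefault(op_name, {})["op_process_id"] = target_database[op_name]
    return config

config = {"abs": {"backend_interface": "vendor_x_abs"}}
annotate(config, "abs")   # found in the database: identifier added
annotate(config, "add")   # not in the database: config left unchanged
print(config["abs"]["op_process_id"])  # -> absolute_value
```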
Referring to fig. 3, taking the PaddlePaddle deep learning framework as an example: first, a kernel header file, a configuration file and a target template are generated, wherein the kernel header file interfaces with PaddlePaddle; a front-end kernel function is generated according to the kernel header file, the configuration file and the target template; the front-end kernel function is then accessed to a back-end operator interface, which interfaces with the hardware operator libraries of different hardware chip manufacturers, including but not limited to the hardware operator libraries of manufacturers such as the jungle, Intel, the kansui, and the like. In a specific embodiment, the application can realize front-end and back-end decoupling of hardware chip access based on a heterogeneous AI (artificial intelligence) computing power execution framework (TensorGlue), thereby reducing the difficulty of connecting the PaddlePaddle deep learning framework to hardware chips. TensorGlue connects upward to PaddlePaddle and calls downward into the hardware operator libraries of hardware chip manufacturers, dividing the work of connecting PaddlePaddle to different hardware chips into a front-end module and a back-end module: the back end connects to the hardware operator libraries of different hardware chip manufacturers through a customized back-end operator interface, and the front end only needs to connect the platform operator library of the deep learning framework to the back-end operator interface to complete the adaptation of different hardware chips.
The application reduces the difficulty of interfacing with complex hardware operator libraries through front-end and back-end decoupling and increases access flexibility. At the same time, since the front end only needs to interface with the PaddlePaddle deep learning framework, all hardware chip manufacturers can reuse one set of front-end code, and repeated code can be automatically generated in a templatized manner. Therefore, the application greatly improves development efficiency. Furthermore, the writing of the front-end kernel function is standardized through the target template, which improves code readability and avoids hidden problems caused by developer negligence.
It should be noted that the method can be applied to the PaddlePaddle deep learning framework and has the potential to be applied to other deep learning frameworks; for example, it can be adopted when PyTorch is connected to each hardware chip, so as to reduce the difficulty of hardware chip access.
Correspondingly, the embodiment of the application also discloses a hardware chip access device, which is shown in fig. 4 and comprises:
a header file determining module 11, configured to determine a kernel header file based on a platform operator library issued by the deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library;
A configuration file and template determining module 12, configured to determine a configuration file for configuring implementation manners of the operators in the kernel header file, and determine target templates for performing standardization processing on access manners of hardware operator libraries of different hardware chip manufacturers;
A front-end kernel function generating module 13, configured to generate a front-end kernel function through the kernel header file, the configuration file, and the target template;
The hardware chip access module 14 is configured to access the front-end kernel function to a back-end operator interface, and interface the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface, so as to implement access of a hardware chip.
The more specific working process of each module may refer to the corresponding content disclosed in the foregoing embodiment, and will not be described herein.
The hardware chip access method includes the steps of determining a kernel header file based on a platform operator library issued by a deep learning platform, enabling parameter information of each operator in the kernel header file to be consistent with parameter information in the platform operator library, determining a configuration file for configuring implementation modes of each operator in the kernel header file, determining a target template for carrying out standardization processing on access modes of hardware operator libraries of different hardware chip manufacturers, generating a front-end kernel function through the kernel header file, the configuration file and the target template, accessing the front-end kernel function to a rear-end operator interface, and interfacing the hardware operator libraries of the different hardware chip manufacturers through the rear-end operator interface so as to achieve hardware chip access. In summary, the application decouples the hardware chip access into a front end and a rear end, the front end is connected with the platform operator library of the deep learning platform through the front end kernel function, and the rear end is connected with the hardware operator libraries of different hardware chip manufacturers through the rear end operator interface. Because the front-end kernel function only needs to be butted with the deep learning platform, different hardware chip manufacturers can reuse one set of front-end kernel function, so that a developer in the application can automatically generate the related kernel function only by configuring related files. 
Furthermore, the front-end kernel function is connected to the back-end operator interface, and the back-end operator interface is used for interfacing with the hardware operator libraries of different hardware chip manufacturers. Since there is only one mode for interfacing the front-end kernel function with the back-end operator interface, different kernel function connection modes are not needed to adapt to different hardware chips, so that the development difficulty is reduced and the access flexibility is improved.
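The front-end/back-end decoupling described above can be sketched as follows. This is a minimal, purely illustrative sketch, not the implementation of the disclosure; all names (`register_backend`, `frontend_relu`, the vendor identifiers) are hypothetical.

```python
# Registry of back-end operator interfaces, one entry per hardware vendor.
_backends = {}

def register_backend(vendor, ops):
    """Register a vendor's hardware operator library as a back end."""
    _backends[vendor] = ops

def frontend_relu(x, vendor):
    """Front-end kernel function: a single reusable entry point that
    forwards to whichever vendor back-end operator interface is selected."""
    backend_op = _backends[vendor]["relu"]  # look up the back-end interface
    return backend_op(x)

# Two hypothetical vendor operator libraries plugged into the same front end.
register_backend("vendor_a", {"relu": lambda x: [max(v, 0) for v in x]})
register_backend("vendor_b", {"relu": lambda x: [v if v > 0 else 0 for v in x]})

print(frontend_relu([-1, 2], "vendor_a"))  # [0, 2]
print(frontend_relu([-1, 2], "vendor_b"))  # [0, 2]
```

The point of the sketch is that the front-end function never changes when a new vendor is added; only a new back-end entry is registered.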
In some specific embodiments, the parameter information of each operator in the kernel header file includes a parameter type of each operator, a parameter name of each operator, and a first parameter order of each operator.
In some embodiments, the configuration file includes operator names of the operators in the kernel header file, interface names of the back-end operator interfaces, and target parameters for adjusting parameter transfer orders, where the back-end operator interfaces with different interface names correspond to hardware operator libraries of different hardware chip manufacturers, and the parameter transfer orders are orders in which the front-end kernel function transfers parameters to the back-end operator interfaces.
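A configuration entry of the kind described above might look as follows. The field names and values are hypothetical and shown here as a Python literal only for illustration; the disclosure does not prescribe a concrete file format.

```python
# Hypothetical configuration entry for one operator named "add".
config = {
    "add": {                              # operator name from the kernel header file
        "interface_name": "vendorA_add",  # name of the back-end operator interface
        # Target parameters adjusting the parameter transfer order from the
        # front-end kernel function to the back-end operator interface:
        "input_order": [1, 0],            # order for input parameters
        "output_order": [0],              # order for output parameters
    }
}

print(config["add"]["interface_name"])  # vendorA_add
```

A different hardware chip manufacturer would be targeted simply by changing `interface_name` to the corresponding back-end interface name.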
In some specific embodiments, the front-end kernel function generating module 13 may specifically include:
And the front-end kernel function generating unit is used for reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function.
In some specific embodiments, the front-end kernel function includes an operator function declaration statement, and the front-end kernel function generating unit is specifically configured to:
And reading the parameter types of the operators, the parameter names of the operators and the first parameter order of the operators in the kernel header file through a first statement template in the target template, so as to automatically generate the operator function declaration statement in the front-end kernel function.
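The first statement template can be sketched as a simple text template filled in from the header information. This is an illustrative sketch under assumed names (`DECL_TEMPLATE`, `gen_declaration`, the `Tensor` type), not the actual template of the disclosure.

```python
from string import Template

# Hypothetical first statement template for an operator function declaration.
DECL_TEMPLATE = Template("void $op_name($params);")

def gen_declaration(op_name, params):
    """params: list of (type, name) pairs, already in the first parameter order
    read from the kernel header file."""
    joined = ", ".join(f"{t} {n}" for t, n in params)
    return DECL_TEMPLATE.substitute(op_name=op_name, params=joined)

print(gen_declaration("add", [("Tensor", "x"), ("Tensor", "y"), ("Tensor", "out")]))
# void add(Tensor x, Tensor y, Tensor out);
```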
In some specific embodiments, the target parameters include a first target parameter for adjusting an input parameter transfer order and a second target parameter for adjusting an output parameter transfer order, where the input parameter transfer order is the order in which the inputs of the front-end kernel function are transferred to the input parameters of the back-end operator interface, and the output parameter transfer order is the order in which the outputs of the front-end kernel function are transferred to the output parameters of the back-end operator interface.
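Adjusting a parameter transfer order with a target parameter amounts to permuting the argument list. A minimal sketch, assuming the target parameter is a list of indices (the disclosure does not fix its concrete form):

```python
def reorder(args, order):
    """Reorder front-end arguments into the order the back-end
    operator interface expects, per a target parameter of index form."""
    return [args[i] for i in order]

# Front-end kernel function passes (x, y); a hypothetical back-end
# interface expects (y, x), expressed by the target parameter [1, 0].
print(reorder(["x", "y"], [1, 0]))  # ['y', 'x']
```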
In some embodiments, the front-end kernel function includes a memory allocation statement, and the front-end kernel function generating unit is specifically configured to:
And reading the second target parameter in the configuration file through a second statement template in the target template, so as to automatically generate the memory allocation statement in the front-end kernel function according to the second target parameter.
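The second statement template can likewise be sketched as a text template that emits one allocation statement per output, in the order given by the second target parameter. All names (`ALLOC_TEMPLATE`, `allocate`) are hypothetical.

```python
from string import Template

# Hypothetical second statement template for a memory allocation statement.
ALLOC_TEMPLATE = Template("$name = allocate($shape);")

def gen_alloc_statements(outputs, output_order):
    """outputs: list of (name, shape expression) pairs;
    output_order: the second target parameter, as a list of indices."""
    return [ALLOC_TEMPLATE.substitute(name=outputs[i][0], shape=outputs[i][1])
            for i in output_order]

print(gen_alloc_statements([("out", "x.shape")], [0]))
# ['out = allocate(x.shape);']
```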
In some specific embodiments, the front-end kernel function includes an interface call statement, and the front-end kernel function generating unit is specifically configured to:
And reading the interface name of the back-end operator interface in the configuration file through a third statement template in the target template, so as to automatically generate the interface call statement in the front-end kernel function.
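The third statement template completes the picture: it emits the call to whichever back-end operator interface the configuration file names. Again an illustrative sketch with hypothetical names, not the disclosure's actual template.

```python
from string import Template

# Hypothetical third statement template for the interface call statement.
CALL_TEMPLATE = Template("$iface($args);")

def gen_call_statement(interface_name, arg_names):
    """interface_name comes from the configuration file; arg_names are
    already in the adjusted parameter transfer order."""
    return CALL_TEMPLATE.substitute(iface=interface_name, args=", ".join(arg_names))

print(gen_call_statement("vendorA_add", ["y", "x", "out"]))
# vendorA_add(y, x, out);
```

Swapping hardware chip manufacturers then reduces to changing the interface name in the configuration file and regenerating.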
In some specific embodiments, the hardware chip access device may further include:
The operation processing judging unit is used for judging, for any operator, whether the operator needs to perform target operation processing;
And the operation processing identifier adding unit is used for adding a corresponding operation processing identifier for any operator in the configuration file if any operator needs to perform the target operation processing.
In some specific embodiments, the operation processing determining unit is specifically configured to:
Judging whether any operator needs to perform the target operation processing or not by searching a pre-configured target database, wherein the target database comprises operators needing to perform the target operation processing;
correspondingly, the operation processing identifier adding unit is specifically configured to:
If any operator is found in the target database, determining that the operator needs to perform the target operation processing, and adding a corresponding operation processing identifier for the operator in the configuration file.
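The database lookup described above is essentially a membership test followed by tagging the configuration entry. A minimal sketch, with hypothetical names (`TARGET_DB`, `op_process_id`) and example operators:

```python
# Hypothetical pre-configured target database: the set of operators
# that need target operation processing.
TARGET_DB = {"conv2d", "matmul"}

def tag_operator(op_name, config_entry):
    """If the operator is found in the target database, add a corresponding
    operation processing identifier to its configuration entry."""
    if op_name in TARGET_DB:
        config_entry["op_process_id"] = True
    return config_entry

print(tag_operator("conv2d", {}))  # {'op_process_id': True}
print(tag_operator("relu", {}))    # {}
```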
In some specific embodiments, the hardware chip access device may further include:
And the back-end operator interface definition unit is used for carrying out interface custom operation based on the hardware operator libraries of different hardware chip manufacturers to obtain the back-end operator interface.
In some specific embodiments, the hardware chip access device may further include:
And the operator name definition unit is used for generating the operator names of the operators in the configuration file in a custom mode.
Further, an embodiment of the present application also provides an electronic device. FIG. 5 is a block diagram of an electronic device according to an exemplary embodiment, and is not intended to limit the scope of use of the present application in any way.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may comprise, in particular, at least one processor 21, at least one memory 22, a display screen 23, an input-output interface 24, a communication interface 25, a power supply 26 and a communication bus 27. The memory 22 is configured to store a computer program, which is loaded and executed by the processor 21 to implement relevant steps in the hardware chip access method disclosed in any of the foregoing embodiments. In addition, the electronic device in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 26 is configured to provide working voltages for the hardware devices on the electronic device. The communication interface 25 is configured to create a data transmission channel between the electronic device and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein. The input/output interface 24 is configured to obtain external input data or to output data to the outside, and the specific interface type may be selected according to specific application needs, which is not specifically limited herein.
The memory 22 may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like, and the resources stored thereon may include the computer program 221, which may be stored in a temporary or permanent manner. In addition to the computer program for performing the hardware chip access method disclosed in any of the foregoing embodiments and executed by the electronic device, the stored resources may further include computer programs for performing other specific tasks.
Further, an embodiment of the present application also discloses a computer-readable storage medium for storing a computer program, where the computer program, when executed by a processor, implements the hardware chip access method disclosed in any of the foregoing embodiments.
For specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and no further description is given here.
In the present disclosure, each embodiment is described in a progressive manner, and each embodiment focuses on its differences from the other embodiments; for the same or similar parts between the embodiments, reference may be made to each other. For the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively simple, and for relevant parts, reference may be made to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be disposed in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes the method, apparatus, device and storage medium for accessing a hardware chip in detail. Specific examples are provided herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only intended to aid in understanding the method and core concept of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and application scope according to the concept of the present application. In summary, the content of this specification should not be construed as limiting the present application.