
CN118193071B - A hardware chip access method, device, equipment and medium - Google Patents

A hardware chip access method, device, equipment and medium

Info

Publication number: CN118193071B
Application number: CN202410384463.6A
Authority: CN (China)
Prior art keywords: operator, parameter, target, kernel, kernel function
Other versions: CN118193071A (Chinese)
Inventors: 王义元, 吴韶华
Current assignee: IEIT Systems Co Ltd
Original assignee: Inspur Electronic Information Industry Co Ltd
Application filed by Inspur Electronic Information Industry Co Ltd
Priority to CN202410384463.6A
Publication of application CN118193071A; application granted; publication of grant CN118193071B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/4401 - Bootstrapping
    • G06F9/4411 - Configuring for operating with peripheral devices; Loading of device drivers
    • G06F9/445 - Program loading or initiating
    • G06F9/44505 - Configuring for program initiating, e.g. using registry, configuration files

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Abstract


The present application discloses a hardware chip access method, apparatus, device and medium, relating to the field of computer technology. The method includes: determining a kernel header file based on a platform operator library released by a deep learning platform, where the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library; determining a configuration file for configuring the implementation of each operator in the kernel header file, determining a target template for normalizing the way hardware operator libraries of different hardware chip manufacturers are accessed, and then generating a front-end kernel function through the kernel header file, the configuration file and the target template; and connecting the front-end kernel function to a back-end operator interface, which in turn docks with the hardware operator libraries of the different hardware chip manufacturers, so as to realize hardware chip access. The application can thus reduce the development difficulty of hardware chip access and improve development efficiency and the flexibility of hardware chip access.

Description

Hardware chip access method, device, equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for accessing a hardware chip.
Background
With the development of technology, the hardware architectures and instruction sets designed by different hardware chip manufacturers may differ, and the software stacks and application scenarios of different hardware also differ significantly, so that diversified access schemes exist between the hardware and deep learning frameworks.
Taking the PaddlePaddle deep learning framework as an example, the core work of accessing different hardware from PaddlePaddle is to implement kernel functions (Kernels) on that hardware. Based on the Custom Kernel module provided by PaddlePaddle, a development engineer only needs to register the hand-written kernel functions into the highly reusable operator library of the Paddle framework to achieve unified scheduling and execution on newly accessed hardware. However, this access mode has the following defects. First, engineers have to develop kernel functions one by one based on the hardware operator libraries of different hardware chip manufacturers, and each kernel function has to be written manually; because different kernel functions for the same type of hardware are largely similar, developers spend a lot of time on repeated work, so the development cycle is long, working efficiency is low, and errors occur easily. Second, different kernel function access modes have to be adopted to adapt different hardware, so the development difficulty is high.
These technical problems therefore remain to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention aims to provide a method, an apparatus, a device and a medium for accessing a hardware chip, which can reduce the development difficulty of hardware chip access and improve development efficiency and the flexibility of hardware chip access. The specific scheme is as follows:
In a first aspect, the present application discloses a hardware chip access method, including:
Determining a kernel header file based on a platform operator library issued by a deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library;
Determining a configuration file for configuring the implementation modes of each operator in the kernel header file, determining a target template for carrying out standardization processing on the access modes of hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template;
Accessing the front-end kernel function to a back-end operator interface, and docking with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface, so as to realize the access of the hardware chips.
Optionally, the parameter information of each operator in the kernel header file includes a parameter type of each operator, a parameter name of each operator, and a first parameter order of each operator.
Optionally, the configuration file includes operator names of the operators in the kernel header file, interface names of the back-end operator interfaces, and target parameters for adjusting parameter transfer orders, where the back-end operator interfaces with different interface names correspond to hardware operator libraries of different hardware chip manufacturers, and the parameter transfer orders are orders when the front-end kernel function transfers parameters to the back-end operator interfaces.
Optionally, the generating the front-end kernel function through the kernel header file, the configuration file and the target template includes:
and reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function.
Optionally, the front-end kernel function includes an operator function declaration statement, and the reading, by the target template, the kernel header file and the configuration file so as to automatically generate the front-end kernel function includes:
And reading the parameter types of the operators, the parameter names of the operators and the first parameter sequence of the operators in the kernel header file through a first statement template in the target template so as to automatically generate the operator function statement in the front-end kernel function.
Optionally, the target parameters include a first target parameter for adjusting an input parameter transfer order and a second target parameter for adjusting an output parameter transfer order, where the input parameter transfer order is the order in which the input of the front-end kernel function passes parameters to the input of the back-end operator interface, and the output parameter transfer order is the order in which the output of the front-end kernel function passes parameters to the output of the back-end operator interface.
Optionally, the front-end kernel function includes a memory allocation statement, and the reading, by the target template, the kernel header file and the configuration file so as to automatically generate the front-end kernel function includes:
And reading the second target parameters in the configuration file through a second statement template in the target templates so as to automatically generate the memory allocation statement in the front-end kernel function according to the second target parameters.
Optionally, the front-end kernel function includes an interface call statement, and the reading, by the target template, the kernel header file and the configuration file so as to automatically generate the front-end kernel function includes:
and reading the interface name of the back-end operator interface in the configuration file through a third statement template in the target template so as to automatically generate the interface call statement in the front-end kernel function.
Optionally, the hardware chip access method further includes:
For any operator, judging whether the operator needs target operation processing;
If the operator needs the target operation processing, adding a corresponding operation processing identifier for the operator in the configuration file.
Optionally, the judging whether the operator needs the target operation processing includes:
Judging whether the operator needs the target operation processing by searching a pre-configured target database, wherein the target database includes the operators that need the target operation processing;
Correspondingly, the adding, if the operator needs the target operation processing, a corresponding operation processing identifier for the operator in the configuration file includes:
If the operator is found in the target database, determining that the operator needs the target operation processing, and adding a corresponding operation processing identifier for the operator in the configuration file.
Optionally, the hardware chip access method further includes:
and carrying out interface custom operation based on the hardware operator libraries of different hardware chip manufacturers to obtain the back-end operator interface.
Optionally, the hardware chip access method further includes:
And generating operator names of the operators in the configuration file in a custom mode.
In a second aspect, the present application discloses a hardware chip access device, including:
The header file determining module is used for determining a kernel header file based on a platform operator library issued by the deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library;
the configuration file and template determining module is used for determining a configuration file for configuring the implementation modes of the operators in the kernel header file and determining target templates for carrying out standardization processing on the access modes of the hardware operator libraries of different hardware chip manufacturers;
The front-end kernel function generating module is used for generating a front-end kernel function through the kernel header file, the configuration file and the target template;
And the hardware chip access module is used for accessing the front-end kernel function to a back-end operator interface, and docking with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface, so as to realize the access of the hardware chip.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
And a processor for executing the computer program to implement the hardware chip access method disclosed above.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the hardware chip access method disclosed above.
The hardware chip access method of the application includes: determining a kernel header file based on a platform operator library released by a deep learning platform, where the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library; determining a configuration file for configuring the implementation of each operator in the kernel header file, determining a target template for normalizing the access modes of the hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template; and accessing the front-end kernel function to a back-end operator interface and docking with the hardware operator libraries of the different hardware chip manufacturers through the back-end operator interface, so as to realize hardware chip access. In summary, the application decouples hardware chip access into a front end and a back end: the front end connects to the platform operator library of the deep learning platform through the front-end kernel function, and the back end connects to the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface. Because the front-end kernel function only needs to interface with the deep learning platform, different hardware chip manufacturers can reuse one set of front-end kernel functions, so a developer only needs to fill in the related configuration files for the relevant kernel functions to be generated automatically. Furthermore, the front-end kernel function is connected to the back-end operator interface, and the back-end operator interface docks with the hardware operator libraries of different hardware chip manufacturers; since there is only one way to connect the front-end kernel function to the back-end operator interface, different kernel function access modes are no longer needed to adapt different hardware chips, which reduces development difficulty and improves access flexibility.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from the provided drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for accessing a hardware chip disclosed by the application;
FIG. 2 is a flowchart of a specific hardware chip access method disclosed in the present application;
FIG. 3 is a schematic diagram of a specific hardware chip access method disclosed in the present application;
Fig. 4 is a schematic structural diagram of a hardware chip access device according to the present disclosure;
fig. 5 is a block diagram of an electronic device according to the present disclosure.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The traditional hardware chip access mode has the following defects. First, kernel functions have to be developed one by one by engineers based on the hardware operator libraries of different hardware chip manufacturers, and each kernel function has to be written manually; because different kernel functions for the same type of hardware are largely similar, developers spend a lot of time on repeated work, so the development cycle is long, working efficiency is low, and errors occur easily. Second, different kernel function access modes have to be adopted to adapt different hardware, so the development difficulty is high.
Therefore, the embodiment of the application provides a hardware chip access scheme, which can reduce the development difficulty of hardware chip access and improve the development efficiency and the flexibility of hardware chip access.
The embodiment of the application discloses a hardware chip access method, which is shown in fig. 1 and comprises the following steps:
Step S11, determining a kernel header file based on a platform operator library issued by the deep learning platform, wherein the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library.
In this embodiment, a kernel header file is first determined based on a platform operator library released by the deep learning platform, where the kernel header file includes the parameter information of each operator. For example, the parameter information of each operator includes the parameter type of each operator, the parameter name of each operator, and the first parameter order of each operator. It should be noted that the deep learning platform includes, but is not limited to, the PaddlePaddle deep learning framework and the like.
It can be understood that each operator represents one kind of operation. The parameter type is the type of a parameter participating in the operation, such as an integer type, a floating-point type or a string type; the parameter name is the name of a parameter participating in the operation, for example parameter A or parameter B; and the first parameter order is the order of the parameters participating in the operation. For example, add(A, B) indicates that parameter A is added to parameter B, and the first parameter order is that parameter A appears first and parameter B appears second.
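For illustration only, the parameter information that a kernel header file exposes for such an add operator could be organized as in the following sketch; the structure, the operator name and the field names are assumptions made for this sketch, not the format actually used by any particular platform operator library.

    # Hypothetical parameter information extracted from a kernel header file for an "add" operator.
    add_operator_info = {
        "operator": "add",
        "parameters": [                      # first parameter order: A before B
            {"name": "A", "type": "float"},  # parameter type and parameter name
            {"name": "B", "type": "float"},
        ],
        "outputs": [
            {"name": "Out", "type": "float"},
        ],
    }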
Step S12, determining a configuration file for configuring the implementation modes of the operators in the kernel header file, determining a target template for normalizing the access modes of hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template.
It should be noted that the kernel header file alone is not sufficient for the automatic generation of the front-end kernel function, so this embodiment supplements it with a configuration file that provides the remaining parameters required for automatic generation.
Thus, in this embodiment, a configuration file for configuring the implementation of each operator in the kernel header file is determined. Specifically, the configuration file contains the operator names of the operators in the kernel header file, the interface names of the back-end operator interfaces, and target parameters for adjusting the parameter transfer order, where back-end operator interfaces with different interface names correspond to the hardware operator libraries of different hardware chip manufacturers, and the parameter transfer order is the order in which the front-end kernel function passes parameters to the back-end operator interface.
The operator names of the operators in the configuration file can be generated in a customized way. That is, the operator names of the operators in the configuration file can be defined by the programmer, which facilitates development and makes the code easier to read and write. Furthermore, the deep learning platform can automatically match the corresponding operator in the platform operator library according to the parameter information in the kernel header file, so the programmer does not need to perform the docking manually.
The back-end operator interfaces with different interface names correspond to the hardware operator libraries of different hardware chip manufacturers, so that the embodiment can match the corresponding hardware operator libraries according to the interface names in the configuration file, and the access of the hardware chips can be realized rapidly and accurately.
The parameter order of the front-end kernel function may not correspond exactly to that of the back-end operator interface. The configuration file therefore defines target parameters for adjusting the parameter transfer order, through which the order in which the front-end kernel function passes parameters to the back-end operator interface can be adjusted, thereby avoiding the parameter transfer errors that such a mismatch would otherwise cause.
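As a minimal sketch only, one entry of such a configuration file might carry the three kinds of information described above; every field name and value here (operator_name, interface_name, input_order, output_order) is hypothetical and merely illustrates the customized operator name, the back-end interface name and the two target parameters.

    # Hypothetical configuration entry for one operator; field names and values are illustrative.
    config_entry = {
        "operator_name": "my_add",        # operator name, customized by the developer
        "interface_name": "vendorA_add",  # back-end operator interface of one vendor's hardware operator library
        "input_order": [1, 0],            # first target parameter: reorder the inputs before passing them on
        "output_order": [0],              # second target parameter: order (and count) of the outputs
    }

With input_order set to [1, 0], for instance, the second input of the front-end kernel function would be passed to the back-end operator interface first, which is exactly the kind of adjustment described above.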
Furthermore, in order to normalize the access modes of the hardware operator libraries of different hardware chip manufacturers, this embodiment generates a target template in which a plurality of statement templates are specified, and different statement templates are used to generate different parts of the front-end kernel function. In a specific embodiment, the target template may be a Python file.
Finally, the embodiment generates the front-end kernel function through the kernel header file, the configuration file and the target template. As a specific implementation manner for generating the front-end kernel, the embodiment reads the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function. The content contained in the front-end kernel function is explained below.
In some embodiments, the parameter type of each operator, the parameter name of each operator and the first parameter order of each operator in the kernel header file are read through a first statement template in a target template, so as to automatically generate the operator function statement in the front-end kernel function.
That is, in this embodiment, parameter information of each operator in the kernel header file is read through the first statement template in the target template, and an operator function statement in the front-end kernel function is automatically generated.
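Since the target template may be a Python file, a first statement template can be pictured as a small Python routine like the sketch below; the generated C-style declaration format and the helper name are assumptions made for this sketch, not the actual template used by the application.

    def generate_declaration(op_info):
        """Emit an operator function declaration statement from the header-file parameter information."""
        params = ", ".join(f'{p["type"]} {p["name"]}' for p in op_info["parameters"])
        outputs = ", ".join(f'{o["type"]}* {o["name"]}' for o in op_info["outputs"])
        # Keep the first parameter order exactly as it appears in the kernel header file.
        return f'void {op_info["operator"]}_kernel({params}, {outputs});'

    # With the add_operator_info sketched earlier this would yield, for example:
    #   void add_kernel(float A, float B, float* Out);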
In some embodiments, the target parameters include a first target parameter for adjusting an input parameter transfer order and a second target parameter for adjusting an output parameter transfer order, wherein the input parameter transfer order is an order in which inputs of the front-end kernel function transfer parameters to inputs of the back-end operator interface, and the output parameter transfer order is an order in which outputs of the front-end kernel function transfer parameters to outputs of the back-end operator interface. Further, the front-end kernel function comprises memory allocation statements, and the kernel header file and the configuration file are read through the target template so as to automatically generate the front-end kernel function.
That is, this embodiment can adjust not only the order in which input parameters are transferred but also the order in which output parameters are transferred. Since an individual operator may have several output parameters, several pieces of memory need to be allocated; this embodiment therefore determines the number of output parameters from the second target parameter and determines the number of memory allocations from the number of output parameters. The code statements describing these allocations are the memory allocation statements in the front-end kernel function.
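Under the same assumptions, a second statement template might derive the allocation statements from the second target parameter as follows; the dev_ctx.Alloc call is a placeholder for whatever device-memory allocation routine the generated kernel actually uses, not a real API.

    def generate_allocation(op_info, config_entry):
        """Emit one memory allocation statement per output listed in the second target parameter."""
        lines = []
        for idx in config_entry["output_order"]:   # the second target parameter determines the allocation count
            out = op_info["outputs"][idx]
            lines.append(f'dev_ctx.Alloc({out["name"]});')  # placeholder allocation statement
        return "\n".join(lines)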
In some embodiments, the front-end kernel function comprises an interface call statement, the kernel header file and the configuration file are read through the target template so as to automatically generate the front-end kernel function, and the interface name of the back-end operator interface in the configuration file is read through a third statement template in the target template so as to automatically generate the interface call statement in the front-end kernel function.
That is, since the back-end operator interfaces with different interface names correspond to the hardware operator libraries of different hardware chip manufacturers, the present embodiment reads the interface names of the back-end operator interfaces in the configuration file through the third statement template in the target template so as to match the corresponding hardware operator libraries.
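Continuing the same sketch, a third statement template could read the interface name from the configuration file and assemble the interface call statement; the argument layout shown here (reordered inputs followed by outputs) is only an assumed convention.

    def generate_interface_call(op_info, config_entry):
        """Emit the interface call statement; the interface name selects one vendor's hardware operator library."""
        inputs = [op_info["parameters"][i]["name"] for i in config_entry["input_order"]]
        outputs = [op_info["outputs"][i]["name"] for i in config_entry["output_order"]]
        args = ", ".join(inputs + outputs)
        return f'{config_entry["interface_name"]}({args});'

    # With the earlier example data this would yield: vendorA_add(B, A, Out);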
In addition, the front-end kernel function further contains a copyright statement, a header file introduction, a namespace definition, front-end/back-end stream unification, input and output parameter transfer, and a registration function. The copyright statement, header file introduction, namespace definition and stream unification are identical for every operator, while the input/output parameter transfer and the registration function can be obtained by reading the configuration file: the parameter transfer packs the input parameters and the output parameters and passes them to the back-end operator interface, and the registration function registers the operator name customized for the front-end kernel function.
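For orientation only, a generated front-end kernel function might therefore have roughly the following shape; every identifier in this text (GetUnifiedStream, Alloc, vendorA_add, REGISTER_KERNEL, and so on) is invented for the sketch and simply stands in for the fixed and generated parts listed above.

    # Illustrative shape of one generated front-end kernel function (all identifiers are invented).
    GENERATED_FRONTEND_KERNEL = """\
    // Copyright statement (identical for every operator)
    #include "backend_interface.h"          // header file introduction
    namespace frontend {                    // name space definition
    void my_add_kernel(const Tensor& A, const Tensor& B, Tensor* Out) {
      auto stream = GetUnifiedStream();     // front-end/back-end stream unification
      Alloc(Out);                           // memory allocation statement
      vendorA_add(B, A, Out, stream);       // interface call statement (inputs reordered)
    }
    }  // namespace frontend
    REGISTER_KERNEL(my_add, my_add_kernel); // registration function
    """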
Step S13, accessing the front-end kernel function to a back-end operator interface, and docking with the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface to realize the access of the hardware chips.
In this embodiment, the front-end kernel function is accessed to a back-end operator interface, and the hardware operator libraries of different hardware chip manufacturers are docked through the back-end operator interface, so as to implement hardware chip access. It should be noted that the back-end operator interface can be obtained by performing an interface customization operation on the hardware operator libraries of different hardware chip manufacturers.
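Purely as a schematic picture of that customization step (the real back-end interface would live in the vendor libraries' native language rather than Python), the relationship between interface names and vendor routines can be thought of as a small registry; all names and bodies below are invented.

    # Schematic registry mapping custom back-end interface names to stand-ins for vendor routines.
    BACKEND_REGISTRY = {
        "vendorA_add": lambda a, b: a + b,   # stand-in for a call into vendor A's hardware operator library
        "vendorB_add": lambda a, b: a + b,   # stand-in for a call into vendor B's hardware operator library
    }

    def call_backend(interface_name, *args):
        """Dispatch to the vendor routine registered under the given back-end interface name."""
        return BACKEND_REGISTRY[interface_name](*args)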
It should be noted that the hardware chip access oriented to the deep learning platform in this embodiment may be applied to a face recognition scenario. In that case, the kernel header file is determined from the platform operator library released by the face-recognition-oriented deep learning platform, and the parameter information of each operator in the kernel header file is consistent with the parameter information in that platform operator library; a configuration file for configuring the implementation of each operator in the kernel header file and a target template for normalizing the access modes of the hardware operator libraries of different hardware chip manufacturers are determined; a front-end kernel function is generated through the kernel header file, the configuration file and the target template; and the front-end kernel function is accessed to the back-end operator interface, which docks with the hardware operator libraries of the different hardware chip manufacturers, thereby realizing the access of the hardware chips used for face recognition.
In addition to face recognition, the technical scheme of this embodiment can also be applied to application scenarios such as speech recognition and autonomous driving.
The hardware chip access method of the application includes: determining a kernel header file based on a platform operator library released by a deep learning platform, where the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library; determining a configuration file for configuring the implementation of each operator in the kernel header file, determining a target template for normalizing the access modes of the hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template; and accessing the front-end kernel function to a back-end operator interface and docking with the hardware operator libraries of the different hardware chip manufacturers through the back-end operator interface, so as to realize hardware chip access. In summary, the application decouples hardware chip access into a front end and a back end: the front end connects to the platform operator library of the deep learning platform through the front-end kernel function, and the back end connects to the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface. Because the front-end kernel function only needs to interface with the deep learning platform, different hardware chip manufacturers can reuse one set of front-end kernel functions, so a developer only needs to fill in the related configuration files for the relevant kernel functions to be generated automatically. Furthermore, the front-end kernel function is connected to the back-end operator interface, and the back-end operator interface docks with the hardware operator libraries of different hardware chip manufacturers; since there is only one way to connect the front-end kernel function to the back-end operator interface, different kernel function access modes are no longer needed to adapt different hardware chips, which reduces development difficulty and improves access flexibility.
The embodiment of the application discloses a specific hardware chip access method, and compared with the previous embodiment, the technical scheme of the embodiment is further described and optimized. Referring to fig. 2, the method specifically includes:
Step S21, for any operator, judging whether the operator needs target operation processing.
In this embodiment, there may be a mismatch between the operator parameters in the platform operator library of the deep learning platform and the back-end operator interface, so that some operators need to be subjected to target operation processing to match the operator parameters in the platform operator library of the deep learning platform with the back-end operator interface.
Therefore, in this embodiment, a target database is generated in advance from the operators that need target operation processing, so that by searching this pre-configured target database it can be determined whether a given operator needs target operation processing. The target operation processing includes absolute-value operations, power operations, exponential operations and the like.
Step S22, if the operator needs the target operation processing, adding a corresponding operation processing identifier for the operator in the configuration file.
It can be understood that if an operator is found in the target database, it is determined that the operator needs target operation processing, and a corresponding operation processing identifier is added for the operator in the configuration file. A corresponding operation processing scheme can then be generated based on this identifier, which resolves the mismatch between the operator parameters in the platform operator library of the deep learning platform and the back-end operator interface.
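A minimal sketch of this lookup, assuming the target database is simply a mapping from operator names to the identifier recorded in the configuration file; the database contents and the op_process_id field are hypothetical.

    # Hypothetical pre-configured target database: operators that need extra target operation
    # processing (absolute value, power, exponential, ...) mapped to an identifier.
    TARGET_OP_DATABASE = {"abs_add": "abs", "pow_scale": "pow", "exp_sum": "exp"}

    def tag_target_operation(config_entry):
        """Add an operation processing identifier to a config entry when its operator is in the database."""
        op_name = config_entry["operator_name"]
        if op_name in TARGET_OP_DATABASE:                  # operator found in the target database
            config_entry["op_process_id"] = TARGET_OP_DATABASE[op_name]
        return config_entry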
Referring to fig. 3, taking the PaddlePaddle deep learning framework as an example of the deep learning platform, a kernel header file, a configuration file and a target template are first generated, and the kernel header file docks with PaddlePaddle; a front-end kernel function is generated from the kernel header file, the configuration file and the target template; the front-end kernel function is then accessed to the back-end operator interface, and the back-end operator interface docks with the hardware operator libraries of different hardware chip manufacturers, including but not limited to the hardware operator libraries of hardware chip manufacturers such as the jungle, Intel, the kansui, etc. In a specific embodiment, the application can realize the front-end/back-end decoupling of hardware chip access based on a multi-element heterogeneous AI (artificial intelligence) computing-power execution framework (TensorGlue), thereby reducing the difficulty of accessing hardware chips from PaddlePaddle. TensorGlue connects upward to PaddlePaddle and calls downward into the hardware operator libraries of hardware chip manufacturers, dividing the work of connecting PaddlePaddle to different hardware chips into a front-end module and a back-end module: the back end connects to the hardware operator libraries of different hardware chip manufacturers through the self-defined back-end operator interface, and the front end only needs to connect the platform operator library of the deep learning framework to the back-end operator interface to complete the adaptation of different hardware chips. The application reduces the difficulty of docking with complex hardware operator libraries through this front-end/back-end decoupling and increases access flexibility; at the same time, because the front end only needs to dock with PaddlePaddle, all hardware chip manufacturers can reuse one set of front-end code, and repeated code can be generated automatically in a templated manner. The application therefore greatly improves development efficiency. Furthermore, the writing of the front-end kernel function is standardized through the target template, which improves code readability and avoids hidden problems caused by developer negligence.
It should be noted that the method can be applied to the PaddlePaddle deep learning framework and also has the potential to be applied to other deep learning frameworks; for example, it can be adopted when PyTorch is accessed to various hardware chips, so as to reduce the difficulty of hardware chip access.
Correspondingly, the embodiment of the application also discloses a hardware chip access device, which is shown in fig. 4 and comprises:
the header file determining module 11 is used for determining a kernel header file based on a platform operator library issued by the deep learning platform, wherein parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library;
A configuration file and template determining module 12, configured to determine a configuration file for configuring implementation manners of the operators in the kernel header file, and determine target templates for performing standardization processing on access manners of hardware operator libraries of different hardware chip manufacturers;
A front-end kernel function generating module 13, configured to generate a front-end kernel function through the kernel header file, the configuration file, and the target template;
The hardware chip access module 14 is configured to access the front-end kernel function to a back-end operator interface, and interface the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface, so as to implement access of a hardware chip.
The more specific working process of each module may refer to the corresponding content disclosed in the foregoing embodiment, and will not be described herein.
The hardware chip access method of the application includes: determining a kernel header file based on a platform operator library released by a deep learning platform, where the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library; determining a configuration file for configuring the implementation of each operator in the kernel header file, determining a target template for normalizing the access modes of the hardware operator libraries of different hardware chip manufacturers, and generating a front-end kernel function through the kernel header file, the configuration file and the target template; and accessing the front-end kernel function to a back-end operator interface and docking with the hardware operator libraries of the different hardware chip manufacturers through the back-end operator interface, so as to realize hardware chip access. In summary, the application decouples hardware chip access into a front end and a back end: the front end connects to the platform operator library of the deep learning platform through the front-end kernel function, and the back end connects to the hardware operator libraries of different hardware chip manufacturers through the back-end operator interface. Because the front-end kernel function only needs to interface with the deep learning platform, different hardware chip manufacturers can reuse one set of front-end kernel functions, so a developer only needs to fill in the related configuration files for the relevant kernel functions to be generated automatically. Furthermore, the front-end kernel function is connected to the back-end operator interface, and the back-end operator interface docks with the hardware operator libraries of different hardware chip manufacturers; since there is only one way to connect the front-end kernel function to the back-end operator interface, different kernel function access modes are no longer needed to adapt different hardware chips, which reduces development difficulty and improves access flexibility.
In some specific embodiments, the parameter information of each operator in the kernel header file includes a parameter type of each operator, a parameter name of each operator, and a first parameter order of each operator.
In some embodiments, the configuration file includes operator names of the operators in the kernel header file, interface names of the back-end operator interfaces, and target parameters for adjusting parameter transfer orders, where the back-end operator interfaces with different interface names correspond to hardware operator libraries of different hardware chip manufacturers, and the parameter transfer orders are orders in which the front-end kernel function transfers parameters to the back-end operator interfaces.
In some specific embodiments, the front-end kernel function generating module 13 may specifically include:
And the front-end kernel function generating unit is used for reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function.
In some specific embodiments, the front-end kernel function includes an operator function declaration statement, and the front-end kernel function generating unit is specifically configured to:
And reading the parameter types of the operators, the parameter names of the operators and the first parameter sequence of the operators in the kernel header file through a first statement template in the target template so as to automatically generate the operator function statement in the front-end kernel function.
In some specific embodiments, the target parameters include a first target parameter for adjusting an input parameter transfer order and a second target parameter for adjusting an output parameter transfer order, where the input parameter transfer order is the order in which the input of the front-end kernel function passes parameters to the input of the back-end operator interface, and the output parameter transfer order is the order in which the output of the front-end kernel function passes parameters to the output of the back-end operator interface.
In some embodiments, the front-end kernel function includes a memory allocation statement, and the front-end kernel function generating unit is specifically configured to:
And reading the second target parameters in the configuration file through a second statement template in the target templates so as to automatically generate the memory allocation statement in the front-end kernel function according to the second target parameters.
In some specific embodiments, the front-end kernel function includes an interface call statement, and the front-end kernel function generating unit is specifically configured to:
and reading the interface name of the back-end operator interface in the configuration file through a third statement template in the target template so as to automatically generate the interface call statement in the front-end kernel function.
In some specific embodiments, the hardware chip access device may further include:
The operation processing judging unit is used for judging, for any operator, whether the operator needs target operation processing;
And the operation processing identifier adding unit is used for adding, if the operator needs the target operation processing, a corresponding operation processing identifier for the operator in the configuration file.
In some specific embodiments, the operation processing determining unit is specifically configured to:
Judging whether the operator needs the target operation processing by searching a pre-configured target database, wherein the target database includes the operators that need the target operation processing;
correspondingly, the operation processing identifier adding unit is specifically configured to:
if the operator is found in the target database, determine that the operator needs the target operation processing, and add a corresponding operation processing identifier for the operator in the configuration file.
In some specific embodiments, the hardware chip access device may further include:
And the back-end operator interface definition unit is used for carrying out interface custom operation based on the hardware operator libraries of different hardware chip manufacturers to obtain the back-end operator interface.
In some specific embodiments, the hardware chip access device may further include:
and the operator name definition unit is used for generating the operator names of the operators in the configuration file in a custom mode.
Further, the embodiment of the application also provides electronic equipment. FIG. 5 is a block diagram of an electronic device, according to an exemplary embodiment, and is not intended to limit the scope of use of the present application in any way.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may comprise, in particular, at least one processor 21, at least one memory 22, a display screen 23, an input-output interface 24, a communication interface 25, a power supply 26 and a communication bus 27. The memory 22 is configured to store a computer program, which is loaded and executed by the processor 21 to implement relevant steps in the hardware chip access method disclosed in any of the foregoing embodiments. In addition, the electronic device in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 26 is configured to provide working voltages for each hardware device of the electronic device. The communication interface 25 is configured to create a data transmission channel between the electronic device and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein. The input/output interface 24 is configured to obtain external input data or to output data to the outside, and its specific interface type may be selected according to the specific application needs, which is not specifically limited herein.
The memory 22 may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like, and the resources stored thereon may include the computer program 221, which may be stored in a temporary or permanent manner. The computer program 221 may further include a computer program for performing other specific tasks in addition to the computer program for performing the hardware chip access method performed by the electronic device as disclosed in any of the foregoing embodiments.
Further, the embodiment of the application also discloses a computer readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the hardware chip access method disclosed in any of the foregoing embodiments.
For specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and no further description is given here.
In the present disclosure, each embodiment is described in a progressive manner, and each embodiment focuses on the difference from other embodiments, and the same or similar parts between the embodiments refer to each other, that is, for the device disclosed in the embodiments, since the device corresponds to the method disclosed in the embodiments, the description is relatively simple, and the relevant parts refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes the method, apparatus, device and storage medium for accessing a hardware chip in detail, and specific examples are provided herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only for aiding in understanding of the method and core concept of the present application, and meanwhile, to those skilled in the art, according to the concept of the present application, there are variations in the specific embodiments and application ranges, so the disclosure should not be interpreted as limiting the application.

Claims (5)

1.一种硬件芯片接入方法,其特征在于,包括:1. A hardware chip access method, characterized by comprising: 基于深度学习平台发布的平台算子库确定内核头文件;所述内核头文件中的各个算子的参数信息与所述平台算子库中的参数信息相一致;Determine a kernel header file based on a platform operator library released by a deep learning platform; parameter information of each operator in the kernel header file is consistent with parameter information in the platform operator library; 确定用于配置所述内核头文件中的所述各个算子的实现方式的配置文件,并确定用于对不同硬件芯片厂商的硬件算子库的接入方式进行规范化处理的目标模板,然后通过所述内核头文件、所述配置文件和所述目标模板生成前端内核函数;Determine a configuration file for configuring the implementation mode of each operator in the kernel header file, and determine a target template for standardizing the access mode of the hardware operator library of different hardware chip manufacturers, and then generate a front-end kernel function through the kernel header file, the configuration file and the target template; 将所述前端内核函数接入至后端算子接口,并通过所述后端算子接口对接所述不同硬件芯片厂商的硬件算子库,以实现硬件芯片的接入;Connecting the front-end kernel function to the back-end operator interface, and connecting to the hardware operator libraries of the different hardware chip manufacturers through the back-end operator interface to achieve access to the hardware chip; 其中,所述内核头文件中的所述各个算子的参数信息包括所述各个算子的参数类型、所述各个算子的参数名称以及所述各个算子的第一参数顺序;The parameter information of each operator in the kernel header file includes the parameter type of each operator, the parameter name of each operator and the first parameter order of each operator; 所述配置文件包含所述内核头文件中的所述各个算子的算子名称、所述后端算子接口的接口名称以及用于调整参数传递顺序的目标参数;其中,不同接口名称的所述后端算子接口对应所述不同硬件芯片厂商的硬件算子库,所述参数传递顺序为所述前端内核函数向所述后端算子接口传递参数时的顺序;The configuration file includes the operator name of each operator in the kernel header file, the interface name of the backend operator interface, and the target parameter for adjusting the parameter passing order; wherein the backend operator interfaces with different interface names correspond to the hardware operator libraries of the different hardware chip manufacturers, and the parameter passing order is the order in which the frontend kernel function passes parameters to the backend operator interface; 所述通过所述内核头文件、所述配置文件和所述目标模板生成前端内核函数,包括:The generating of the front-end kernel function by using the kernel header file, the configuration file and the target template comprises: 通过所述目标模板读取所述内核头文件和所述配置文件,以便自动生成所述前端内核函数;Reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function; 所述前端内核函数中包含算子函数声明语句,所述通过所述目标模板读取所述内核头文件和所述配置文件,以便自动生成所述前端内核函数,包括:The front-end kernel function includes an operator function declaration statement, and the kernel header file and the configuration file are read through the target template to automatically generate the front-end kernel function, including: 通过所述目标模板中的第一语句模板读取所述内核头文件中的所述各个算子的参数类型、所述各个算子的参数名称以及所述各个算子的第一参数顺序,以便自动生成所述前端内核函数中的所述算子函数声明语句;Reading the parameter type of each operator, the parameter name of each operator and the first parameter order of each operator in the kernel header file through the first statement template in the target template, so as to automatically generate the operator function declaration statement in the front-end kernel function; 所述目标参数包括用于调整输入参数传递顺序的第一目标参数以及用于调整输出参数传递顺序的第二目标参数,其中,所述输入参数传递顺序为所述前端内核函数的输入向所述后端算子接口的输入传递参数时的顺序,所述输出参数传递顺序为所述前端内核函数的输出向所述后端算子接口的输出传递参数时的顺序;The target parameters include a first target parameter for adjusting the order of input parameter transfer and a second target parameter for adjusting the order of output parameter transfer, wherein the input parameter transfer 
order is the order in which the input of the front-end kernel function transfers parameters to the input of the back-end operator interface, and the output parameter transfer order is the order in which the output of the front-end kernel function transfers parameters to the output of the back-end operator interface; 其中,所述前端内核函数中包含内存分配语句,所述通过所述目标模板读取所述内核头文件和所述配置文件,以便自动生成所述前端内核函数,包括:The front-end kernel function includes a memory allocation statement, and the kernel header file and the configuration file are read through the target template to automatically generate the front-end kernel function, including: 通过所述目标模板中的第二语句模板读取所述配置文件中的所述第二目标参数,以便根据所述第二目标参数自动生成所述前端内核函数中的所述内存分配语句;Reading the second target parameter in the configuration file through the second statement template in the target template, so as to automatically generate the memory allocation statement in the front-end kernel function according to the second target parameter; 所述前端内核函数中包含接口调用语句,所述通过所述目标模板读取所述内核头文件和所述配置文件,以便自动生成所述前端内核函数,包括:The front-end kernel function includes an interface call statement, and the kernel header file and the configuration file are read through the target template to automatically generate the front-end kernel function, including: 通过所述目标模板中的第三语句模板读取所述配置文件中的所述后端算子接口的接口名称,以便自动生成所述前端内核函数中的所述接口调用语句;Reading the interface name of the backend operator interface in the configuration file through the third statement template in the target template, so as to automatically generate the interface call statement in the frontend kernel function; 其中,对于任一算子,判断所述任一算子是否需要进行目标运算处理;Wherein, for any operator, determining whether the operator needs to perform target operation processing; 若所述任一算子需要进行所述目标运算处理,则在所述配置文件中为所述任一算子添加相应的运算处理标识;If any of the operators needs to perform the target operation processing, a corresponding operation processing identifier is added for the any of the operators in the configuration file; 其中,所述判断所述任一算子是否需要进行目标运算处理,包括:Wherein, the determining whether any of the operators needs to perform target operation processing includes: 通过查找预先配置的目标数据库,判断所述任一算子是否需要进行所述目标运算处理;所述目标数据库中包括需要进行所述目标运算处理的算子;By searching a pre-configured target database, determining whether any of the operators needs to perform the target operation processing; the target database includes operators that need to perform the target operation processing; 相应的,所述若所述任一算子需要进行所述目标运算处理,则在所述配置文件中为所述任一算子添加相应的运算处理标识,包括:Correspondingly, if any of the operators needs to perform the target operation processing, a corresponding operation processing identifier is added to the configuration file for the any of the operators, including: 若在所述目标数据库中查找到所述任一算子,则判定所述任一算子需要进行所述目标运算处理,并在所述配置文件中为所述任一算子添加相应的运算处理标识;If any of the operators is found in the target database, it is determined that any of the operators needs to perform the target operation processing, and a corresponding operation processing identifier is added for any of the operators in the configuration file; 其中,所述硬件芯片接入方法,还包括:Wherein, the hardware chip access method further includes: 基于所述不同硬件芯片厂商的硬件算子库进行接口自定义操作以得到所述后端算子接口;Performing interface customization operation based on the hardware operator libraries of the different hardware chip manufacturers to obtain the backend operator interface; 其中,每一种算子表示一种运算方式,参数类型为参与运算的参数的类型,包括整数类型、浮点类型、字符类型、字符串类型,参数名称为参与运算的参数的名称,第一参数顺序为参与运算的各个参数的先后顺序。Among them, each operator represents an operation mode, the parameter type is the type of the parameter involved in the operation, including integer type, floating point type, character type, string type, the parameter 
2. The hardware chip access method according to claim 1, further comprising:
generating the operator names of the operators in the configuration file in a user-defined manner.
3. A hardware chip access device, comprising:
a header file determination module, configured to determine a kernel header file based on a platform operator library released by a deep learning platform, wherein the parameter information of each operator in the kernel header file is consistent with the parameter information in the platform operator library;
a configuration file and template determination module, configured to determine a configuration file used to configure the implementation mode of each operator in the kernel header file, and to determine a target template used to normalize the access mode of the hardware operator libraries of different hardware chip manufacturers;
a front-end kernel function generation module, configured to generate a front-end kernel function through the kernel header file, the configuration file and the target template;
a hardware chip access module, configured to connect the front-end kernel function to a back-end operator interface, and to connect, through the back-end operator interface, to the hardware operator libraries of the different hardware chip manufacturers, so as to achieve access to hardware chips;
wherein the parameter information of each operator in the kernel header file includes the parameter type of each operator, the parameter name of each operator and the first parameter order of each operator;
the configuration file contains the operator name of each operator in the kernel header file, the interface name of the back-end operator interface and target parameters used to adjust the parameter transfer order; wherein back-end operator interfaces with different interface names correspond to the hardware operator libraries of the different hardware chip manufacturers, and the parameter transfer order is the order in which the front-end kernel function transfers parameters to the back-end operator interface;
generating the front-end kernel function through the kernel header file, the configuration file and the target template includes:
reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function;
the front-end kernel function contains an operator function declaration statement, and reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function includes:
reading the parameter type of each operator, the parameter name of each operator and the first parameter order of each operator in the kernel header file through a first statement template in the target template, so as to automatically generate the operator function declaration statement in the front-end kernel function;
the target parameters include a first target parameter used to adjust the input parameter transfer order and a second target parameter used to adjust the output parameter transfer order, wherein the input parameter transfer order is the order in which the input of the front-end kernel function transfers parameters to the input of the back-end operator interface, and the output parameter transfer order is the order in which the output of the front-end kernel function transfers parameters to the output of the back-end operator interface;
wherein the front-end kernel function contains a memory allocation statement, and reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function includes:
reading the second target parameter in the configuration file through a second statement template in the target template, so as to automatically generate the memory allocation statement in the front-end kernel function according to the second target parameter;
the front-end kernel function contains an interface call statement, and reading the kernel header file and the configuration file through the target template so as to automatically generate the front-end kernel function includes:
reading the interface name of the back-end operator interface in the configuration file through a third statement template in the target template, so as to automatically generate the interface call statement in the front-end kernel function;
wherein, for any operator, determining whether that operator needs to undergo target operation processing;
if that operator needs to undergo the target operation processing, adding a corresponding operation processing identifier for that operator in the configuration file;
wherein determining whether that operator needs to undergo target operation processing includes:
determining whether that operator needs to undergo the target operation processing by searching a pre-configured target database, the target database containing the operators that need to undergo the target operation processing;
correspondingly, if that operator needs to undergo the target operation processing, adding a corresponding operation processing identifier for that operator in the configuration file includes:
if that operator is found in the target database, determining that the operator needs to undergo the target operation processing, and adding a corresponding operation processing identifier for that operator in the configuration file;
wherein the hardware chip access device further includes:
performing an interface customization operation based on the hardware operator libraries of the different hardware chip manufacturers to obtain the back-end operator interface;
wherein each operator represents an operation mode, the parameter type is the type of a parameter participating in the operation and includes an integer type, a floating-point type, a character type and a string type, the parameter name is the name of a parameter participating in the operation, and the first parameter order is the order of the parameters participating in the operation.
4. An electronic device, comprising:
a memory, configured to store a computer program;
a processor, configured to execute the computer program to implement the hardware chip access method according to claim 1 or 2.
5. A computer-readable storage medium, configured to store a computer program, wherein the computer program, when executed by a processor, implements the hardware chip access method according to claim 1 or 2.
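A minimal, non-claimed sketch of how the first, second and third statement templates could cooperate, assuming Python string templates and the hypothetical configuration fields from the earlier sketch; the template strings, function names and dictionary keys below are illustrative assumptions rather than the patented implementation:

    from string import Template

    # Hypothetical statement templates; the patent does not disclose their
    # concrete form, so these merely illustrate the mechanism.
    DECL_TMPL  = Template("void ${op_name}_kernel(${params});")            # first statement template: operator function declaration
    ALLOC_TMPL = Template("  auto ${out} = allocate_output(\"${out}\");")  # second statement template: memory allocation
    CALL_TMPL  = Template("  ${backend_api}(${args});")                    # third statement template: interface call

    def generate_frontend_kernel(header_op, cfg):
        """header_op: parameter types/names parsed from the kernel header file;
        cfg: one configuration entry (operator name, back-end interface name,
        first/second target parameters)."""
        # Operator function declaration statement from the kernel header file info.
        params = ", ".join(f"{t} {n}" for t, n in zip(header_op["types"], header_op["names"]))
        decl = DECL_TMPL.substitute(op_name=cfg["op_name"], params=params)

        # Memory allocation statements, ordered by the second target parameter.
        outputs = [header_op["outputs"][i] for i in cfg["output_order"]]
        allocs = [ALLOC_TMPL.substitute(out=o) for o in outputs]

        # Interface call statement, inputs ordered by the first target parameter.
        inputs = [header_op["inputs"][i] for i in cfg["input_order"]]
        call = CALL_TMPL.substitute(backend_api=cfg["backend_api"],
                                    args=", ".join(inputs + outputs))

        return decl[:-1] + " {\n" + "\n".join(allocs + [call]) + "\n}\n"

    # Example with hypothetical header information and configuration entry:
    hdr = {"types": ["const float*", "const float*", "float*"],
           "names": ["a", "b", "c"],
           "inputs": ["a", "b"],
           "outputs": ["c"]}
    cfg = {"op_name": "matmul", "backend_api": "vendorA_matmul",
           "input_order": [1, 0], "output_order": [0]}
    print(generate_frontend_kernel(hdr, cfg))

In the same spirit, the check for target operation processing could be a simple membership test of the operator name against the pre-configured target database, writing the operation processing identifier back into the configuration entry before generation; this is one plausible reading, not the disclosed implementation.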
CN202410384463.6A 2024-03-29 2024-03-29 A hardware chip access method, device, equipment and medium Active CN118193071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410384463.6A CN118193071B (en) 2024-03-29 2024-03-29 A hardware chip access method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN118193071A CN118193071A (en) 2024-06-14
CN118193071B true CN118193071B (en) 2025-01-17

Family

ID=91403377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410384463.6A Active CN118193071B (en) 2024-03-29 2024-03-29 A hardware chip access method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN118193071B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116932092A (en) * 2023-09-18 2023-10-24 之江实验室 Method, device, medium and equipment for automatically generating operator calling code

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8364802B1 (en) * 2008-09-23 2013-01-29 Gogrid, LLC System and method for monitoring a grid of hosting resources in order to facilitate management of the hosting resources
CN111527501B (en) * 2018-08-28 2023-08-01 深圳鲲云信息科技有限公司 Chip adaptation determining method and related product
CN110716874B (en) * 2019-09-25 2023-08-22 北京计算机技术及应用研究所 Domestic operating system hardware compatibility testing method
CN112688814B (en) * 2020-12-24 2022-07-12 新华三技术有限公司 Equipment access method, device, equipment and machine readable storage medium
CN113312103A (en) * 2021-05-31 2021-08-27 浙江商汤科技开发有限公司 Software definition method and device for intelligent camera, electronic equipment and storage medium
CN114201156B (en) * 2021-12-10 2022-08-05 北京百度网讯科技有限公司 Access method, device, electronic device and computer storage medium
CN116301892A (en) * 2023-03-03 2023-06-23 中国科学技术大学 Coarse-grained reconfigurable array operator design method and system for deep learning
CN116107669B (en) * 2023-04-14 2023-08-18 北京大学 Operator registration method, device, equipment and storage medium of deep learning framework

Also Published As

Publication number Publication date
CN118193071A (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN113742202B (en) AI chip verification system, method, device and storage medium
CN111399853A (en) Templated deployment method of machine learning model and custom operator
CN110928529A (en) Method and system for assisting operator development
WO2022142601A1 (en) Application program construction method and apparatus, and computer device
EP4575766A1 (en) Code management method and related device
CN118363932B (en) Unmanned aerial vehicle-based intelligent patrol method and system
US7444618B2 (en) Automatic generation of batch programs with identification, insertion of invariables, declarative statements and variables with the use of place-marks
CN111813393A (en) Application development method and device
CN115756433A (en) Code platform migration method and device, electronic equipment and readable storage medium
CN119576304A (en) Code generation method and related device
CN111930359B (en) System and method for developing algorithm on heterogeneous embedded system
CN118193071B (en) A hardware chip access method, device, equipment and medium
CN118069143B (en) Memory access processing method, device, electronic device and storage medium
CN118069142A (en) Compilation optimization method, device, electronic equipment and storage medium
CN112346736B (en) Data processing method and system
CN113703339A (en) Automatic driving simulation method, device, equipment and storage medium
CN114493360A (en) Process creative evaluation method, device, equipment and medium based on RPA and AI
Bocciarelli et al. A Methodological Template for Model Driven Systems Engineering.
CN115202714A (en) Method, device and storage medium for resolving component version conflicts
CN115292399B (en) Data conversion method, device, equipment and storage medium
CN116155716B (en) DDS deployment method and device based on MCU, electronic equipment and storage medium
CN119621187B (en) Method, device, equipment and medium for changing business status based on data dictionary
CN117992077A (en) Pipeline deployment method, device, equipment and storage medium
CN119690404A (en) Method and device for constructing basic input/output system and electronic equipment
CN117311850A (en) Processing method, device, equipment and storage medium of front end frame assembly

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant