US20130113809A1 - Technique for inter-procedural memory address space optimization in GPU computing compiler
- Publication number: US20130113809A1
- Authority: US (United States)
- Prior art keywords
- memory
- memory space
- pointer
- access operation
- program code
- Prior art date: 2011-11-07
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F8/41—Compilation
- G06F8/433—Dependency analysis; Data or control flow analysis
- G06F8/443—Optimisation
- G06F8/4442—Reducing the number of cache misses; Data prefetching
- G06F8/445—Exploiting fine grain parallelism, i.e. parallelism at instruction level
- G06F8/45—Exploiting coarse grain parallelism in compilation, i.e. parallelism between groups of instructions
- G06F8/456—Parallelism detection
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
- (All of the above fall under G06F—Electric digital data processing.)
Definitions
- The present invention generally relates to graphics processing unit (GPU) computing compilers and, more specifically, to a technique for inter-procedural memory address space optimization in a GPU computing compiler.
- Such a GPU typically includes a compiler that compiles program instructions for execution on one or more processing cores included within the GPU. Each such core may execute a particular execution thread in parallel with the threads executing on the other processing cores.
- A given core within a GPU may be coupled to a local memory space that is available to the GPU for memory access operations when executing a thread.
- Each core may also be coupled to a shared memory space to which one or more other cores may also be coupled. With this configuration, multiple cores may share data via the shared memory space.
- The cores within the GPU may also be coupled to a global memory space that is accessible to all processing cores and possibly to other processing units aside from the GPU itself.
- Such a non-uniform memory architecture includes multiple different memory spaces in which data may reside.
- A program designed to execute on a GPU may access data that resides in any or all of the different memory spaces in the non-uniform memory architecture.
- Different memory access operations may be specified, such as load/store operations or atomic operations, each of which may target a different address.
- A given memory access operation targeting a given memory address may not specify any particular memory space.
- In that case, the GPU executing the program typically reads a tag associated with the address that indicates the specific memory space in which to perform the memory access operation.
- A tag is required for each address because, for example, two different variables may both reside at the same address within different memory spaces. Without such a tag, the two variables would be indistinguishable based on their addresses alone.
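- To make the tag mechanism concrete, consider the following minimal CUDA sketch (the kernel and variable names are hypothetical, not taken from the patent): a single pointer value may refer to either of two memory spaces depending on a run-time condition, so the dereference cannot be bound to one space at compile time.

```cuda
// Hypothetical CUDA sketch of a generic memory access. Which space the
// pointer p refers to depends on a run-time value, so the GPU must consult
// the tag associated with the address when performing the load.
__device__ int g_value;                  // resides in the global memory space

__global__ void genericAccess(int *out, bool useShared)
{
    __shared__ int s_value;              // resides in the per-block shared memory space
    s_value = 1;
    g_value = 2;

    int *p = useShared ? &s_value : &g_value;  // generic pointer
    *out = *p;                           // generic memory access operation
}
```

In this sketch, the compiler cannot statically tell whether *p touches shared or global memory, which is precisely the situation the per-address tag resolves at run time.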
- One embodiment of the present invention sets forth a computer-implemented method for optimizing program code capable of being compiled for execution on a parallel processing unit (PPU) having a non-uniform memory architecture. The method includes identifying a first memory access operation that is associated with a first pointer, where the first memory access operation targets a generic memory space; ascending a use-definition chain related to the first pointer; adding the first pointer to a vector upon determining that the first pointer is derived from a specific memory space in the non-uniform memory architecture; and causing the first memory access operation to target the specific memory space by modifying at least a portion of the program code.
- One advantage of the disclosed technique is that a graphics processing unit is not required to resolve all generic memory access operations at run time, thereby conserving resources and accelerating the execution of the application. Further, the graphics processing unit is enabled to perform additional program code optimizations with the application program code, including memory access re-ordering and alias analysis, further accelerating program code execution.
- FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention.
- FIG. 2 is a block diagram of a parallel processing subsystem for the computer system of FIG. 1, according to one embodiment of the present invention.
- FIG. 3 illustrates a build process used to compile a co-processor enabled application, according to one embodiment of the present invention.
- FIG. 4 is a flow diagram of method steps for optimizing memory access operations, according to one embodiment of the present invention.
- FIG. 5 is a flow diagram of method steps for transferring constant variables to a global memory space, according to one embodiment of the present invention.
- FIG. 6 sets forth a pseudocode example to illustrate the operation of a device compiler and linker, according to one embodiment of the present invention.
- FIG. 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention.
- Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 communicating via an interconnection path that may include a memory bridge 105 .
- System memory 104 includes an image of an operating system 130 , a driver 103 , and a co-processor enabled application 134 .
- Operating system 130 provides detailed instructions for managing and coordinating the operation of computer system 100 .
- Driver 103 provides detailed instructions for managing and coordinating operation of parallel processing subsystem 112 and one or more parallel processing units (PPUs) residing therein, as described in greater detail below in conjunction with FIG. 2 .
- Co-processor enabled application 134 incorporates instructions capable of being executed on the CPU 102 and PPUs, those instructions being implemented in an abstract format, such as virtual assembly, and mapping to machine code for the PPUs within parallel processing subsystem 112 .
- The machine code for those PPUs may be stored in system memory 104 or in memory coupled to the PPUs.
- Co-processor enabled application 134 represents CUDA™ code that incorporates programming instructions intended to execute on parallel processing subsystem 112.
- As used herein, an “application” or “program” refers to any computer code, instructions, and/or functions that may be executed using a processor.
- For example, co-processor enabled application 134 may include C code, C++ code, etc.
- Co-processor enabled application 134 may also include a language extension of a computer language (e.g., C, C++, etc.).
- Memory bridge 105 which may be, e.g., a Northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an input/output (I/O) bridge 107 .
- I/O bridge 107 which may be, e.g., a Southbridge chip, receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to CPU 102 via communication path 106 and memory bridge 105 .
- Parallel processing subsystem 112 is coupled to memory bridge 105 via a bus or second communication path 113 (e.g., a Peripheral Component Interconnect Express (PCIe), Accelerated Graphics Port (AGP), or HyperTransport link); in one embodiment parallel processing subsystem 112 is a graphics subsystem that delivers pixels to a display device 110 that may be any conventional cathode ray tube, liquid crystal display, light-emitting diode display, or the like.
- A system disk 114 is also connected to I/O bridge 107 and may be configured to store content and applications and data for use by CPU 102 and parallel processing subsystem 112.
- System disk 114 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, and compact disc (CD) read-only memory (ROM), digital video disc (DVD) ROM, Blu-ray, high-definition (HD) DVD, or other magnetic, optical, or solid state storage devices.
- A switch 116 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120 and 121.
- Other components including universal serial bus (USB) or other port connections, CD drives, DVD drives, film recording devices, and the like, may also be connected to I/O bridge 107 .
- The various communication paths shown in FIG. 1, including the specifically named communication paths 106 and 113, may be implemented using any suitable protocols, such as PCIe, AGP, HyperTransport, or any other bus or point-to-point communication protocol(s), and connections between different devices may use different protocols as is known in the art.
- In one embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU).
- In another embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein.
- In yet another embodiment, the parallel processing subsystem 112 may be integrated with one or more other system elements in a single subsystem, such as joining the memory bridge 105, CPU 102, and I/O bridge 107 to form a system on chip (SoC).
- The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired.
- In some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102.
- In other alternative topologies, parallel processing subsystem 112 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105.
- I/O bridge 107 and memory bridge 105 might be integrated into a single chip instead of existing as one or more discrete devices.
- Large embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112 .
- The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported.
- In some embodiments, switch 116 is eliminated, and network adapter 118 and add-in cards 120, 121 connect directly to I/O bridge 107.
- FIG. 2 illustrates a parallel processing subsystem 112 , according to one embodiment of the present invention.
- Parallel processing subsystem 112 includes one or more parallel processing units (PPUs) 202, each of which is coupled to a local parallel processing (PP) memory 204.
- A parallel processing subsystem includes a number U of PPUs, where U is greater than or equal to 1.
- PPUs 202 and parallel processing memories 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.
- In some embodiments, some or all of PPUs 202 in parallel processing subsystem 112 are graphics processors with rendering pipelines that can be configured to perform various operations related to generating pixel data from graphics data supplied by CPU 102 and/or system memory 104 via memory bridge 105 and the second communication path 113, interacting with local parallel processing memory 204 (which can be used as graphics memory including, e.g., a conventional frame buffer) to store and update pixel data, delivering pixel data to display device 110, and the like.
- Parallel processing subsystem 112 may include one or more PPUs 202 that operate as graphics processors and one or more other PPUs 202 that are used for general-purpose computations.
- The PPUs may be identical or different, and each PPU may have its own dedicated parallel processing memory device(s) or no dedicated parallel processing memory device(s).
- One or more PPUs 202 in parallel processing subsystem 112 may output data to display device 110 or each PPU 202 in parallel processing subsystem 112 may output data to one or more display devices 110 .
- CPU 102 is the master processor of computer system 100 , controlling and coordinating operations of other system components.
- CPU 102 issues commands that control the operation of PPUs 202 .
- CPU 102 writes a stream of commands for each PPU 202 to a data structure (not explicitly shown in either FIG. 1 or FIG. 2 ) that may be located in system memory 104 , parallel processing memory 204 , or another storage location accessible to both CPU 102 and PPU 202 .
- A pointer to each data structure is written to a pushbuffer to initiate processing of the stream of commands in the data structure.
- PPU 202 reads command streams from one or more pushbuffers and then executes commands asynchronously relative to the operation of CPU 102 . Execution priorities may be specified for each pushbuffer by an application program via device driver 103 to control scheduling of the different pushbuffers.
- Each PPU 202 includes an I/O (input/output) unit 205 that communicates with the rest of computer system 100 via communication path 113 , which connects to memory bridge 105 (or, in one alternative embodiment, directly to CPU 102 ).
- The connection of PPU 202 to the rest of computer system 100 may also be varied.
- In some embodiments, parallel processing subsystem 112 is implemented as an add-in card that can be inserted into an expansion slot of computer system 100.
- In other embodiments, a PPU 202 can be integrated on a single chip with a bus bridge, such as memory bridge 105 or I/O bridge 107. In still other embodiments, some or all elements of PPU 202 may be integrated on a single chip with CPU 102.
- In one embodiment, communication path 113 is a PCIe link, as mentioned above, in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used.
- An I/O unit 205 generates packets (or other signals) for transmission on communication path 113 and also receives all incoming packets (or other signals) from communication path 113 , directing the incoming packets to appropriate components of PPU 202 .
- For example, commands related to processing tasks may be directed to a host interface 206, while commands related to memory operations (e.g., reading from or writing to parallel processing memory 204) may be directed to a memory crossbar unit 210.
- Host interface 206 reads each pushbuffer and outputs the command stream stored in the pushbuffer to a front end 212 .
- Each PPU 202 advantageously implements a highly parallel processing architecture.
- PPU 202(0) includes a processing cluster array 230 that includes a number C of general processing clusters (GPCs) 208, where C ≥ 1.
- Each GPC 208 is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program.
- Different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. The allocation of GPCs 208 may vary depending on the workload arising for each type of program or computation.
- GPCs 208 receive processing tasks to be executed from a work distribution unit within a task/work unit 207 .
- The work distribution unit receives pointers to processing tasks that are encoded as task metadata (TMD) and stored in memory.
- The pointers to TMDs are included in the command stream that is stored as a pushbuffer and received by the front end unit 212 from the host interface 206.
- Processing tasks that may be encoded as TMDs include indices of data to be processed, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed).
- The task/work unit 207 receives tasks from the front end 212 and ensures that GPCs 208 are configured to a valid state before the processing specified by each one of the TMDs is initiated.
- A priority may be specified for each TMD that is used to schedule execution of the processing task.
- Processing tasks can also be received from the processing cluster array 230 .
- The TMD can include a parameter that controls whether the TMD is added to the head or the tail of a list of processing tasks (or list of pointers to the processing tasks), thereby providing another level of control over priority.
- Memory interface 214 includes a number D of partition units 215 that are each directly coupled to a portion of parallel processing memory 204, where D ≥ 1. As shown, the number of partition units 215 generally equals the number of dynamic random access memory (DRAM) devices 220. In other embodiments, the number of partition units 215 may not equal the number of memory devices. Persons of ordinary skill in the art will appreciate that DRAM 220 may be replaced with other suitable storage devices and can be of generally conventional design; a detailed description is therefore omitted. Render targets, such as frame buffers or texture maps, may be stored across DRAMs 220, allowing partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processing memory 204.
- Any one of GPCs 208 may process data to be written to any of the DRAMs 220 within parallel processing memory 204 .
- Crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to another GPC 208 for further processing.
- GPCs 208 communicate with memory interface 214 through crossbar unit 210 to read from or write to various external memory devices.
- Crossbar unit 210 has a connection to memory interface 214 to communicate with I/O unit 205, as well as a connection to local parallel processing memory 204, thereby enabling the processing cores within the different GPCs 208 to communicate with system memory 104 or other memory that is not local to PPU 202.
- In some embodiments, crossbar unit 210 is directly connected with I/O unit 205.
- Crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and partition units 215 .
- GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including but not limited to, linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (e.g., applying laws of physics to determine position, velocity and other attributes of objects), image rendering operations (e.g., tessellation shader, vertex shader, geometry shader, and/or pixel shader programs), and so on.
- PPUs 202 may transfer data from system memory 104 and/or local parallel processing memories 204 into internal (on-chip) memory, process the data, and write result data back to system memory 104 and/or local parallel processing memories 204 , where such data can be accessed by other system components, including CPU 102 or another parallel processing subsystem 112 .
- A PPU 202 may be provided with any amount of local parallel processing memory 204, including no local memory, and may use local memory and system memory in any combination.
- For example, a PPU 202 can be a graphics processor in a unified memory architecture (UMA) embodiment. In such embodiments, little or no dedicated graphics (parallel processing) memory would be provided, and PPU 202 would use system memory exclusively or almost exclusively.
- In a UMA embodiment, a PPU 202 may be integrated into a bridge chip or processor chip, or provided as a discrete chip with a high-speed link (e.g., PCI Express) connecting the PPU 202 to system memory via a bridge chip or other communication means.
- Any number of PPUs 202 can be included in a parallel processing subsystem 112.
- For instance, multiple PPUs 202 can be provided on a single add-in card, or multiple add-in cards can be connected to communication path 113, or one or more of PPUs 202 can be integrated into a bridge chip.
- PPUs 202 in a multi-PPU system may be identical to or different from one another.
- For example, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so on.
- Where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202.
- Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like.
- In one embodiment, each PPU 202 is implemented with a non-uniform memory architecture. Accordingly, each such PPU 202 may have access to multiple different memory spaces, such as system memory 104 or PP memory 204, among others, as directed by co-processor enabled application 134.
- A compiler and linker application derived from device driver 103 is configured to optimize and compile program code in order to generate co-processor enabled application 134. That program code may initially include different memory access operations, such as load/store operations or atomic operations, that do not specify a particular memory space in which to perform the memory access operations.
- Such memory access operations are referred to herein as “generic memory access operations.”
- The compiler and linker application is configured to modify that program code, as needed, to resolve generic memory access operations into specific memory access operations that target a particular memory space, as described in greater detail below in conjunction with FIGS. 3-6.
- FIG. 3 illustrates the build process used to compile the co-processor enabled application 134 of FIG. 1 , according to one embodiment of the present invention.
- Program code 310 includes host source code 312 and device source code 314 .
- Host source code 312 incorporates programming instructions intended to execute on a host, such as an x86-based personal computer (PC) or server.
- The programming instructions in host source code 312 may include calls to functions defined in device source code 314. Any technically feasible mechanism may be used to specify which functions are designated as device source code 314.
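- As a hedged illustration (the function name and launch configuration are invented for this example), CUDA-style program code might designate device functions with the __global__ qualifier, so that the split between host source code 312 and device source code 314 looks as follows:

```cuda
// Hypothetical example of program code 310. scale() belongs to device source
// code 314 and is compiled by device compiler and linker 324; main() belongs
// to host source code 312 and is compiled by host compiler and linker 322.
__global__ void scale(float *data, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= factor;
}

int main()
{
    float *d_data;
    cudaMalloc(&d_data, 256 * sizeof(float));
    scale<<<1, 256>>>(d_data, 2.0f);     // host code calling into device code
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```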
- Host source code 312 is pre-processed, compiled, and linked by a host compiler and linker 322 .
- the host compiler and linker 322 generates host machine code 342 , which is stored within co-processor enabled application 134 .
- Device source code 314 is pre-processed, compiled, and linked by a device compiler and linker 324.
- This compile operation constitutes a first stage compile of device source code 314 .
- Device compiler and linker 324 generates device virtual assembly 346 , which is stored within a device code repository 350 , residing with or within co-processor enabled application 134 .
- A virtual instruction translator 334 may generate device machine code 344 from device virtual assembly 346.
- This compile operation constitutes a second stage compile of device source code 314 .
- Virtual instruction translator 334 may generate more than one version of device machine code 344 , based on the availability of known architecture definitions.
- For example, virtual instruction translator 334 may generate a first version of device machine code 344, which invokes native 64-bit arithmetic instructions (available in the first target architecture), and a second version of device machine code 344, which emulates 64-bit arithmetic functions on targets that do not include native 64-bit arithmetic instructions, as sketched below.
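- As a purely expository sketch of the kind of emulation mentioned above (not the translator's actual output), a 64-bit addition can be composed from 32-bit operations with an explicit carry:

```cpp
// Hypothetical emulation of a 64-bit add on a target that supports only
// 32-bit arithmetic: add the low halves, detect the carry via unsigned
// wraparound, and fold the carry into the sum of the high halves.
static inline void add64(unsigned lo_a, unsigned hi_a,
                         unsigned lo_b, unsigned hi_b,
                         unsigned &lo_r, unsigned &hi_r)
{
    lo_r = lo_a + lo_b;
    unsigned carry = (lo_r < lo_a) ? 1u : 0u;  // wraparound implies carry-out
    hi_r = hi_a + hi_b + carry;
}
```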
- Architectural information 348 indicates the real architecture version used to generate device machine code 344 .
- the real architecture version defines the features that are implemented in native instructions within a real execution target, such as the PPU 202 .
- Architectural information 348 also indicates the virtual architecture version used to generate device virtual assembly 346 .
- the virtual architecture version defines the features that are assumed to be either native or easily emulated and the features that are not practical to emulate. For example, atomic addition operations are not practical to emulate at the instruction level, although they may be avoided altogether at the algorithmic level in certain cases and, therefore, impact which functions may be compiled in the first compile stage.
- The device code repository 350 also includes architecture information 348, which indicates which architectural features were assumed when device machine code 344 and device virtual assembly 346 were generated. Persons skilled in the art will recognize that the functions included within device machine code 344 and virtual assembly 346 reflect functions associated with the real architecture of PPU 202.
- The architecture information 348 provides compatibility information for device machine code 344 and compiler hints for a second stage compile operation, which may be performed by a device driver 103 at some time after the development of co-processor enabled application 134 has already been completed.
- Device compiler and linker 324 is also configured to perform various optimization routines on different procedures and/or functions within program code 310.
- In particular, program code 310 may initially include generic memory access operations that do not specify a particular memory space, and device compiler and linker 324 is configured to modify that program code to resolve the generic memory access operations into memory access operations that target a particular memory space.
- FIG. 4 describes an approach for optimizing memory access operations.
- FIG. 5 describes an approach for transferring constant variables to reside in a global memory space.
- FIG. 6 outlines an exemplary scenario in which the approaches discussed in conjunction with FIGS. 4 and 5 may be beneficial.
- FIG. 4 is a flow diagram of method steps for optimizing memory access operations, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-2 , persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.
- Device compiler and linker 324 shown in FIG. 3 is configured to implement the method steps.
- A method 400 begins at step 402, where device compiler and linker 324 collects memory access operations within program code 310 that target a generic memory space.
- The memory access operations may be load/store operations or atomic operations and may involve, e.g., pointer de-referencing.
- At step 404, device compiler and linker 324 ascends a use-definition chain generated for the pointer associated with each memory access operation in order to determine the specific memory space from which the pointer is derived.
- Device compiler and linker 324 may generate the use-definition chain using conventional techniques, such as data flow analysis, in order to identify the use of the pointer and any previous definitions involving the pointer. In one embodiment, device compiler and linker 324 generates the use-definition chain using live analysis-based techniques.
- At step 406, device compiler and linker 324 adds each pointer derived from a specific memory space (such as, e.g., global memory, local memory, shared memory, etc.) to a vector.
- At step 408, device compiler and linker 324 modifies the memory access operation associated with each pointer in the vector to target the specific memory space from which the pointer was derived. For example, a particular pointer p derived from global memory may be de-referenced during a load operation.
- In that case, device compiler and linker 324 could replace the pointer de-reference with a load operation specifically targeting global memory.
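- The following C++ sketch shows one way steps 402 through 408 could be organized inside a compiler pass. It is a simplified illustration under stated assumptions, not the patent's implementation: the IR types Def and MemOp and the recursive resolution strategy are invented for exposition.

```cpp
#include <cstddef>
#include <vector>

enum class Space { Generic, Global, Local, Shared, Constant };

struct Def   { Space space = Space::Generic; std::vector<Def*> operands; };
struct MemOp { Def *pointer = nullptr; Space space = Space::Generic; };

// Ascend the use-definition chain (step 404): a pointer resolves to a
// specific space only if every reaching definition derives from that space.
static Space resolveSpace(const Def *d)
{
    if (d->operands.empty())
        return d->space;                        // a declaration in a known space
    Space s = resolveSpace(d->operands[0]);
    for (std::size_t i = 1; i < d->operands.size(); ++i)
        if (resolveSpace(d->operands[i]) != s)  // e.g., a branch merge point
            return Space::Generic;
    return s;
}

// Steps 402-408: collect generic accesses, resolve each pointer's space,
// gather the resolvable ones into a vector, and retarget their accesses.
void optimizeGenericAccesses(std::vector<MemOp*> &genericOps)
{
    std::vector<MemOp*> resolvable;             // the "vector" of step 406
    for (MemOp *op : genericOps)
        if (resolveSpace(op->pointer) != Space::Generic)
            resolvable.push_back(op);
    for (MemOp *op : resolvable)
        op->space = resolveSpace(op->pointer);  // step 408: retarget the access
}
```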
- In some cases, device compiler and linker 324 may not be able to implement the method 400 to modify a given memory access operation to target a specific memory space within program code 310.
- Such a situation may occur when program code 310 includes a branch instruction. Since the outcome of a branch instruction is unknown until run time, memory access operations that target different memory spaces depending on the outcome of the branch instruction may not be modifiable in the fashion described above. In some cases those memory access operations may be left untouched as generic memory access operations and resolved at run time.
- In other cases, device compiler and linker 324 is configured to transfer certain constant variables and the associated memory access operations within program code 310 to reside in and target, respectively, a global memory space, as discussed in greater detail below in conjunction with FIG. 5.
- FIG. 5 is a flow diagram of method steps for transferring constant variables to reside in global memory space, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-2 , persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.
- Device compiler and linker 324 shown in FIG. 3 is configured to implement the method steps.
- A method 500 begins at step 502, where, for each constant address in program code 310, device compiler and linker 324 descends the definition-use chain for the constant address until a memory access operation is reached.
- Device compiler and linker 324 may generate the definition-use chain using conventional techniques, such as data flow analysis, in order to identify the declaration of the constant address and any subsequent uses.
- In one embodiment, device compiler and linker 324 generates the definition-use chain using live analysis-based techniques.
- At step 504, for each memory access operation reached at step 502 and associated with a particular constant address, device compiler and linker 324 marks the constant declaration associated with that constant address as “must-transfer” if the memory access operation is not resolved to a specific memory space.
- At step 506, device compiler and linker 324 generates a dependency list for each memory access operation.
- At step 508, device compiler and linker 324 identifies any dependency lists that include constant addresses with declarations marked as “must-transfer.”
- At step 510, device compiler and linker 324 marks any memory access operations associated with the dependency lists identified at step 508 as “must-transfer.”
- At step 512, device compiler and linker 324 marks any constant declarations associated with constant addresses within the identified dependency lists as “must-transfer.”
- At step 514, device compiler and linker 324 modifies each transferable constant declaration to specify a location in global memory space.
- At step 516, device compiler and linker 324 modifies each transferable memory access operation to target global memory. The method 500 then ends.
- Device compiler and linker 324 is thus capable of transferring constant variables to reside in a global memory space in situations where branch instructions would otherwise leave memory access operations involving those constant variables as generic memory access operations. Furthermore, device compiler and linker 324 is also configured to transfer any constant variables and associated memory access operations that depend on previously-transferred variables, thereby ensuring that all dependent constant variables are transferred together.
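- A simplified C++ sketch of the “must-transfer” marking follows. The types and the single-pass structure are assumptions made for exposition (a production pass might iterate steps 508-512 to a fixed point); this is not the patent's actual code.

```cpp
#include <vector>

struct ConstDecl { bool mustTransfer = false; };

struct MemOp {
    bool resolved = false;             // false if left generic (e.g., due to branches)
    bool mustTransfer = false;
    std::vector<ConstDecl*> deps;      // dependency list of step 506
};

void transferConstants(std::vector<MemOp> &ops)
{
    // Steps 502-504: mark constant declarations reached by unresolved accesses.
    for (MemOp &op : ops)
        if (!op.resolved)
            for (ConstDecl *c : op.deps)
                c->mustTransfer = true;

    // Steps 508-512: an access whose dependency list contains a marked
    // declaration is itself marked, along with every constant it depends on.
    for (MemOp &op : ops) {
        bool dependsOnMarked = false;
        for (ConstDecl *c : op.deps)
            if (c->mustTransfer) { dependsOnMarked = true; break; }
        if (dependsOnMarked) {
            op.mustTransfer = true;                  // step 510
            for (ConstDecl *c : op.deps)
                c->mustTransfer = true;              // step 512
        }
    }
    // Steps 514-516 would then rewrite marked declarations to reside in,
    // and marked accesses to target, the global memory space.
}
```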
- FIG. 6 sets forth a pseudocode example to illustrate the operation of a device compiler and linker, according to one embodiment of the present invention.
- Pseudocode 600 includes pseudocode blocks 610, 620, 630, and 640.
- Pseudocode block 610 includes two constant int declarations for variables c1 and c2 and a shared int declaration for variable s.
- Pseudocode block 620 includes three pointer assignments, p1, p2, and p4, to the addresses of variables c1, s, and c2, respectively.
- Pseudocode block 630 includes branch instructions 632 and 634 that assign pointers p3 and p5, respectively, differently depending on which branch is followed at run time.
- Pseudocode block 640 includes memory access operations that set the data stored at pointers p3, p5, and p1 to variables x, y, and z, respectively.
- Pseudocode 600 described above could be easily implemented in a variety of programming languages.
- For example, pseudocode 600 may be implemented in the CUDA™ programming language and may represent some or all of program code 310.
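- FIG. 6 itself is not reproduced in this text, but the walkthrough below implies a structure along the following lines. This is a hedged, CUDA-flavored reconstruction, and it is pseudocode rather than legal CUDA: device-side writes through pointers into constant memory are exactly the accesses that motivate the transfer to global memory described in conjunction with FIG. 5.

```cuda
// Plausible reconstruction of pseudocode 600 (assumed, not copied from FIG. 6).
// cond1, cond2, x, y, and z stand for run-time values.
__constant__ int c1;                 // pseudocode block 610
__constant__ int c2;
__shared__   int s;

int *p1 = &c1;                       // pseudocode block 620
int *p2 = &s;
int *p4 = &c2;

int *p3 = cond1 ? p1 : p2;           // branch instruction 632
int *p5 = cond2 ? p1 : p4;           // branch instruction 634

*p3 = x;                             // pseudocode block 640: constant or shared -> generic
*p5 = y;                             // always constant-derived -> resolvable
*p1 = z;                             // constant-derived -> resolvable
```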
- Consider, as an example, device compiler and linker 324 performing the method 400 described above in conjunction with FIG. 4.
- In this example, device compiler and linker 324 first identifies the memory access operations within pseudocode block 640, similar to step 402 of the method 400. Those memory access operations are associated with pointers p1, p3, and p5, as is shown.
- Device compiler and linker 324 then ascends the use-definition chain of each such memory access operation, similar to step 404 of the method 400.
- Specifically, device compiler and linker 324 ascends the use-definition chain of p3 by following each branch of branch instruction 632 up to the pointer assignments of p1 and p2 in pseudocode block 620, then tracing variables c1 and s back to the declarations of those variables within pseudocode block 610.
- Likewise, device compiler and linker 324 ascends the use-definition chain of p5 by following each branch of branch instruction 634 up to the pointer assignments of p1 and p4 in pseudocode block 620, then tracing variables c1 and c2 back to the declarations of those variables within pseudocode block 610.
- Device compiler and linker 324 ascends the use-definition chain of p1 by tracing that pointer back to the pointer assignment in pseudocode block 620, then tracing variable c1 back to the declaration of that variable within pseudocode block 610.
- For each pointer associated with the memory access operations processed at step 404, device compiler and linker 324 adds the pointer to a vector if that pointer is derived from a specific memory space, similar to step 406 in the method 400.
- Pointer p1 is derived from constant variable c1, which resides in constant memory. Accordingly, device compiler and linker 324 adds p1 to the vector.
- Pointer p3 is derived from either p1 or p2, depending on branch instruction 632. Since p1 and p2 are derived from constant memory and shared memory, respectively, the memory access associated with p3 cannot be resolved to a specific memory space, and pointer p3 is not added to the vector.
- Pointer p5 is derived from either of constant variables c1 and c2, and so regardless of which branch of branch instruction 634 is followed at run time, p5 will still be derived from constant memory. Accordingly, device compiler and linker 324 adds p5 to the vector.
- Device compiler and linker 324 traverses the vector and, for each pointer in the vector, modifies the associated memory access operation to target the specific memory space from which the pointer was derived, similar to step 408 of the method 400. In doing so, device compiler and linker 324 modifies the memory access operations of p1 and p5 to specifically target constant memory. The memory access operation associated with p3 is left as a generic memory access operation.
- Device compiler and linker 324 may then re-process pseudocode 600 by performing the method 500 of FIG. 5 on the pseudocode 600, as discussed by way of example below.
- Now consider device compiler and linker 324 performing the method 500 described above in conjunction with FIG. 5.
- In this example, device compiler and linker 324 first descends the definition-use chain of each constant address until a memory access operation is reached, similar to step 502 of the method 500.
- Device compiler and linker 324 descends the definition-use chains of constant variables c1 and c2, declared in pseudocode block 610, until reaching the memory access operations associated with those constant variables. As shown, c1 can be traced down to memory access operations involving pointers p1, p3, and p5, while c2 can be traced down to memory access operations involving just pointer p5.
- For each of those memory access operations derived from a particular constant declaration, device compiler and linker 324 marks that constant declaration as “must-transfer” if the memory access is not resolved to a specific memory space, similar to step 504 of the method 500. As discussed above in the previous example, the memory access operation associated with pointer p3 was left as a generic memory access operation, and so device compiler and linker 324 marks the constant declaration associated with that memory access operation (the declaration for c1) as “must-transfer.”
- Device compiler and linker 324 then generates a dependency list for each memory access, similar to step 506 of the method 500 .
- Device compiler and linker 324 is configured to identify any dependency lists that include constant addresses with constant declarations marked as “must-transfer,” similar to step 508 of the method 500 .
- In this example, the memory access operation associated with pointer p1 depends on c1, which was marked as “must-transfer.”
- Similarly, the memory access operation associated with pointer p3 depends on c1, and the memory access operation associated with pointer p5 also depends on c1. Accordingly, device compiler and linker 324 would identify the dependency lists associated with those memory access operations.
- Device compiler and linker 324 would then mark the memory access operations associated with the identified dependency lists as “must-transfer,” similar to step 510 of the method 500 . In the example described herein, device compiler and linker 324 would mark all of the memory access operations shown in pseudocode block 640 as “must-transfer.”
- Device compiler and linker 324 would then mark any other constant declarations associated with constant addresses in the identified dependency lists as “must-transfer,” similar to step 512 of the method 500 .
- Specifically, device compiler and linker 324 would determine that the memory access operation for p5 depends on constant variable c2, and, since the dependency list for that memory access operation was identified previously, the constant variable declaration for c2 would also be marked as “must-transfer.”
- Device compiler and linker 324 would then modify each “must-transfer” constant variable declaration to reside in global memory, similar to step 514 of the method 500 , and then modify each “must-transfer” memory access operation to target global memory, similar to step 516 of the method 500 . In doing so, device compiler and linker 324 may also promote data from the constant memory space to the global memory space, as needed. By performing the technique described in this example, device compiler and linker 324 transfers all constant memory variables and memory access operations to reside in and target, respectively, global memory, thus avoiding situations where a generic memory access operation may or may not target constant memory depending on the outcome of a branch instruction.
- In sum, a device compiler and linker is configured to optimize program code of a co-processor enabled application by resolving generic memory access operations within that program code to target specific memory spaces.
- When a generic memory access operation cannot be resolved and may target constant memory, constant variables associated with those generic memory access operations are transferred to reside in global memory.
- Advantageously, a graphics processing unit is not required to resolve all generic memory access operations at run time, thereby conserving resources and accelerating the execution of the application.
- Further, the GPU is enabled to perform additional program code optimizations with the application program code, including memory access re-ordering and alias analysis, further accelerating program code execution.
- One embodiment of the invention may be implemented as a program product for use with a computer system.
- The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media.
- Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
Priority Applications (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/659,802 US20130113809A1 (en) | 2011-11-07 | 2012-10-24 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
| PCT/US2012/063756 WO2013070636A1 (en) | 2011-11-07 | 2012-11-06 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
| CN2012800066826A CN103339621A (zh) | 2011-11-07 | 2012-11-06 | 用于gpu计算编译器中的过程间存储器地址空间优化的技术 |
| DE112012000214T DE112012000214T5 (de) | 2011-11-07 | 2012-11-06 | Technik zur inter-prozeduralen Speicheradressenraumoptimierung in GPU-Rechencompiler |
| TW101141369A TWI509561B (zh) | 2011-11-07 | 2012-11-07 | 在圖形處理器電腦編譯器中最佳化程序間記憶體位址空間的技術 |
| US16/195,776 US20190087164A1 (en) | 2011-11-07 | 2018-11-19 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201161556782P | 2011-11-07 | 2011-11-07 | |
| US13/659,802 US20130113809A1 (en) | 2011-11-07 | 2012-10-24 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/195,776 Continuation US20190087164A1 (en) | 2011-11-07 | 2018-11-19 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20130113809A1 (en) | 2013-05-09 |
Family
ID=48223398
Family Applications (6)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/659,802 Abandoned US20130113809A1 (en) | 2011-11-07 | 2012-10-24 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
| US13/659,786 Active 2033-05-03 US9009686B2 (en) | 2011-11-07 | 2012-10-24 | Algorithm for 64-bit address mode optimization |
| US13/660,986 Active US9639336B2 (en) | 2011-11-07 | 2012-10-25 | Algorithm for vectorization and memory coalescing during compiling |
| US13/661,478 Active 2034-12-17 US10228919B2 (en) | 2011-11-07 | 2012-10-26 | Demand-driven algorithm to reduce sign-extension instructions included in loops of a 64-bit computer program |
| US13/669,401 Active 2033-10-07 US9436447B2 (en) | 2011-11-07 | 2012-11-05 | Technique for live analysis-based rematerialization to reduce register pressures and enhance parallelism |
| US16/195,776 Pending US20190087164A1 (en) | 2011-11-07 | 2018-11-19 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
Family Applications After (5)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/659,786 Active 2033-05-03 US9009686B2 (en) | 2011-11-07 | 2012-10-24 | Algorithm for 64-bit address mode optimization |
| US13/660,986 Active US9639336B2 (en) | 2011-11-07 | 2012-10-25 | Algorithm for vectorization and memory coalescing during compiling |
| US13/661,478 Active 2034-12-17 US10228919B2 (en) | 2011-11-07 | 2012-10-26 | Demand-driven algorithm to reduce sign-extension instructions included in loops of a 64-bit computer program |
| US13/669,401 Active 2033-10-07 US9436447B2 (en) | 2011-11-07 | 2012-11-05 | Technique for live analysis-based rematerialization to reduce register pressures and enhance parallelism |
| US16/195,776 Pending US20190087164A1 (en) | 2011-11-07 | 2018-11-19 | Technique for inter-procedural memory address space optimization in gpu computing compiler |
Country Status (5)
| Country | Link |
|---|---|
| US (6) | US20130113809A1 (zh) |
| CN (5) | CN103460188A (zh) |
| DE (5) | DE112012000212T5 (zh) |
| TW (5) | TWI604410B (zh) |
| WO (5) | WO2013070637A1 (zh) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104834532A (zh) * | 2015-06-03 | 2015-08-12 | 星环信息科技(上海)有限公司 | 一种分布式数据向量化处理方法和装置 |
| CN104915180A (zh) * | 2014-03-10 | 2015-09-16 | 华为技术有限公司 | 一种数据操作的方法和设备 |
| US20160041816A1 (en) * | 2013-04-26 | 2016-02-11 | The Trustees Of Columbia University In The City Of New York | Systems and methods for mobile applications |
| US10061592B2 (en) | 2014-06-27 | 2018-08-28 | Samsung Electronics Co., Ltd. | Architecture and execution for efficient mixed precision computations in single instruction multiple data/thread (SIMD/T) devices |
| US10061591B2 (en) | 2014-06-27 | 2018-08-28 | Samsung Electronics Company, Ltd. | Redundancy elimination in single instruction multiple data/thread (SIMD/T) execution processing |
| US10684834B2 (en) * | 2016-10-31 | 2020-06-16 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting inter-instruction data dependency |
| US10877757B2 (en) * | 2017-11-14 | 2020-12-29 | Nvidia Corporation | Binding constants at runtime for improved resource utilization |
| US11663044B2 (en) | 2020-10-22 | 2023-05-30 | Shanghai Biren Technology Co., Ltd | Apparatus and method for secondary offloads in graphics processing unit |
| US11748077B2 (en) | 2020-10-22 | 2023-09-05 | Shanghai Biren Technology Co., Ltd | Apparatus and method and computer program product for compiling code adapted for secondary offloads in graphics processing unit |
| US12524212B2 (en) | 2022-04-15 | 2026-01-13 | Nvidia Corporation | Control of storage aliasing via automatic application of artificial dependences during program compilation |
Families Citing this family (52)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA2957010C (en) | 2008-09-26 | 2017-07-04 | Relievant Medsystems, Inc. | Systems and methods for navigating an instrument through bone |
| US10028753B2 (en) | 2008-09-26 | 2018-07-24 | Relievant Medsystems, Inc. | Spine treatment kits |
| US9430204B2 (en) | 2010-11-19 | 2016-08-30 | Microsoft Technology Licensing, Llc | Read-only communication operator |
| US9507568B2 (en) | 2010-12-09 | 2016-11-29 | Microsoft Technology Licensing, Llc | Nested communication operator |
| US9395957B2 (en) * | 2010-12-22 | 2016-07-19 | Microsoft Technology Licensing, Llc | Agile communication operator |
| US20130113809A1 (en) * | 2011-11-07 | 2013-05-09 | Nvidia Corporation | Technique for inter-procedural memory address space optimization in gpu computing compiler |
| US9164743B2 (en) * | 2012-07-02 | 2015-10-20 | International Business Machines Corporation | Strength reduction compiler optimizations for operations with unknown strides |
| US10588691B2 (en) | 2012-09-12 | 2020-03-17 | Relievant Medsystems, Inc. | Radiofrequency ablation of tissue within a vertebral body |
| US9710388B2 (en) * | 2014-01-23 | 2017-07-18 | Qualcomm Incorporated | Hardware acceleration for inline caches in dynamic languages |
| EP3123315B1 (en) * | 2014-03-27 | 2018-11-14 | Microsoft Technology Licensing, LLC | Hierarchical directives-based management of runtime behaviors |
| US9389890B2 (en) | 2014-03-27 | 2016-07-12 | Microsoft Technology Licensing, Llc | Hierarchical directives-based management of runtime behaviors |
| CN106708593B (zh) * | 2015-07-16 | 2020-12-08 | 中兴通讯股份有限公司 | 一种程序链接的编译方法及装置 |
| CN105183433B (zh) * | 2015-08-24 | 2018-02-06 | 上海兆芯集成电路有限公司 | 指令合并方法以及具有多数据通道的装置 |
| KR20170047957A (ko) * | 2015-10-26 | 2017-05-08 | 삼성전자주식회사 | 반도체 장치의 동작 방법 및 반도체 시스템 |
| CN105302577B (zh) * | 2015-11-26 | 2019-05-07 | 上海兆芯集成电路有限公司 | 驱动执行单元的机器码产生方法以及装置 |
| GB2546308B (en) * | 2016-01-15 | 2019-04-03 | Advanced Risc Mach Ltd | Data processing systems |
| CN107292808B (zh) * | 2016-03-31 | 2021-01-05 | 阿里巴巴集团控股有限公司 | 图像处理方法、装置及图像协处理器 |
| CN105955892A (zh) * | 2016-04-25 | 2016-09-21 | 浪潮电子信息产业股份有限公司 | 一种计算机系统中地址空间的扩展方法 |
| US10198259B2 (en) * | 2016-06-23 | 2019-02-05 | Advanced Micro Devices, Inc. | System and method for scheduling instructions in a multithread SIMD architecture with a fixed number of registers |
| EP3270371B1 (en) * | 2016-07-12 | 2022-09-07 | NXP USA, Inc. | Method and apparatus for managing graphics layers within a graphics display component |
| US10359971B2 (en) * | 2017-07-17 | 2019-07-23 | Hewlett Packard Enterprise Development Lp | Storing memory profile data of an application in non-volatile memory |
| WO2019089918A1 (en) | 2017-11-03 | 2019-05-09 | Coherent Logix, Inc. | Programming flow for multi-processor system |
| US11468312B2 (en) | 2018-02-02 | 2022-10-11 | Samsung Electronics Co., Ltd. | Memory management for machine learning training on GPU |
| US11068247B2 (en) | 2018-02-06 | 2021-07-20 | Microsoft Technology Licensing, Llc | Vectorizing conditional min-max sequence reduction loops |
| CN108304218A (zh) * | 2018-03-14 | 2018-07-20 | 郑州云海信息技术有限公司 | 一种汇编代码的编写方法、装置、系统和可读存储介质 |
| US11277455B2 (en) | 2018-06-07 | 2022-03-15 | Mellanox Technologies, Ltd. | Streaming system |
| US10691430B2 (en) * | 2018-08-27 | 2020-06-23 | Intel Corporation | Latency scheduling mehanism |
| US20200106828A1 (en) * | 2018-10-02 | 2020-04-02 | Mellanox Technologies, Ltd. | Parallel Computation Network Device |
| CN111428327A (zh) * | 2018-12-24 | 2020-07-17 | 深圳市中兴微电子技术有限公司 | 一种指令硬件架构的构建方法、装置及存储介质 |
| US12417083B2 (en) | 2019-01-25 | 2025-09-16 | The Regents Of The University Of California | Coalescing operand register file for graphical processing units |
| US11625393B2 (en) | 2019-02-19 | 2023-04-11 | Mellanox Technologies, Ltd. | High performance computing system |
| EP3699770B1 (en) | 2019-02-25 | 2025-05-21 | Mellanox Technologies, Ltd. | Collective communication system and methods |
| US11294685B2 (en) * | 2019-06-04 | 2022-04-05 | International Business Machines Corporation | Instruction fusion using dependence analysis |
| CN110162330B (zh) * | 2019-07-08 | 2021-04-13 | 上海赫千电子科技有限公司 | 一种应用于汽车ecu升级文件的系统及方法 |
| US11580434B2 (en) * | 2019-10-17 | 2023-02-14 | Microsoft Technology Licensing, Llc | Automatic accuracy management for quantum programs via symbolic resource estimation |
| US11200061B2 (en) * | 2019-11-19 | 2021-12-14 | Microsoft Technology Licensing, Llc | Pre-instruction scheduling rematerialization for register pressure reduction |
| CN112862658A (zh) * | 2019-11-28 | 2021-05-28 | 中兴通讯股份有限公司 | Gpu运行方法、装置、设备及存储介质 |
| US11750699B2 (en) | 2020-01-15 | 2023-09-05 | Mellanox Technologies, Ltd. | Small message aggregation |
| US11252027B2 (en) | 2020-01-23 | 2022-02-15 | Mellanox Technologies, Ltd. | Network element supporting flexible data reduction operations |
| US11429310B2 (en) | 2020-03-06 | 2022-08-30 | Samsung Electronics Co., Ltd. | Adjustable function-in-memory computation system |
| TWI850513B (zh) | 2020-03-06 | 2024-08-01 | 南韓商三星電子股份有限公司 | 用於記憶體內計算的方法及用於計算的系統 |
| US11210071B2 (en) | 2020-04-01 | 2021-12-28 | Microsoft Technology Licensing, Llc | Compiler sub expression directed acyclic graph (DAG) remat for register pressure |
| US11876885B2 (en) | 2020-07-02 | 2024-01-16 | Mellanox Technologies, Ltd. | Clock queue with arming and/or self-arming features |
| US11474798B2 (en) * | 2020-08-24 | 2022-10-18 | Huawei Technologies Co., Ltd. | Method and system for optimizing access to constant memory |
| US11556378B2 (en) | 2020-12-14 | 2023-01-17 | Mellanox Technologies, Ltd. | Offloading execution of a multi-task parameter-dependent operation to a network device |
| US12118359B2 (en) * | 2021-05-20 | 2024-10-15 | Huawei Technologies Co., Ltd. | Method and system for optimizing address calculations |
| US11645076B2 (en) * | 2021-07-26 | 2023-05-09 | International Business Machines Corporation | Register pressure target function splitting |
| US12309070B2 (en) | 2022-04-07 | 2025-05-20 | Nvidia Corporation | In-network message aggregation for efficient small message transport |
| US11922237B1 (en) | 2022-09-12 | 2024-03-05 | Mellanox Technologies, Ltd. | Single-step collective operations |
| US12099823B2 (en) * | 2023-01-16 | 2024-09-24 | International Business Machines Corporation | Reducing register pressure |
| CN116155854B (zh) * | 2023-02-20 | 2025-03-25 | 深圳市闪联信息技术有限公司 | 一种局域网互联码的生成方法和装置、解码方法和装置 |
| US12489657B2 (en) | 2023-08-17 | 2025-12-02 | Mellanox Technologies, Ltd. | In-network compute operation spreading |
Family Cites Families (91)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4449196A (en) | 1979-04-27 | 1984-05-15 | Pritchard Eric K | Data processing system for multi-precision arithmetic |
| TW237529B (zh) * | 1990-08-23 | 1995-01-01 | Supercomp Systems Ltd Partnership | |
| IL100990A (en) | 1991-02-27 | 1995-10-31 | Digital Equipment Corp | Multilanguage optimizing compiler using templates in multiple pass code generation |
| US6286135B1 (en) * | 1997-03-26 | 2001-09-04 | Hewlett-Packard Company | Cost-sensitive SSA-based strength reduction algorithm for a machine with predication support and segmented addresses |
| DE69804708T2 (de) * | 1997-03-29 | 2002-11-14 | Imec Vzw, Leuven Heverlee | Method and apparatus for size optimization of memory units |
| US7548238B2 (en) * | 1997-07-02 | 2009-06-16 | Nvidia Corporation | Computer graphics shader systems and methods |
| US5903761A (en) | 1997-10-31 | 1999-05-11 | Preemptive Solutions, Inc. | Method of reducing the number of instructions in a program code sequence |
| DE69739404D1 (de) * | 1997-12-10 | 2009-06-25 | Hitachi Ltd | Optimized memory access method |
| US6748587B1 (en) * | 1998-01-02 | 2004-06-08 | Hewlett-Packard Development Company, L.P. | Programmatic access to the widest mode floating-point arithmetic supported by a processor |
| US6256784B1 (en) | 1998-08-14 | 2001-07-03 | Ati International Srl | Interpreter with reduced memory access and improved jump-through-register handling |
| US6757892B1 (en) | 1999-06-24 | 2004-06-29 | Sarnoff Corporation | Method for determining an optimal partitioning of data among several memories |
| US6415311B1 (en) | 1999-06-24 | 2002-07-02 | Ati International Srl | Sign extension circuit and method for unsigned multiplication and accumulation |
| US6438747B1 (en) * | 1999-08-20 | 2002-08-20 | Hewlett-Packard Company | Programmatic iteration scheduling for parallel processors |
| US20020129340A1 (en) | 1999-10-28 | 2002-09-12 | Tuttle Douglas D. | Reconfigurable isomorphic software representations |
| US6523173B1 (en) | 2000-01-11 | 2003-02-18 | International Business Machines Corporation | Method and apparatus for allocating registers during code compilation using different spill strategies to evaluate spill cost |
| US6941549B1 (en) * | 2000-03-31 | 2005-09-06 | International Business Machines Corporation | Communicating between programs having different machine context organizations |
| US20020038453A1 (en) | 2000-08-09 | 2002-03-28 | Andrew Riddle | Method and system for software optimization |
| US7039906B1 (en) * | 2000-09-29 | 2006-05-02 | International Business Machines Corporation | Compiler for enabling multiple signed independent data elements per register |
| GB2367650B (en) | 2000-10-04 | 2004-10-27 | Advanced Risc Mach Ltd | Single instruction multiple data processing |
| US20030028864A1 (en) | 2001-01-29 | 2003-02-06 | Matt Bowen | System, method and article of manufacture for successive compilations using incomplete parameters |
| US20040205740A1 (en) * | 2001-03-29 | 2004-10-14 | Lavery Daniel M. | Method for collection of memory reference information and memory disambiguation |
| JP3763516B2 (ja) | 2001-03-30 | 2006-04-05 | International Business Machines Corporation | Conversion program, compiler, computer apparatus, and program conversion method |
| US20020144101A1 (en) | 2001-03-30 | 2002-10-03 | Hong Wang | Caching DAG traces |
| US7162716B2 (en) * | 2001-06-08 | 2007-01-09 | Nvidia Corporation | Software emulator for optimizing application-programmable vertex processing |
| US6865614B2 (en) * | 2001-07-02 | 2005-03-08 | Hewlett-Packard Development Company, L.P. | Method for transferring a packed data structure to an unpacked data structure by copying the packed data using pointer |
| US7032215B2 (en) * | 2001-10-11 | 2006-04-18 | Intel Corporation | Method and system for type demotion of expressions and variables by bitwise constant propagation |
| US7107584B2 (en) * | 2001-10-23 | 2006-09-12 | Microsoft Corporation | Data alignment between native and non-native shared data structures |
| US8914590B2 (en) | 2002-08-07 | 2014-12-16 | Pact Xpp Technologies Ag | Data processing method and device |
| US20110161977A1 (en) | 2002-03-21 | 2011-06-30 | Martin Vorbach | Method and device for data processing |
| US7243342B2 (en) | 2002-06-11 | 2007-07-10 | Intel Corporation | Methods and apparatus for determining if a user-defined software function is a memory allocation function during compile-time |
| US6954841B2 (en) | 2002-06-26 | 2005-10-11 | International Business Machines Corporation | Viterbi decoding for SIMD vector processors with indirect vector element access |
| US7353243B2 (en) * | 2002-10-22 | 2008-04-01 | Nvidia Corporation | Reconfigurable filter node for an adaptive computing machine |
| US7051322B2 (en) * | 2002-12-06 | 2006-05-23 | @Stake, Inc. | Software analysis framework |
| JP2004303113A (ja) * | 2003-04-01 | 2004-10-28 | Hitachi Ltd | Compiler with optimization processing for hierarchical memory, and code generation method |
| US20040221283A1 (en) | 2003-04-30 | 2004-11-04 | Worley Christopher S. | Enhanced, modulo-scheduled-loop extensions |
| US20050071823A1 (en) | 2003-09-29 | 2005-03-31 | Xiaodong Lin | Apparatus and method for simulating segmented addressing on a flat memory model architecture |
| US7124271B2 (en) * | 2003-10-14 | 2006-10-17 | Intel Corporation | Method and system for allocating register locations in a memory during compilation |
| US7457936B2 (en) * | 2003-11-19 | 2008-11-25 | Intel Corporation | Memory access instruction vectorization |
| US7567252B2 (en) | 2003-12-09 | 2009-07-28 | Microsoft Corporation | Optimizing performance of a graphics processing unit for efficient execution of general matrix operations |
| US7814467B2 (en) * | 2004-01-15 | 2010-10-12 | Hewlett-Packard Development Company, L.P. | Program optimization using object file summary information |
| US7376813B2 (en) | 2004-03-04 | 2008-05-20 | Texas Instruments Incorporated | Register move instruction for section select of source operand |
| US8689202B1 (en) * | 2004-03-30 | 2014-04-01 | Synopsys, Inc. | Scheduling of instructions |
| US8677312B1 (en) * | 2004-03-30 | 2014-03-18 | Synopsys, Inc. | Generation of compiler description from architecture description |
| US7386842B2 (en) * | 2004-06-07 | 2008-06-10 | International Business Machines Corporation | Efficient data reorganization to satisfy data alignment constraints |
| US7802076B2 (en) * | 2004-06-24 | 2010-09-21 | Intel Corporation | Method and apparatus to vectorize multiple input instructions |
| US7472382B2 (en) * | 2004-08-30 | 2008-12-30 | International Business Machines Corporation | Method for optimizing software program using inter-procedural strength reduction |
| US7389499B2 (en) * | 2004-10-21 | 2008-06-17 | International Business Machines Corporation | Method and apparatus for automatically converting numeric data to a processor efficient format for performing arithmetic operations |
| US7730114B2 (en) * | 2004-11-12 | 2010-06-01 | Microsoft Corporation | Computer file system |
| US7681187B2 (en) * | 2005-03-31 | 2010-03-16 | Nvidia Corporation | Method and apparatus for register allocation in presence of hardware constraints |
| TWI306215B (en) * | 2005-04-29 | 2009-02-11 | Ind Tech Res Inst | Method and corresponding apparatus for compiling high-level languages into specific processor architectures |
| CN100389420C (zh) * | 2005-09-13 | 2008-05-21 | Vimicro Corporation | Method and apparatus for accelerating file system operations using a coprocessor |
| US7694288B2 (en) * | 2005-10-24 | 2010-04-06 | Analog Devices, Inc. | Static single assignment form pattern matcher |
| US20070124631A1 (en) | 2005-11-08 | 2007-05-31 | Boggs Darrell D | Bit field selection instruction |
| JP4978025B2 (ja) * | 2006-02-24 | 2012-07-18 | Hitachi, Ltd. | Pointer compression/decompression method, program for executing the same, and computer system using the same |
| WO2008002173A1 (en) * | 2006-06-20 | 2008-01-03 | Intel Corporation | Method and apparatus to call native code from a managed code application |
| US8321849B2 (en) * | 2007-01-26 | 2012-11-27 | Nvidia Corporation | Virtual architecture and instruction set for parallel thread computing |
| US9601199B2 (en) * | 2007-01-26 | 2017-03-21 | Intel Corporation | Iterator register for structured memory |
| US9361078B2 (en) | 2007-03-19 | 2016-06-07 | International Business Machines Corporation | Compiler method of exploiting data value locality for computation reuse |
| US8671401B2 (en) * | 2007-04-09 | 2014-03-11 | Microsoft Corporation | Tiling across loop nests with possible recomputation |
| US8411096B1 (en) * | 2007-08-15 | 2013-04-02 | Nvidia Corporation | Shader program instruction fetch |
| US20090070753A1 (en) | 2007-09-07 | 2009-03-12 | International Business Machines Corporation | Increase the coverage of profiling feedback with data flow analysis |
| US8555266B2 (en) | 2007-11-13 | 2013-10-08 | International Business Machines Corporation | Managing variable assignments in a program |
| US7809925B2 (en) | 2007-12-07 | 2010-10-05 | International Business Machines Corporation | Processing unit incorporating vectorizable execution unit |
| JP5244421B2 (ja) | 2008-02-29 | 2013-07-24 | Sony Computer Entertainment Inc. | Information processing apparatus and program division method |
| US8255884B2 (en) * | 2008-06-06 | 2012-08-28 | International Business Machines Corporation | Optimized scalar promotion with load and splat SIMD instructions |
| US20100184380A1 (en) | 2009-01-20 | 2010-07-22 | Qualcomm Incorporated | Mitigating intercarrier and intersymbol interference in asynchronous wireless communications |
| US20100199270A1 (en) | 2009-01-30 | 2010-08-05 | Ivan Baev | System, method, and computer-program product for scalable region-based register allocation in compilers |
| US8713543B2 (en) * | 2009-02-11 | 2014-04-29 | Johnathan C. Mun | Evaluation compiler method |
| US8831666B2 (en) | 2009-06-30 | 2014-09-09 | Intel Corporation | Link power savings with state retention |
| US8819622B2 (en) | 2009-09-25 | 2014-08-26 | Advanced Micro Devices, Inc. | Adding signed 8/16/32-bit integers to 64-bit integers |
| US8271763B2 (en) * | 2009-09-25 | 2012-09-18 | Nvidia Corporation | Unified addressing and instructions for accessing parallel memory spaces |
| CA2684226A1 (en) * | 2009-10-30 | 2011-04-30 | Ibm Canada Limited - Ibm Canada Limitee | Eliminating redundant operations for common properties using shared real registers |
| US8578357B2 (en) * | 2009-12-21 | 2013-11-05 | Intel Corporation | Endian conversion tool |
| US8453135B2 (en) | 2010-03-11 | 2013-05-28 | Freescale Semiconductor, Inc. | Computation reuse for loops with irregular accesses |
| CN101833435A (zh) * | 2010-04-19 | 2010-09-15 | Tianjin University | Instruction redundancy elimination method for a configurable processor based on a transport triggered architecture |
| US8645758B2 (en) * | 2010-04-29 | 2014-02-04 | International Business Machines Corporation | Determining page faulting behavior of a memory operation |
| US8954418B2 (en) * | 2010-05-14 | 2015-02-10 | Sap Se | Performing complex operations in a database using a semantic layer |
| US8799583B2 (en) * | 2010-05-25 | 2014-08-05 | International Business Machines Corporation | Atomic execution over accesses to multiple memory locations in a multiprocessor system |
| US8538912B2 (en) * | 2010-09-22 | 2013-09-17 | Hewlett-Packard Development Company, L.P. | Apparatus and method for an automatic information integration flow optimizer |
| US8997066B2 (en) * | 2010-12-27 | 2015-03-31 | Microsoft Technology Licensing, Llc | Emulating pointers |
| GB2488980B (en) * | 2011-03-07 | 2020-02-19 | Advanced Risc Mach Ltd | Address generation in a data processing apparatus |
| US8640112B2 (en) * | 2011-03-30 | 2014-01-28 | National Instruments Corporation | Vectorizing combinations of program operations |
| US20130113809A1 (en) * | 2011-11-07 | 2013-05-09 | Nvidia Corporation | Technique for inter-procedural memory address space optimization in gpu computing compiler |
| US9092228B2 (en) * | 2012-01-17 | 2015-07-28 | Texas Instruments Incorporated | Systems and methods for software instruction translation from a high-level language to a specialized instruction set |
| JP5840014B2 (ja) * | 2012-02-01 | 2016-01-06 | International Business Machines Corporation | Compilation method, program, and information processing apparatus |
| US9043582B2 (en) * | 2012-09-14 | 2015-05-26 | Qualcomm Innovation Center, Inc. | Enhanced instruction scheduling during compilation of high level source code for improved executable code |
| US9411558B2 (en) * | 2012-10-20 | 2016-08-09 | Luke Hutchison | Systems and methods for parallelization of program code, interactive data visualization, and graphically-augmented code editing |
| US10140403B2 (en) * | 2012-12-01 | 2018-11-27 | Synopsys Inc. | Managing model checks of sequential designs |
| US9396240B2 (en) * | 2013-12-03 | 2016-07-19 | Business Objects Software Ltd. | Extreme visualization enabling extension for large data sets |
| US9710245B2 (en) * | 2014-04-04 | 2017-07-18 | Qualcomm Incorporated | Memory reference metadata for compiler optimization |
| US10444759B2 (en) * | 2017-06-14 | 2019-10-15 | Zoox, Inc. | Voxel based ground plane estimation and object segmentation |
2012
- 2012-10-24 US US13/659,802 patent/US20130113809A1/en not_active Abandoned
- 2012-10-24 US US13/659,786 patent/US9009686B2/en active Active
- 2012-10-25 US US13/660,986 patent/US9639336B2/en active Active
- 2012-10-26 US US13/661,478 patent/US10228919B2/en active Active
- 2012-11-05 US US13/669,401 patent/US9436447B2/en active Active
- 2012-11-06 CN CN2012800132283A patent/CN103460188A/zh active Pending
- 2012-11-06 WO PCT/US2012/063757 patent/WO2013070637A1/en not_active Ceased
- 2012-11-06 CN CN2012800066826A patent/CN103339621A/zh active Pending
- 2012-11-06 WO PCT/US2012/063723 patent/WO2013070616A1/en not_active Ceased
- 2012-11-06 CN CN201280006681.1A patent/CN103348317B/zh active Active
- 2012-11-06 WO PCT/US2012/063754 patent/WO2013070635A1/en not_active Ceased
- 2012-11-06 DE DE112012000212T patent/DE112012000212T5/de active Pending
- 2012-11-06 WO PCT/US2012/063756 patent/WO2013070636A1/en not_active Ceased
- 2012-11-06 DE DE112012000195T patent/DE112012000195T5/de active Pending
- 2012-11-06 CN CN201280003006.3A patent/CN104641350A/zh active Pending
- 2012-11-06 WO PCT/US2012/063730 patent/WO2013070621A2/en not_active Ceased
- 2012-11-06 DE DE112012000187T patent/DE112012000187T5/de active Pending
- 2012-11-06 DE DE112012000214T patent/DE112012000214T5/de active Pending
- 2012-11-06 DE DE112012000209T patent/DE112012000209T5/de active Pending
- 2012-11-06 CN CN201280029582.5A patent/CN103608774A/zh active Pending
- 2012-11-07 TW TW101141372A patent/TWI604410B/zh active
- 2012-11-07 TW TW101141357A patent/TWI502509B/zh not_active IP Right Cessation
- 2012-11-07 TW TW101141369A patent/TWI509561B/zh active
- 2012-11-07 TW TW101141361A patent/TWI483182B/zh not_active IP Right Cessation
- 2012-11-07 TW TW101141366A patent/TWI498817B/zh active
2018
- 2018-11-19 US US16/195,776 patent/US20190087164A1/en active Pending
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040205304A1 (en) * | 1997-08-29 | 2004-10-14 | Mckenney Paul E. | Memory allocator for a multiprocessor computer system |
| US20040162952A1 (en) * | 2003-02-13 | 2004-08-19 | Silicon Graphics, Inc. | Global pointers for scalable parallel applications |
| US20070079298A1 (en) * | 2005-09-30 | 2007-04-05 | Xinmin Tian | Thread-data affinity optimization using compiler |
| US20070076010A1 (en) * | 2005-09-30 | 2007-04-05 | Swamy Shankar N | Memory layout for re-ordering instructions using pointers |
| US20120042306A1 (en) * | 2010-08-11 | 2012-02-16 | International Business Machines Corporation | Compiling system and method for optimizing binary code |
| US20120113128A1 (en) * | 2010-11-10 | 2012-05-10 | Samsung Electronics Co., Ltd. | Computing apparatus and method using x-y stack memory |
| US20120254497A1 (en) * | 2011-03-29 | 2012-10-04 | Yang Ni | Method and apparatus to facilitate shared pointers in a heterogeneous platform |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160041816A1 (en) * | 2013-04-26 | 2016-02-11 | The Trustees Of Columbia University In The City Of New York | Systems and methods for mobile applications |
| US9766867B2 (en) * | 2013-04-26 | 2017-09-19 | The Trustees Of Columbia University In The City Of New York | Systems and methods for improving performance of mobile applications |
| CN104915180A (zh) * | 2014-03-10 | 2015-09-16 | Huawei Technologies Co., Ltd. | Method and device for data operation |
| US10061592B2 (en) | 2014-06-27 | 2018-08-28 | Samsung Electronics Co., Ltd. | Architecture and execution for efficient mixed precision computations in single instruction multiple data/thread (SIMD/T) devices |
| US10061591B2 (en) | 2014-06-27 | 2018-08-28 | Samsung Electronics Company, Ltd. | Redundancy elimination in single instruction multiple data/thread (SIMD/T) execution processing |
| CN104834532A (zh) * | 2015-06-03 | 2015-08-12 | Transwarp Technology (Shanghai) Co., Ltd. | Distributed data vectorization processing method and apparatus |
| US10684834B2 (en) * | 2016-10-31 | 2020-06-16 | Huawei Technologies Co., Ltd. | Method and apparatus for detecting inter-instruction data dependency |
| US10877757B2 (en) * | 2017-11-14 | 2020-12-29 | Nvidia Corporation | Binding constants at runtime for improved resource utilization |
| US11663044B2 (en) | 2020-10-22 | 2023-05-30 | Shanghai Biren Technology Co., Ltd | Apparatus and method for secondary offloads in graphics processing unit |
| US11748077B2 (en) | 2020-10-22 | 2023-09-05 | Shanghai Biren Technology Co., Ltd | Apparatus and method and computer program product for compiling code adapted for secondary offloads in graphics processing unit |
| US12236277B2 (en) | 2020-10-22 | 2025-02-25 | Shanghai Biren Technology Co., Ltd | Apparatus and method for secondary offloads in graphics processing unit |
| US12524212B2 (en) | 2022-04-15 | 2026-01-13 | Nvidia Corporation | Control of storage aliasing via automatic application of artificial dependences during program compilation |
Similar Documents
| Publication | Title |
|---|---|
| US20190087164A1 (en) | Technique for inter-procedural memory address space optimization in gpu computing compiler |
| US8570333B2 (en) | Method and system for enabling managed code-based application program to access graphics processing unit |
| TWI437491B (zh) | Computing device configured to translate an application program for execution by a general-purpose processor |
| CN110008009B (zh) | Binding constants at runtime to improve resource utilization |
| US9645802B2 (en) | Technique for grouping instructions into independent strands |
| US9424038B2 (en) | Compiler-controlled region scheduling for SIMD execution of threads |
| US9361079B2 (en) | Method for compiling a parallel thread execution program for general execution |
| US9229717B2 (en) | Register allocation for clustered multi-level register files |
| US20070261038A1 (en) | Code Translation and Pipeline Optimization |
| US8436862B2 (en) | Method and system for enabling managed code-based application program to access graphics processing unit |
| TWI489392B (zh) | Graphics processing unit shared by multiple application programs |
| US9367306B2 (en) | Method for transforming a multithreaded program for general execution |
| CN103870242A (zh) | System, method, and computer program product for optimizing management of thread stack memory |
| US20150145871A1 (en) | System, method, and computer program product to enable the yielding of threads in a graphics processing unit to transfer control to a host processor |
| US8539516B1 (en) | System and method for enabling interoperability between application programming interfaces |
| US8402229B1 (en) | System and method for enabling interoperability between application programming interfaces |
| CN103870247A (zh) | Technique for saving and restoring thread group operating state |
| CN111240745A (zh) | Enhanced scalar-vector dual-pipeline architecture with interleaved execution |
| CN121255475A (zh) | Memory management system and method, compilation method, device, medium, and product |
| WO2007131089A2 (en) | Code translation and pipeline optimization |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: NVIDIA CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KONG, XIANGYUN;WANG, JIAN-ZHONG;LIN, YUAN;AND OTHERS;SIGNING DATES FROM 20121023 TO 20121024;REEL/FRAME:029186/0426 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |