
MXPA98006707A - Information processing device and entertainment system - Google Patents

Information processing device and entertainment system

Info

Publication number
MXPA98006707A
MXPA/A/1998/006707A · MX9806707A · MXPA98006707A
Authority
MX
Mexico
Prior art keywords
main
storage medium
unit
sub
processing
Prior art date
Application number
MXPA/A/1998/006707A
Other languages
Spanish (es)
Inventor
Ohba Akio
Original Assignee
Sony Computer Entertainment KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment KK
Publication of MXPA98006707A publication Critical patent/MXPA98006707A/en

Abstract

Routine processing of routine data, non-routine processing of routine data and general non-routine processing are to be carried out efficiently. To this end, a main CPU 20 has a CPU core 21, which has a parallel computing mechanism, an instruction cache 22 and a data cache 23 as general cache units, and a special register memory (SPR) 24, which is a high-speed internal memory capable of direct memory access (DMA) appropriate for routine processing. A floating-point vector processor (VPE) 30 has a high-speed internal memory (VU-MEM) 34 capable of DMA transfer and is tightly connected to the main CPU to form a co-processor. A second VPE 40 likewise has a high-speed internal memory (VU-MEM) 44 capable of DMA transfer. A DMA controller (DMAC) 14 controls DMA transfer between the main memory 50 and the SPR 24, between the main memory 50 and the VU-MEM 34, and between the VU-MEM 44 and the SPR 24.

Description

"INFORMATION PROCESSING DEVICE AND ENTERTAINMENT SYSTEM" BACKGROUND OF THE INVENTION FIELD OF THE INVENTION This invention relates to an information processing device for efficiently performing routine and non-routine processing of routine data and general non-routine processing, and to an entertainment system, such as a home gaming machine, employing the information processing device.
DESCRIPTION OF THE RELATED ART In a computer system, such as a workstation or a personal computer, or in an entertainment system, such as a video game machine, attempts have been made to increase the speed of the CPU through the improvement of a cache memory system, the adoption of parallel calculation functions and the introduction of dedicated calculation systems, in order to cope with the increased processing volume and the increased data volume.
In particular, the improvement of the cache system and parallel calculation (so-called multimedia commands) have become prevalent in the personal computer. Although cache enhancement is statistically worthwhile for so-called general-purpose processing, such as non-routine processing, the conventional cache structure cannot be said to be efficient for the routine processing represented by, for example, the MPEG decoding carried out by means of parallel calculation commands, that is, DSP-type processing of large-capacity data. In DSP-type processing of large-capacity data, the streaming data is hardly ever accessed a second time, so a memory structure such as a cache, which gains speed only on the second access, cannot be said to be effective. The data that is accessed many times in such DSP processing is the temporary data of the internal parameters in the internal work area. A cache structure for data that is used only once in the main memory cannot be said to be efficient.
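The use-once character of such streaming data can be pictured with a short C sketch (the function and the quantization table are illustrative and not taken from the patent): each input word is read a single time, so a cache that only pays off on a second access buys nothing, while the small reused parameter table is the only data worth pinning in fast internal memory.

```c
#include <stddef.h>

/* DSP-style streaming pass: in[] and out[] flow through exactly once,
 * while the small table q[] (the "internal parameters") is reused on
 * every element and thus belongs in fast internal memory. */
void dequantize(float *out, const float *in, size_t n, const float q[64])
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * q[i & 63];  /* each in[i] is touched only once */
}
```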
Since the data format is fixed in this routine processing, an appropriate volume of data to be read into the cache can be set. However, since the volume of data read at a given time cannot be controlled by a program in the usual cache structure, the data transfer cannot be made more efficient. Likewise, if a dedicated calculation device for routine processing is used, there are occasions where the transfer of data to the dedicated computing device represents a processing bottleneck, even when the device itself is high in processing speed and efficient for routine processing. Even when the bottleneck in data transfer is eliminated by using direct memory access (DMA) or by providing a dedicated bus, the device is difficult to control from the main program and deficient in flexibility.
SUMMARY OF THE INVENTION Therefore, an object of the present invention is to provide an information processing device and an entertainment system for efficiently performing various processing operations, such as routine and non-routine processing of routine data and general non-routine processing. In one aspect, the present invention provides an information processing apparatus that includes a main processor unit including at least one parallel calculation processing means, a cache storage medium and an internal high-speed storage medium with direct memory access, a main storage medium, and a direct memory access control unit for controlling direct memory access transfer between the internal high-speed storage medium in the main processor unit and the main storage medium. The main processor unit, the main storage medium and the direct memory access control unit are interconnected through a main bus. Preferably, a floating-point vector processing unit including at least one vector processing means and an internal high-speed storage medium with direct memory access is provided on the main bus.
In another aspect, the present invention provides an information processing apparatus that includes a main processor unit including at least one parallel calculation processing means, a cache storage medium and an internal high-speed storage medium with direct memory access, a main storage medium, and a direct memory access control unit for controlling direct memory access transfer between the internal high-speed storage medium in the main processor unit and the main storage medium. The main processor unit, the main storage medium and the direct memory access control unit are interconnected through a main bus. Preferably, a floating-point vector processing unit that includes at least one vector processing means and an internal high-speed storage medium with direct memory access is provided on the main bus. In another aspect, the present invention provides an information processing apparatus that includes a main processor unit including at least one calculation processing means and a cache storage medium, a main storage medium, a floating-point vector processing unit that includes at least one vector processing means and an internal high-speed storage medium with direct memory access, and a direct memory access control unit for controlling direct memory access transfer between the internal high-speed storage medium in the vector processing unit and the main storage medium. The main processor unit, the main storage medium and the direct memory access control unit are interconnected through a main bus. Preferably, the floating-point vector processing unit is constituted of a first vector processor and a second vector processor, and the first vector processor is tightly connected to the main processor unit to form a co-processor.
In an entertainment system according to yet another aspect of the present invention, the above-described information processing apparatus serves as a main processor system to which a sub-processor system, comprising a sub-processor, a sub-storage medium and a sub-DMAC on a sub-bus, is connected through a sub-bus interface. To this sub-bus are connected a reproduction device for an external storage medium, such as a CD-ROM drive, and an operating means, such as a manual controller. In accordance with the present invention, since direct memory access transfer control is carried out by a DMA controller between the main storage medium and the internal high-speed storage medium of the main processor unit, which has a parallel calculation processing means and a cache storage medium in addition to that internal high-speed storage medium, routine processing, in particular the processing of integral routine data, can be carried out efficiently. By further providing a floating-point vector processing unit having at least one internal high-speed storage medium with direct memory access and a vector calculation processing means, the routine processing of routine data can be carried out efficiently. By providing two of these floating-point vector processing units and tightly connecting one of the vector processors to the main processor unit to be used as a co-processor, non-routine processing of routine data can be carried out efficiently, while routine processing of routine data is carried out efficiently by the remaining vector processor.
By having, in addition to the main processor unit with its usual cache storage medium effective for non-routine processing, a vector processor having a high-speed internal memory and a DMA data transfer mechanism suitable for routine processing of routine data, and a tightly connected vector co-processor having a high-speed internal memory and a DMA data transfer mechanism suitable for non-routine processing of routine data, high-efficiency processing is obtained for a variety of processing configurations. Further, by providing, in addition to the main processor unit having an internal high-speed memory medium with direct memory access appropriate for routine processing and a usual cache storage medium effective for non-routine processing, a vector processor which has a high-speed internal memory and a DMA data transfer mechanism appropriate for routine processing of routine data, and a mechanism for direct memory access transfer between the internal high-speed storage medium in the main processor unit and the high-speed internal memory in the vector processor, routine and non-routine processing of routine data and general non-routine processing can all be carried out efficiently.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a functional diagram showing a schematic structure of an embodiment of the present invention. Figure 2 is a functional diagram showing an example of the schematic structure of the entire circuit of a television gaming machine embodying the present invention. Figure 3 is a functional diagram showing an example of the integral routine processing operation in an embodiment of the present invention. Figure 4 is a functional diagram showing an example of a routine processing operation on routine data in an embodiment of the present invention. Figure 5 is a functional diagram showing an example of a non-routine processing operation on routine data in an embodiment of the present invention. Figure 6 shows an example of a DMA packet in an embodiment of the present invention. Figure 7 shows an example of a program using DMA packets in an embodiment of the present invention. Figure 8 is a flow chart illustrating an example of programming with DMA packets in an embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS With reference to the drawings, the preferred embodiments of the present invention will be explained in detail. Figure 1 is a functional diagram showing a structure of the system to which the data transfer method embodied by the present invention is applied. In Figure 1, a main CPU 20 operating as a main processor, two floating-point vector processors (VPE) 30 and 40, a main memory 50 and a direct memory access controller (DMAC) 14 are connected to a main bus 11. A floating-point unit (FPU) 27 as a first co-processor is provided in association with the main CPU 20. The floating-point vector processor (VPE) 30 mentioned above is tightly connected to the main CPU 20 as a second co-processor. An interrupt controller (INTC) 61, a timer 62, a serial interface (SIO) 63 and an image decoder (MDEC) 64 that functions as a decoder for so-called MPEG2 are also connected to the main bus 11. Also connected to the main bus 11 are a sub-bus interface (SBUSIF) 13 for data exchange with the sub-bus, as will be explained later, and a GPUIF 48 that functions as an interface for the graphics processor, as will be explained later. The main CPU 20 has a CPU core 21, which includes a single-instruction multiple-data (SIMD) mechanism as a parallel computing system and a usual cache mechanism effective for usual non-routine (general-purpose) processing, namely an instruction cache (I$) 22 and a data cache (D$) 23. In addition, the main CPU 20 includes an internal high-speed memory with direct memory access (a special register memory, SPR) 24 suitable for routine processing, which is connected through the bus interface unit (BIU) 25 to the main bus 11.
Tightly connected to this main CPU 20 are a high-speed floating-point unit (FPU) 27, having a floating-point multiplier/adder (FMAC) 28 and a floating-point divider (FDIV) 29, as a first co-processor, and a floating-point vector processor (VPE) 30 as a second co-processor. This floating-point vector processor (VPE) 30 has a micro-memory (Micro-MEM) 31, a floating-point multiplier/adder (FMAC) 32, a floating-point divider (FDIV) 33, an internal memory (VU-MEM) 34 and a packet engine (PKE) 35, and is connected through a first-in first-out (FIFO) memory 36 to the main bus 11. The second floating-point vector processor (VPE) 40 similarly includes a micro-memory (Micro-MEM) 41, a floating-point multiplier/adder (FMAC) 42, a floating-point divider (FDIV) 43, an internal memory (VU-MEM) 44 and a packet engine (PKE) 45, and connects to the main bus 11 through a FIFO memory 46. These floating-point vector processors (VPEs) 30 and 40 carry out matrix processing, coordinate transformation and perspective transformation at high speed. The floating-point multiplier/adders (FMACs) and the floating-point dividers (FDIVs), which function as the floating-point vector processor units (VUs) of the VPEs, operate in accordance with the micro-program stored in the micro-memory (Micro-MEM) to calculate the data in the internal registers and the internal memory (VU-MEM) at high speed. The packet engine (PKE) expands, in the memory (such as the Micro-MEM or the VU-MEM), the packed data or the VU microcode transferred by direct memory access (DMA), according to the code in the packet (the PKE code), as will be explained later.
The vector processor unit (VU) can be started through the PKE by the DMA packet (including commands and data) and can constitute a sequence of the VPE calculation processing program independently of the CPU. While the first VPE 30 is tightly connected to the main CPU 20 as a co-processor of the main CPU 20, as described above, the second VPE 40 has the function of sending its processing results to a graphics processor unit (GPU) 71 through the GPUIF 48 and therefore works as a pre-processor for the GPU 71.
The packet engines (PKEs) in the VPEs 30 and 40 are now explained. The PKE sets an internal register from the DMA data packet sent to the FIFO memory by direct memory access (DMA) in accordance with the PKE code, or expands (unpacks) the succeeding data, so as to expand or synthesize the number of data items indicated in the PKE code at an address specified by the immediate value in the PKE code. Likewise, the PKE has the function of transferring the VPE microcode to the micro-memory (Micro-MEM), and of transferring GPU drawing commands or image data directly to the GPUIF 48 without interposition of the internal memory (VU-MEM). The interrupt control circuit (INTC) 61 arbitrates interrupt requests from the multiple processors to send an interrupt to the main CPU 20. The DMA controller (DMAC) 14 intelligently distributes the data, subject to arbitration of the main bus, to the multiple processors that operate together and use the resources of the main memory. This transfer occurs between the peripheral processors, the main memory and the special register memory (SPR). Simultaneously, the bus access, inclusive of the main CPU, is subjected to arbitration.
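As a concrete, much simplified illustration of the unpack operation just described, the following C sketch expands a run of packed signed 8-bit values into 32-bit words of a VU-MEM array, at the address given by the immediate value. The function name and the data format are assumptions for the sketch, not the actual PKE encoding.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a PKE "unpack": expand `num` packed signed 8-bit values
 * into 32-bit words starting at VU-MEM address `imm` (the immediate
 * value of the hypothetical PKE code). */
void pke_unpack_s8(uint32_t *vu_mem, size_t imm,
                   const int8_t *packed, size_t num)
{
    for (size_t i = 0; i < num; i++)
        vu_mem[imm + i] = (uint32_t)(int32_t)packed[i]; /* sign-extend */
}
```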
That is, the special register memory (SPR) 24 in the main CPU 20 is a high-speed internal memory suitable for routine processing and DMA. As a DMA mechanism appropriate for routine processing of routine data, a data transfer mechanism is provided between it and the internal memory (VU-MEM) 44 in the VPE 40. The GPUIF 48 is an interface for communication between the CPU system connected to the main bus 11 of Figure 1 and the graphics processor unit (GPU). Two streams of data are sent to the GPU in parallel, namely a display list for routine processing through the VU (vector processor unit: FMAC 42 and FDIV 43) of the VPE 40, and a display list for exception processing generated by the main CPU and the co-processor and sent directly to the GPU through the FIFO memory 49. These two streams are arbitrated by the GPUIF 48 and dispatched in a time-divided manner to the GPU 71. An MDEC 64 has an image data expansion function comprising the so-called MPEG2 macroblock decoding function, an RGB conversion function, a vector quantization function and a bitstream expansion function. MPEG is an abbreviation of the Moving Picture Experts Group for moving-picture compression and coding of the International Organization for Standardization / International Electrotechnical Commission, Joint Technical Committee 1 / Sub-Committee 29 (ISO/IEC JTC1/SC29). The MPEG1 and MPEG2 standards are ISO 11172 and ISO 13818, respectively. The main memory 50 is formed, for example, of a dynamic random access memory (DRAM) and is connected to the main bus 11 via a DRAM controller (DRAMC) 51.
The sub-bus interface (SBUSIF) 13 has a FIFO and several registers and exchanges data with an external bus or sub-bus (SBUS). Figure 2 shows a schematic structure of the main CPU system of Figure 1, as applied to an entertainment system such as a television gaming machine for domestic use. Referring to Figure 2, the main bus 11 and the sub-bus 12 are interconnected through the SBUSIF 13. The configuration of the circuitry around the main bus 11 is as explained with reference to Figure 1; accordingly, the corresponding parts are denoted by the same reference numbers and are not specifically explained. To a GPU 71, connected to the GPUIF 48, a frame memory 72 is connected, which is provided with a cathode-ray-tube controller (CRTC) 73. Since a sub-DMAC 82 is connected to the sub-bus 12 while the DMAC 14 is connected to the main bus 11, the DMAC 14 is referred to as the main DMAC. Connected to the sub-bus 12 are a sub-CPU 80, a sub-memory, a sub-DMAC 82, a ROM 83 having a boot program and an operating system (OS) stored therein, a sound processing unit (SPU) 76, a communication control unit (ATM) 15, a CD-ROM drive 16, as the reproduction device for an external storage medium, and an input unit 85. The input unit 85 includes a connection terminal 87 for connection of an operating unit 86, a video input circuit 88 for receiving image data from other devices (not shown) and an audio input circuit 89 for receiving voice data from other devices, not illustrated. In the gaming machine, as shown in Figure 2, the main CPU 20 reads the boot program through the SBUSIF 13 from the ROM 83 connected to the sub-bus 12 and executes the boot program to run the OS.
The main CPU 20 also controls the CD-ROM drive 16 to read an application program and data from the CD-ROM or the like that is loaded on the CD-ROM drive 16, and stores the read data in the main memory 50. In addition, the main CPU 20 generates, together with the first vector processor (VPE) 30, the data for non-routine processing (polygon definition information and so on) from the data of a three-dimensional object constituted by multiple basic figures (polygons) that are read from the CD-ROM, that is, the coordinate values of apex points (representative points) of the polygons. This VPE 30 includes VU processing elements, such as the FMAC 32 or the FDIV 33, calculating the real-number part of a floating-point number, and performs the floating-point calculations in parallel. Specifically, the main CPU 20 and the first VPE 30 carry out the processing that requires delicate operations in units of polygons in geometry processing, such as the state of leaves flying in the wind or raindrops on the windshield of a car, and send the information defining the calculated polygons, such as the calculated apex-point information or shading-mode information, as packets to the main memory 50 through the main bus 11.
The polygon-defining information is made up of drawing-area information for the image and polygon information. The drawing-area information is constituted by the offset coordinates of the image drawing area in the frame buffer and the coordinates of a clipping area for cancelling the drawing if a polygon has coordinates outside the image drawing area. The polygon information is made up of polygon attribute information and apex-point information. The attribute information specifies the shading mode, the alpha-blending mode or the texture-mapping mode. The apex-point information includes, for example, the coordinates of the apex point in the image drawing area, the coordinates of the apex point in the texture area and the color of the apex point. Similar to the first VPE 30, the second VPE 40 performs floating-point calculations and is used to generate the data for the processing that generates an image by operation of the operating unit 86 and for matrix operations, specifically the data for simpler processing for which programming can be carried out in the VPE 40, such as polygon-defining information. For example, the second VPE 40 performs processing such as perspective transformation for a simpler object such as a building or a car, calculations for parallel light sources, or the generation of a two-dimensional curved surface. The generated polygon-defining information is sent through the GPUIF 48 to the GPU 71. The GPUIF 48 forwards the polygon-defining information supplied to it from the main memory 50 through the main bus 11 and the polygon-defining information supplied to it from the second VPE 40 to the GPU 71, subjecting both to arbitration in order to avoid conflicts.
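The polygon-defining information described above can be pictured as a set of C structures. The field names, widths and the three-vertex layout below are illustrative assumptions, not the patent's actual packet format.

```c
#include <stdint.h>

/* Drawing-area information: frame-buffer offset plus a clipping
 * rectangle outside which drawing is cancelled. */
typedef struct {
    int16_t offset_x, offset_y;    /* offset of the draw area in the frame buffer */
    int16_t clip_x0, clip_y0;      /* clipping rectangle: drawing is   */
    int16_t clip_x1, clip_y1;      /* cancelled outside this area      */
} draw_area_info;

/* Apex-point (vertex) information: draw-area coords, texture coords, color. */
typedef struct {
    int16_t  x, y;                 /* coordinates in the drawing area */
    int16_t  u, v;                 /* coordinates in the texture area */
    uint32_t rgba;                 /* color of the apex point         */
} vertex_info;

/* Polygon information: attributes (shading / alpha-blending /
 * texture-mapping mode) plus its apex points. */
typedef struct {
    uint32_t    attr;
    vertex_info vtx[3];            /* triangle assumed for illustration */
} polygon_info;
```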
The GPU 71 draws an image in the frame memory 72 based on the polygon-defining information supplied to it via the GPUIF 48. The GPU 71 can use the frame memory 72 as a texture memory and can map a pixel image in the frame memory 72 as a texture onto the polygon being drawn. The main DMAC 14 carries out control, such as DMA transfer, over the circuits connected to the main bus 11. In addition, the main DMAC 14 responds to the current state of the SBUSIF 13 to carry out control, such as DMA transfer, over the circuits connected to the sub-bus 12. The sub-CPU 80 performs various operations in accordance with the program stored in the ROM 83 and carries out control operations, such as DMA transfer, over the circuits connected to the sub-bus 12 only while the SBUSIF 13 disconnects the main bus 11 from the sub-bus 12. The sound processing unit (SPU) 76 responds to a sound command supplied to it from the sub-CPU 80 or the sub-DMAC 82 by reading voice data from the sound memory 77 and sending it out as the audio output. The communication control unit (ATM) 15 is connected to, for example, a public network and exchanges data through the network. Referring to Figure 3 ff., the routine processing operations in the present embodiment are explained. Figure 3 shows a data path for integral processing of integral routine data. In this figure, the integral routine data 52 is transferred by direct memory access (DMA) under the DMAC 14 to the special register memory (SPR) 24 in the main CPU 20. The transferred data is subjected to both routine processing and non-routine processing, using the single-instruction multiple-data (SIMD) commands of the parallel computing mechanism of the CPU core 21, with the SPR 24 as the work area. The processed data is then transferred by DMA to destination devices such as the main memory 50 or the GPU 71.
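The data path of Figure 3 — DMA in to the SPR, processing in place, DMA out — can be sketched as a chunked loop in C. The chunk size, the DMA stand-in functions and the scalar doubling are assumptions for the sketch; a real CPU core would use SIMD instructions and hardware burst transfers.

```c
#include <stdint.h>
#include <stddef.h>

#define CHUNK 64  /* elements per DMA burst (illustrative) */

/* Hypothetical DMA primitives standing in for the DMAC 14. */
static void dma_to_spr(int16_t *spr, const int16_t *main_mem, size_t n)
{
    for (size_t i = 0; i < n; i++) spr[i] = main_mem[i];   /* burst in  */
}
static void dma_from_spr(int16_t *main_mem, const int16_t *spr, size_t n)
{
    for (size_t i = 0; i < n; i++) main_mem[i] = spr[i];   /* burst out */
}

/* Process one chunk in the SPR work area; a scalar stand-in for the
 * SIMD (parallel) work the CPU core 21 would perform. */
static void process_chunk(int16_t *spr, size_t n)
{
    for (size_t i = 0; i < n; i++) spr[i] = (int16_t)(spr[i] * 2);
}

void routine_integral(int16_t *data, size_t total)
{
    int16_t spr[CHUNK];                     /* models the SPR 24 */
    for (size_t off = 0; off < total; off += CHUNK) {
        size_t n = (total - off < CHUNK) ? total - off : CHUNK;
        dma_to_spr(spr, data + off, n);     /* DMA in             */
        process_chunk(spr, n);              /* work in the SPR    */
        dma_from_spr(data + off, spr, n);   /* DMA out            */
    }
}
```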
By lengthening the burst length in this case, high-speed transfer becomes possible, thus allowing processing at a speed higher than is possible with the usual cache mechanism. Figures 4 and 5 show data paths for floating-point processing of integer and floating-point data. The routine data is first classified into data for routine processing and data for non-routine processing, and is assigned to a DMA channel for routine processing or a DMA channel for non-routine processing. In the DMA channel for routine processing of routine data, shown in Figure 4, routine data 53 in the main memory 50 is burst-transferred by the DMAC 14 to the high-speed internal memory (VU-MEM) 44 in the floating-point vector processor (VPE) 40 and expanded there through the packet engine (PKE) 45 as the data expansion mechanism. A routine processing program 54 in the main memory 50 is likewise transferred to the micro-memory (Micro-MEM) 41 for expansion. The data transferred to and expanded in the VU-MEM 44 is routinely processed in the VU-MEM 44 using the floating-point vector commands of the VPE 40. As for the micro-program, a program resident in the micro-memory (Micro-MEM), or a non-resident program burst-transferred from the main memory to the micro-memory (Micro-MEM) 41 in association with the data by means of DMA, is started by a program start command (Program Start) of a tag command in the transferred data. The data portion not suited to routine processing is transferred from the memory (VU-MEM) 44, using a DMA channel connected to the special register memory (SPR) 24 in the main CPU 20, for processing in the SPR 24 in cooperation with the main CPU 20 and the processors 27 and 30. In the DMA channel for non-routine processing of routine data, shown in Figure 5, routine data 55 in the main memory 50 is burst-transferred to the internal high-speed memory (VU-MEM) 34 and expanded there by the packet engine (PKE) 35 as the data expansion mechanism.
The data transferred to and expanded in the memory (VU-MEM) 34 is processed non-routinely in the memory (VU-MEM) 34, according to the micro-program of the Micro-MEM 31 started by the main CPU 20 or by the co-processor commands, using the floating-point vector commands of the VPE 30. In the present embodiment, the processed data is packed by the main CPU 20 and DMA-transferred through the SPR 24 to the GPU 71 or the main memory 50. Figure 6 shows an example of a DMA packet of data and program. Referring to Figure 6, a PKE command for the packet engine (PKE), the data expansion mechanism, is placed adjacent to the tag command (DMA command) for the DMAC 14 in this DMA packet, and is followed by a main portion of program or data. The PKE command is a command for data transfer or expansion to the PKE, or a program transfer or program start command. In the example shown in Figure 6, a PKE command a is an expansion command, a PKE command b is a data expansion command and a PKE command c is a program start command. The DMA transfer initiated by the main CPU 20 sends the packets, linked in accordance with the meta-commands in the packets, to the PKE in the VPE in succession. The PKE carries out, in accordance with the PKE commands in the packet, the expansion of the data in the packet into the internal high-speed memory (VU-MEM) in the VPE, the transfer of the program in the packet to the micro-memory (Micro-MEM), and the start of the VPE micro-program. Figure 7 shows an example of the programming of the VPE 40 using DMA packets. In Figure 7, the codes T1, T2, ... represent the transfer sequence of the DMA packets. In this sequence, DMA transfer is made to the micro-memory (Micro-MEM) 41 or to the memory (VU-MEM) 44.
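A DMA packet of the kind in Figure 6 — a DMA tag for the DMAC, a PKE code for the packet engine, then the program or data body — might be modelled as the following C layout. The field names and widths are assumptions for illustration only, not the hardware encoding.

```c
#include <stdint.h>

/* Illustrative layout only; field widths are assumptions. */
typedef struct {
    uint16_t qwc;      /* count of the data that follows the tag     */
    uint8_t  id;       /* meta-command: cnt / call / ret / ref / end */
    uint32_t addr;     /* address operand used by call and ref       */
} dma_tag;

typedef struct {
    uint16_t imm;      /* immediate value, e.g. an unpack address    */
    uint8_t  num;      /* number of data items to expand             */
    uint8_t  cmd;      /* PKE code: unpack, program transfer, start  */
} pke_code;

typedef struct {
    dma_tag  tag;      /* consumed by the DMAC 14                    */
    pke_code pke;      /* consumed by the PKE                        */
    uint32_t body[8];  /* main portion of program or data            */
} dma_packet;
```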
Of these PKE commands, the PKE commands a to e are a data expansion command for setting the matrix (Matrix), a data expansion command for setting the apex points (Vertex) of a polygon, a start command for a resident program, a program transfer command, and a start command for a non-resident program, respectively. As the meta-commands for the DMAC 14 in the DMA tag of the DMA packet, call, ret and ref are shown in the example of Figure 7. The call meta-command DMA-transfers the data following the tag, of a pre-specified number, then pushes the address adjacent to the next packet onto the DMA address stack and carries out the meta-command at the specified address. The ret meta-command DMA-transfers the data adjacent to the tag, of a specified number, then pops an address from the DMA address stack and carries out the meta-command at the popped address. The ref meta-command DMA-transfers the data at the address specified by the meta-command, of a specified number, and then carries out the meta-command adjacent to the next packet. In the example shown in Figure 7, the two data items adjacent to the meta-command are transferred before the data of the specified address is transferred. In the program of Figure 7, for example, since the DMA-tag meta-command at the start of the program is call, the tag command (ret) of T2 is carried out, and the T2 data (Vertex) is transferred and expanded after the end of the transfer and expansion of the T1 data (Matrix). After the execution of the resident program by means of the PKE command c, control proceeds to the tag command of T3, which lies adjacent to T1. In the following DMA packet, the program in the packet is transferred by the PKE command d, after which the transferred program (non-resident program) is started by the PKE command e.
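The call/ret chaining with an address stack can be mimicked by a small interpreter. The tag encoding, the stack depth and the fall-through behaviour of cnt and ref are assumptions made for this sketch, not the hardware's actual rules, and the data transfer itself is elided.

```c
#include <stddef.h>

/* Hypothetical meta-command ids for the DMA tag. */
enum { TAG_CNT, TAG_CALL, TAG_RET, TAG_REF, TAG_END };

typedef struct { int id; size_t qwc; size_t addr; } tag_t;

/* mem[i] holds the tag at "address" i. Records the order in which tags
 * are executed and returns how many were executed. */
int walk_chain(const tag_t *mem, size_t start, size_t *order, int max)
{
    size_t stack[16];               /* the DMA address stack */
    int sp = 0, n = 0;
    size_t pc = start;
    while (n < max) {
        tag_t t = mem[pc];
        order[n++] = pc;
        /* (transfer of t.qwc data items following the tag omitted) */
        if (t.id == TAG_END) break;
        if (t.id == TAG_CALL) { stack[sp++] = pc + 1; pc = t.addr; }
        else if (t.id == TAG_RET) { if (sp == 0) break; pc = stack[--sp]; }
        else pc = pc + 1;           /* cnt / ref: continue with next tag */
    }
    return n;
}
```

For example, a call tag that jumps to a ret tag resumes at the address pushed by the call, modelling the T1/T2 sequencing described above.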
Referring to Figure 8, an illustrative sequence for programming a routine data-flow-type processing program, in which the packets carrying data, programs and program start commands are connected by DMA meta-commands as shown in Figure 7, will now be explained. In the first step S1 of Figure 8, the programmer first decides the structure of the data that is to be processed routinely. In the next step S2, a routine processing program for the data whose format was determined in step S1 is formulated, using a usual programming language such as the C language executed by the CPU 20, and is verified as to its operation, that is, in order to verify whether the program works properly. In the next step S3, part of the C program is rewritten in an assembly language so as to become a program employing the CPU 20 and the floating-point vector co-processor (VPE) 30, in order to verify the program operation. The programmer then proceeds to step S4 to convert the data into DMA packets, so that the program is rewritten into a form using DMA transfer into the high-speed internal memory (SPR) 24, in order to verify the operation of the program. In the next step S5, the portion of the program whose routine data is to be transferred to and expanded in the VPE 30 is rewritten into commands of the data expansion mechanism (PKE), so as to again form DMA packets that are expanded into the internal high-speed memory (VU-MEM) of the VPE 30 using the PKE 35; the program is thus rewritten into a form processed by the micro-program of the VPE 30, for verification of the operation. In the next step S6, the program is rewritten into processing by the VPE 40, and the micro-program also becomes a DMA packet that is connected to the data packets by the DMA meta-commands to form a routine processing program of the data-flow type.
In step S7, the sequence of the data packets and the processing packets is controlled with DMA meta-commands to increase memory efficiency or to decrease the amount of data or non-resident program transferred, through so-called tuning. In the above-described embodiment of the present invention, efficient processing of a variety of processing configurations is achieved by providing a data bus, a data transfer system, a cache mechanism and a computing device appropriate for specific processing configurations, such as routine processing of routine data or general non-routine processing. In particular, a virtual reality modeling language (VRML) or a game, for which a large amount of data must be processed and all manner of flexible processing carried out, such as 3D graphics processing, can be handled efficiently and flexibly. For a vector processing device or a computing processor suited to routine processing, such as one of the SIMD type, a more appropriate burst data transfer amount than with the usual cache mechanism can be set, and wasteful cache misses can be avoided, by using DMA transfer to the special register memory or to an enclosed high-speed memory equipped with a data expansion function. A data bus is also provided for routine data and for non-routinely processed data, thereby obtaining flexible high-speed processing. The present invention is not limited to the above-described embodiments. For example, the present invention is applicable to a case where only a portion of the configuration, namely the FPU 27 and the VPEs 30 and 40, is connected to the main CPU 20. The present invention can also be applied to a variety of devices other than domestic television gaming machines.
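One reason DMA transfer into a scratchpad such as the SPR 24 can beat a conventional cache for routine data is that transfers overlap with computation. A simplified software illustration of that double-buffering idea (this simulates only the scheduling concept, not the hardware):

```python
# Simplified double-buffering: while the processor works on one buffer of
# routine data in fast internal memory, the DMA controller fills the other,
# hiding transfer latency. Pure software model of the scheduling idea.

def process_stream(chunks, work):
    results = []
    buffers = {}
    if chunks:
        buffers[0] = chunks[0]            # DMA fills the first buffer up front
    for i in range(len(chunks)):
        cur, nxt = i % 2, (i + 1) % 2
        if i + 1 < len(chunks):
            buffers[nxt] = chunks[i + 1]  # DMA prefetches the next chunk
        results.append(work(buffers[cur]))  # compute on the current buffer
    return results

print(process_stream([[1, 2], [3, 4], [5, 6]], sum))  # [3, 7, 11]
```

With a cache, each new chunk of streaming data would miss and evict useful lines; with explicit DMA into a known-size scratchpad, the transfer amount and timing are under program control, which matches the routine-data workloads described above.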

Claims (12)

CLAIMS:
1. An information processing apparatus comprising: a main processor unit including at least a parallel computing means, a cache storage medium, and a high-speed internal storage medium capable of direct memory access; a main storage medium; and a direct memory access control unit for controlling direct memory access transfer between the high-speed internal storage medium in the main processor unit and the main storage medium; the main processor unit, the main storage medium and the direct memory access control unit being interconnected over a main bus. 2. The information processing apparatus according to claim 1, wherein a floating-point vector processing unit including at least a vector processing means and a high-speed internal storage medium capable of direct memory access is provided on the main bus. 3. The information processing apparatus according to claim 1, wherein the high-speed internal storage medium capable of direct memory access in the floating-point vector processor can carry out direct memory access transfer to the high-speed internal storage medium in the main processor unit. 4. An information processing apparatus comprising: a main processor unit including at least a computing means and a cache storage medium; a main storage medium; a floating-point vector processing unit including at least a vector processing means and a high-speed internal storage medium capable of direct memory access; and a direct memory access control unit for controlling direct memory access transfer between the high-speed internal storage medium in the vector processor unit and the main storage medium; the main processor unit, the main storage medium and the direct memory access control unit being interconnected over the main bus. 5. The information processing apparatus according to claim 4, wherein the floating-point vector processing unit is tightly connected to the main processor unit to form a co-processor. 6.
The information processing apparatus according to claim 4, wherein the floating-point vector processing unit is constituted of a first vector processor and a second vector processor, and wherein the first vector processor is tightly connected to the main processor unit to form a co-processor. 7. The information processing apparatus according to claim 4, wherein the main processor unit has a high-speed internal storage medium whose direct memory access transfer to and from the main storage medium is controlled by the direct memory access control unit. 8. An information processing apparatus comprising: a main processor unit including at least a parallel computing means, a cache storage medium, and a high-speed internal storage medium capable of direct memory access; a main storage medium; a direct memory access control unit for controlling direct memory access transfer between the high-speed internal storage medium in the main processor unit and the main storage medium; a main bus to which the main processor unit, the main storage medium and the direct memory access control unit are connected; a sub-processor unit; a sub-storage medium; a direct memory access control sub-unit for carrying out direct memory access control between the sub-processor unit and the sub-storage medium; and a sub-bus to which the sub-processor unit, the sub-storage medium and the direct memory access control sub-unit are connected; the main bus and the sub-bus being interconnected over a bus interface. 9. The entertainment system according to claim 8, wherein an input unit having at least one connection terminal for connection to an actuator, and a reproduction means for an external recording medium, are connected to the sub-bus.
10. The information processing apparatus according to claim 8, wherein a floating-point vector processing unit having at least a vector processing means and a high-speed internal storage medium capable of direct memory access is connected to the main bus. 11. An information processing apparatus comprising: a main processor unit including at least a computing means and a cache storage medium; a main storage medium; a floating-point vector processing unit including at least a vector processing means and a high-speed internal storage medium capable of direct memory access; a direct memory access control unit for controlling direct memory access transfer between the high-speed internal storage medium in the vector processor unit and the main storage medium; a main bus to which the main processor unit, the main storage medium and the direct memory access control unit are connected; a sub-processor unit; a sub-storage medium; a direct memory access control sub-unit for carrying out direct memory access control between the sub-processor unit and the sub-storage medium; and a sub-bus to which the sub-processor unit, the sub-storage medium and the direct memory access control sub-unit are connected; the main bus and the sub-bus being interconnected over a bus interface. 12. The entertainment system according to claim 11, wherein an input unit having at least one connection terminal for connection to a drive device, and a reproduction medium for an external recording means, are connected to the sub-bus.
MXPA/A/1998/006707A 1997-08-22 1998-08-19 Information processing device and entertainment system MXPA98006707A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP9-226892 1997-08-22

Publications (1)

Publication Number Publication Date
MXPA98006707A true MXPA98006707A (en) 1999-09-20


Similar Documents

Publication Publication Date Title
US6427201B1 (en) Information processing apparatus for entertainment system utilizing DMA-controlled high-speed transfer and processing of routine data
AU730714B2 (en) Information processing apparatus and information processing method
US6219073B1 (en) Apparatus and method for information processing using list with embedded instructions for controlling data transfers between parallel processing units
KR100506959B1 (en) Image processing apparatus and image processing method
JP5020393B2 (en) Processing equipment
JPWO1997032248A1 (en) Image processing device and image processing method
KR100725331B1 (en) Image generating device
JPH0749961A (en) Graphic accelerator floating point processor and method for performing the floating point function
GB2201568A (en) Graphics processor
US7170512B2 (en) Index processor
JP2001312740A (en) Game system, display image forming method therefor and computer readable recording medium with program for game stored therein
JPH11102443A (en) Lighting unit for three-dimensional graphics accelerator with improved input color value handling
US20020052955A1 (en) Data processing system and method, computer program, and recording medium
MXPA98006707A (en) Information processing device and entertainment system
JP3468985B2 (en) Graphic drawing apparatus and graphic drawing method
JPH1173527A (en) Compression and decompression of three-dimensional geometric data representing regularly tiling surface parts of graphical objects
HK1021041A (en) Information processing apparatus and entertainment system
CN118521698A (en) A software-based method and device for ray tracing hardware division calculation
AU5012001A (en) Information processing apparatus and information processing method