CN105679276B - Technology for multipass rendering
- Publication number
- CN105679276B (Application CN201511022943.5A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- data
- multipass
- rendering
- graphics processing
- Prior art date
- Legal status: Active
Classifications
- G09G5/026: Control of mixing and/or overlay of colours in general
- G09G5/363: Graphics controllers
- G09G2340/12: Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
- G09G2370/10: Use of a protocol of communication by packets in interfaces along the display data pipeline
Abstract
Techniques for multipass rendering include receiving vertex data for one or more objects to be enhanced. The vertex data may be used to determine parameters in a display list. The parameters in the display list may be used to run multipass pixel rendering. An enhanced depiction of the one or more objects may be rendered based on the multipass pixel rendering. Other embodiments are described and claimed.
Description
This application is a divisional application of Chinese national application No. 201180075514.8, entitled "Technology for multipass rendering", which corresponds to PCT International Application No. PCT/US2011/064933 with an international filing date of December 14, 2011.
Background
3D technology plays an important role in the field of graphics development. 3D technology is implemented in mobile devices such as smartphones, desktops and netbooks. The performance and power consumption of 3D technology on a mobile device are usually tied to the user's visual experience and affect the competitive advantage of a product.
Many 3D games use special effects such as transparency, shadows and/or adaptive texture/skin to make the game more attractive to end users. However, an application running on a current graphics processing unit needs to send the same set of three-dimensional objects through the entire three-dimensional pipeline multiple times in order to create these special effects.
For example, to create a transparency effect, an application must first perform depth peeling to obtain the frame buffer for each depth layer, and then blend the layers according to depth value. During the depth peeling process, the application must run the same set of three-dimensional objects through the three-dimensional pipeline multiple times in order to obtain data from the different layers. Each run through the three-dimensional pipeline computes both the vertex stage (phase) and the pixel stage of the pipeline. However, nothing in the vertex stage changes between runs. As a result, the vertex-stage computation performed in these passes is repetitive and redundant. It is with respect to these and other considerations that the present improvements are needed.
Brief Description of the Drawings
Fig. 1 illustrates an embodiment of a system for multipass rendering.
Fig. 2 illustrates an embodiment of a logic flow for the system of Fig. 1.
Fig. 3 illustrates an embodiment of a graphics processing unit with a three-dimensional pipeline.
Fig. 4 illustrates an embodiment of depth rendering for an object during the pixel stage.
Fig. 5 illustrates an embodiment of the parameters used in the pixel stage.
Fig. 6 illustrates an embodiment of the communication between a multipass rendering application and a graphics driver.
Fig. 7 illustrates an embodiment of a centralized system for the system of Fig. 1.
Fig. 8 illustrates an embodiment of a computing architecture.
Fig. 9 illustrates an embodiment of a communications architecture.
Detailed description
Various embodiments are directed to multipass rendering. In one embodiment, multipass rendering may be performed without redundantly processing vertex data. In one embodiment, vertex data for one or more objects to be enhanced may be received. In one embodiment, parameters in a display list may be determined using the vertex data. The parameters in the display list may be used to run multipass pixel rendering. An enhanced depiction of the one or more objects may be rendered based on the multipass pixel rendering.
The rendering of three-dimensional effects can be improved by using separate vertex and pixel stages in the three-dimensional pipeline. By running the vertex stage a single time to create a display list, and then reusing that display list while the pixel stage is run multiple times, three-dimensional effects can be achieved with better performance and lower power consumption. As a result, the embodiments can improve the affordability, scalability, modularity, extensibility or interoperability for an operator, device or network.
Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate a description of the invention. The intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the claimed subject matter.
Fig. 1 illustrates a block diagram of a system 100. In one embodiment, the system 100 may comprise a computer-implemented system 100 having one or more software applications and/or components. Although the system 100 shown in Fig. 1 has a limited number of elements in a certain topology, it may be appreciated that the system 100 may include more or fewer elements in alternate topologies as desired for a given implementation.
The system 100 may include a multipass rendering application 120. In one embodiment, the multipass rendering application 120 may run on a graphics processing unit. In one embodiment, the multipass rendering application 120 may run through a three-dimensional pipeline to create three-dimensional special effects. For example, the multipass rendering application 120 may create special effects such as, but not limited to, transparency, shadows, adaptive textures and/or adaptive skins.
In an embodiment, the system 100 may improve the performance of rendering three-dimensional effects by having a graphics application programming interface 118 and a graphics driver 121 within the multipass rendering application 120.
In one embodiment, the graphics driver 121 may be a three-dimensional driver. The graphics driver 121 may work together with a graphics processing unit to process the three-dimensional pipeline as two separate stages. In one embodiment, the three-dimensional pipeline may include a vertex stage 122 and a pixel stage 124. In one embodiment, the graphics driver 121 may run the vertex stage 122. The vertex stage 122 may be processed, and the graphics driver 121 may generate an interrupt. The graphics driver 121 may store the results of the vertex stage 122 in a display list. By storing the results in the display list, the pixel stage 124 may later use the display list for pixel processing.
In one embodiment, the graphics driver 121 may run the pixel stage 124 through the three-dimensional pipeline multiple passes in order to create the desired special effect. By separating the vertex stage 122 from the pixel stage 124, the vertex stage can be run a single time and the results stored. The stored results can be used by the pixel stage 124 during the multiple pixel passes. As a result, power is saved because the vertex stage 122 does not have to be rerun each time the pixel stage 124 is run through the three-dimensional pipeline.
In one embodiment, the vertex stage 122 may receive vertex data based on one or more objects. In an embodiment, the vertex data 110 may be the input data 110 to the multipass rendering application 120. In one embodiment, the vertex data 110 may be data from one or more objects to which a special effect is to be applied. The vertex stage 122 may run the vertex data 110 from the objects through a vertex pipeline to process the data. The vertex stage 122 may determine primitive data. In one embodiment, the primitive data may include one or more of transformation, lighting, color and position data.
In one embodiment, the vertex stage 122 may store the primitive data in a display list. In one embodiment, the display list may include multiple parameters. In one embodiment, the parameters of the display list may include the primitive data determined by the vertex stage using the vertex data. In one embodiment, the parameters of the display list may include pointers to command data buffers. For example, the parameters of the display list may include a pointer to a texture buffer, a pointer to a pixel shader buffer and/or a pointer to a depth/render buffer. In one embodiment, the depth/render buffer may be two separate buffers holding depth and render information, respectively. In one embodiment, the depth buffer may include depth information. The depth information may be used to reflect the distance of an object. In one embodiment, the render buffer may include the rendering result. In one embodiment, the render buffer may be referred to as a frame buffer.
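The patent does not specify a concrete layout for the display list. The following C++ sketch is one hypothetical way to organize the parameters described above (primitive data plus pointers to the texture, pixel shader and depth/render buffers); the names and types are illustrative assumptions, not part of the disclosure.

```cpp
#include <vector>

// Hypothetical per-vertex primitive data produced by the vertex stage
// (transformation, lighting, color and position results).
struct Primitive {
    float position[4];   // post-transform position
    float color[4];      // lit vertex color
    float texcoord[2];   // texture coordinates mapped onto the object
};

// Hypothetical display list written once by the vertex stage and
// reused by every pixel pass.
struct DisplayList {
    std::vector<Primitive> primitives;   // primitive data

    // Command-buffer pointers consumed by the pixel stage; the driver may
    // retarget these between passes without rerunning the vertex stage.
    void* textureBuffer;      // e.g. texture image input
    void* pixelShaderBuffer;  // e.g. pixel shader code
    void* depthBuffer;        // depth information per layer
    void* renderBuffer;       // render (frame) buffer output
};
```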
In one embodiment, when the vertex stage 122 finishes processing, the graphics driver 121 may start the pixel stage 124 using the parameters in the display list generated by the vertex stage 122. In one embodiment, the pixel stage 124 may be independent of the vertex stage 122. In other words, the pixel stage 124 may be run multiple times without rerunning the vertex stage 122. In one embodiment, the pixel stage 124 may be used to run multipass pixel rendering using the display list. In one embodiment, a first pixel rendering pass may be run to obtain the depth/render or frame buffer of the nearest depth layer. In one embodiment, each subsequent pixel rendering pass obtains the frame buffer of the next-nearest depth layer. In one embodiment, a last pixel rendering pass may be run to obtain the frame buffer of the farthest depth layer.
In one embodiment, after the pixel stage 124 runs the multipass pixel rendering and reaches the farthest layer via depth peeling, the pixel stage 124 may render an enhanced depiction of the one or more objects to be enhanced. In one embodiment, the enhanced depiction of the one or more objects may be the output 130 of the multipass rendering application 120. The output 130 may include a rendering of the one or more objects with the special effect. For example, after depth peeling, the pixel stage 124 of the multipass rendering application 120 may blend the depth/render (i.e., frame) buffers from the farthest layer to the nearest layer to obtain a transparency effect for the one or more objects.
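As an illustration of the transparency flow just described, the sketch below peels one depth layer per pixel pass, reusing the display list each time, and then blends the captured layers from farthest to nearest. It is a minimal sketch built on the hypothetical DisplayList type from the preceding sketch; runVertexStage, runPixelPass, layerIsEmpty, blendUnder and the LayerBuffers type are assumed helpers, not a driver interface defined by the patent.

```cpp
#include <vector>

// DisplayList and Primitive as defined in the preceding display-list sketch.
struct LayerBuffers {
    // Hypothetical depth + render (frame) buffer pair for one peeled layer.
};

// Assumed helpers, not defined by the patent.
DisplayList  runVertexStage(const void* vertexData);
LayerBuffers runPixelPass(const DisplayList& dl, const LayerBuffers* previousLayer);
bool         layerIsEmpty(const LayerBuffers& layer);
void         blendUnder(LayerBuffers& dst, const LayerBuffers& src);

LayerBuffers renderTransparent(const void* vertexData) {
    // Vertex stage runs exactly once; its results live in the display list.
    DisplayList dl = runVertexStage(vertexData);

    // Depth peeling: each pixel pass strips the nearest remaining layer.
    std::vector<LayerBuffers> layers;
    const LayerBuffers* prev = nullptr;
    for (;;) {
        LayerBuffers layer = runPixelPass(dl, prev);
        if (layerIsEmpty(layer)) break;      // farthest layer already reached
        layers.push_back(layer);
        prev = &layers.back();
    }

    // Blend farthest-to-nearest to produce the transparent result.
    LayerBuffers result{};
    for (auto it = layers.rbegin(); it != layers.rend(); ++it)
        blendUnder(result, *it);
    return result;
}
```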
Included herein is a set of flow charts representative of exemplary methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, for example in the form of a flow chart or flow diagram, are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with the invention, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.
Fig. 2 illustrates one embodiment of a logic flow 200. The logic flow 200 may be representative of some or all of the operations executed by one or more embodiments described herein.
In the embodiment shown in Fig. 2, the logic flow 200 may receive vertex data for one or more objects to be enhanced at block 202. For example, the vertex data may be received during a first stage of a three-dimensional pipeline. In one embodiment, the three-dimensional pipeline may have two stages. In one embodiment, the first stage may include a vertex stage. The vertex stage may receive the vertex data for the one or more objects to be enhanced. For example, a user may want an object or group of objects in a scene to appear transparent. As a result, the vertex data associated with that object or group of objects in the scene may be received during the vertex stage of the three-dimensional pipeline.
The logic flow 200 may determine a display list using the vertex data during a single run of the first stage at block 204. For example, the vertex data may be processed during the vertex stage. In one embodiment, the vertex data may be processed and/or compiled to determine position, color and other information about the vertex data. The embodiments are not limited to this example.
In one embodiment, the vertex stage may create a display list based on the processed vertex data. The display list may include one or more parameters. In one embodiment, the display list parameters may include primitive data. In one embodiment, the display list parameters may include a command buffer. The command buffer may include control flow information. The command buffer may include pointers to each of the buffers associated with the second, i.e., pixel, stage. In one embodiment, the pointers to the buffers may be used during the pixel stage. In one embodiment, the command buffer may include, but is not limited to, a pointer to a texture buffer, a pointer to a pixel shader buffer and a pointer to a depth/render buffer. In one embodiment, the command buffer settings made during the vertex stage may be changed before the pixel stage. In one embodiment, the command buffers set during the vertex stage may be default texture, pixel shader and/or depth/render buffers. In one embodiment, a user may determine that a particular buffer should be used, and the parameters may be redefined so that a pointer points to that particular buffer. For example, if after the vertex stage has run the user specifies a particular texture buffer, the pixel stage may use that particular texture buffer instead of the default texture buffer pointed to by the display list. In one embodiment, the pointer to the default texture buffer in the display list may be replaced with a pointer to the user-selected texture buffer. Because the vertex stage and the pixel stage are separated in the three-dimensional pipeline, a user may select one or more buffers after the vertex stage has run but before the pixel stage runs for the first time.
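To make the buffer-override step concrete, the sketch below shows how the default texture-buffer pointer in the hypothetical DisplayList from the earlier sketch might be swapped for a user-selected buffer after the vertex stage has run; setUserTextureBuffer is an illustrative name, not an API defined by the patent.

```cpp
// Sketch: replace the default texture-buffer pointer recorded during the
// vertex stage with a buffer chosen by the user before the first pixel pass.
void setUserTextureBuffer(DisplayList& dl, void* userTextureBuffer) {
    if (userTextureBuffer != nullptr) {
        dl.textureBuffer = userTextureBuffer;  // pixel stage will now read from it
    }
    // Pointers to the pixel shader and depth/render buffers could be
    // overridden the same way before the pixel stage starts.
}
```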
The logic flow 200 may run multiple passes of second-stage pixel rendering using the display list at block 206. For example, the pixel stage may be run multiple times in order to achieve the desired special effect. Each time the pixel stage is run, the display list may be used without having to rerun the vertex stage. The pointers in the display list may be updated so that the pixel stage can use the information in the display list parameters without rerunning the vertex stage.
For example, the pixel stage may be run multiple times, removing one depth layer from the object each time. The pixel stage may continue to run until the pixel stage determines that the last depth layer has been removed from the image. The embodiments are not limited to this example. The logic flow 200 may render an enhanced depiction of the one or more objects based on the multipass second-stage pixel rendering at block 208. For example, by running the vertex stage a single time and running the pixel stage multiple times using the display list generated by the vertex stage, an enhanced depiction of the one or more objects may be rendered. In one embodiment, a three-dimensional special effect may be rendered. For example, textures may be depicted on the one or more objects, objects may be shown as partially or fully transparent, and/or objects may be shown with shadows. The embodiments are not limited to this example.
For example, a building may be the object to be enhanced in a scene. A user may want the building object to appear transparent. Vertex data may be determined for the building object. The vertex data may be received by the graphics processing unit. The vertex data may be received during the vertex stage of the three-dimensional pipeline. The vertex data may be compiled and processed into primitive data. During a single run of the vertex stage, a display list may be determined. The display list may include parameters such as, but not limited to, the primitive data and a control buffer.
The graphics processing unit may determine whether the pixel stage is to be run. The graphics driver may wait to run the pixel stage until a command is received. In one embodiment, a command from the user may be received to process the primitive data using a particular buffer. The graphics processing unit may redefine and/or update the parameters in the display list based on that particular buffer. For example, the graphics processing unit may update one or more pointers in the command buffer so that a pointer points to a particular pixel shader buffer selected by the user. This allows the user-specified pixel shader buffer to be used during the pixel stage.
In one embodiment, multiple passes of second-stage pixel rendering may be run. In one embodiment, a first pixel rendering pass may remove the first layer from the building object. In one embodiment, a second pixel rendering pass may remove the second layer from the building object. Subsequent second-stage pixel rendering passes may be run until the last layer of the building object has been determined.
For example, an enhanced transparent image of the building object may be rendered. When the multiple pixel rendering passes have been run on the building object, multiple depth/render frame buffers may have been determined for the multiple depth-peeled layers. To render the transparent object, the graphics processing unit may blend the layers according to depth value using the frame buffers from the farthest layer to the nearest layer. The transparent image of the building object may be rendered. In one embodiment, the transparent building object may be rendered on the display of a mobile device. The embodiments are not limited to this example.
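Continuing the building example, the snippet below sketches how an application might drive the functions from the earlier sketches to obtain the transparent building; the buildingVertexData variable and presentToDisplay helper are illustrative assumptions rather than parts of the disclosure.

```cpp
// Hypothetical application-side usage of the earlier sketches
// (renderTransparent and LayerBuffers are defined there).
extern const void* buildingVertexData;          // vertex data for the building object
void presentToDisplay(const LayerBuffers& fb);  // assumed presentation helper

void drawTransparentBuilding() {
    // One vertex-stage run plus as many pixel passes as there are depth
    // layers, blended back-to-front inside renderTransparent().
    LayerBuffers transparentBuilding = renderTransparent(buildingVertexData);
    presentToDisplay(transparentBuilding);
}
```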
Fig. 3 illustrates an embodiment of a graphics processing unit with a three-dimensional pipeline 300. In one embodiment, the graphics processing unit 302 may comprise a PowerVR graphics processing unit. In one embodiment, as described with reference to the multipass rendering application 120, the graphics processing unit 302 with the three-dimensional pipeline 300 divides the three-dimensional pipeline into a vertex stage 322 and a pixel stage 326. In one embodiment, the graphics processing unit 302 may use a vertex pipeline to process the vertex stage 322. The graphics processing unit 302 may process the vertex stage 322 and then generate an interrupt to the graphics driver 310. The graphics driver 310 may run the vertex stage 322. The graphics driver 310 may receive the interrupt and store the results of the vertex stage 322 in an output buffer (such as, but not limited to, a display list 315). By storing the results in the display list 315, the pixel stage 326 may later use the display list 315 for pixel processing. In one embodiment, the graphics driver 310 may run the pixel stage 326.
In one embodiment, the display list 315 may include the information needed for pixel processing. In one embodiment, the display list 315 may include one or more parameters. In one embodiment, the parameters in the display list 315 may include primitive data 330. In one embodiment, the primitive data 330 may include the vertex data processed by the vertex stage 322. As described above, the primitive data 330 includes one or more of transformation, lighting, color and position data.
In one embodiment, the parameters in the display list 315 may include a command buffer. In one embodiment, the command buffer may include control flow information. In one embodiment, the command buffer may include pointers to each buffer needed for pixel processing in the pixel stage 326. For example, the command buffer may include a pointer to a texture buffer 335. The texture buffer may include a texture image to be rendered for one or more objects in the scene. In one embodiment, texture coordinate information may be basic per-vertex attribute data. In one embodiment, the texture coordinate information may be used to determine how the texture image is mapped onto a three-dimensional object. The information in the texture buffer 335 and the primitive data 330 may be the input for processing during the pixel stage 326.
In one embodiment, a pointer to a pixel shader buffer 340 may be included in the display list 315. The pixel shader buffer 340 may include information for processing the input during the pixel stage 326. In particular, the pixel shader buffer 340 may include information for processing the information in the texture buffer 335 and the primitive data 330. In one embodiment, the pixel shader buffer 340 may include programming code. In one embodiment, the code stored in the pixel shader buffer 340 may be loaded by the graphics processing unit 302 during the pixel stage 326.
In one embodiment, a pointer to a depth/render buffer 345 may be included in the display list 315. In one embodiment, the depth/render buffer 345 may be two separate buffers holding depth and render information, respectively. In one embodiment, the depth buffer may include depth information. The depth information may be used to reflect the distance of an object. In one embodiment, the render buffer may include the rendering result. In one embodiment, the depth/render buffer 345 may include the output information produced after the pixel shader buffer 340 has processed the primitive data 330 and the texture buffer 335. In one embodiment, the depth/render buffer 345 may store the pixels of each depth layer as the pixel stage 326 runs and peels away the nearest layer of pixels on each pass.
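The patent does not show how the driver populates display list 315 when the vertex-stage interrupt arrives; the following is a speculative sketch of such a handler, reusing the hypothetical DisplayList and Primitive types from the earlier sketch and assuming the interrupt delivers the vertex-stage output together with the default buffer locations.

```cpp
#include <utility>
#include <vector>

// DisplayList and Primitive as defined in the earlier display-list sketch.

// Hypothetical payload delivered with the vertex-stage-complete interrupt.
struct VertexStageResult {
    std::vector<Primitive> primitives;  // transformed/lit primitive data
    void* defaultTextureBuffer;         // default buffers the driver set up earlier
    void* defaultPixelShaderBuffer;
    void* defaultDepthBuffer;
    void* defaultRenderBuffer;
};

// Sketch of the driver-side handler: store the results in display list 315
// so the pixel stage 326 can run any number of passes without the vertex stage.
void onVertexStageInterrupt(DisplayList& displayList, VertexStageResult&& result) {
    displayList.primitives        = std::move(result.primitives);
    displayList.textureBuffer     = result.defaultTextureBuffer;
    displayList.pixelShaderBuffer = result.defaultPixelShaderBuffer;
    displayList.depthBuffer       = result.defaultDepthBuffer;
    displayList.renderBuffer      = result.defaultRenderBuffer;
    // The application may later override any of these pointers (e.g. with a
    // user-selected texture buffer) before the pixel stage starts.
}
```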
Fig. 4 illustrates an embodiment of depth rendering for an object during the pixel stage. In one embodiment, pixel processing may include depth peeling of each layer of one or more objects. For example, the object on which depth peeling is to be performed may be a circle with lines. The object may be run through the vertex stage to create a display list with multiple parameters based on the circle-with-lines object. The circle and lines object may be run through the first stage in the three-dimensional pipeline. The first stage may be the vertex stage. After the vertex stage completes, the circle and lines object is ready for the second stage in the three-dimensional pipeline. The second stage may be the pixel stage for pixel processing. The pixel stage may include multiple runs, each pixel rendering pass using the parameters from the display list.
For example, a first pixel rendering pass may obtain the depth/render or frame buffer of the nearest depth layer. As shown in Fig. 4, the first layer (layer 0) 405 may include the first pass through the pixel stage. In the first depth-peeled layer 405, the nearest layer may be removed.
The multipass rendering application 120 may determine that there are more layers for the circle and lines object. As a result, the multipass rendering application 120 may update the pointers to the buffers in the display list and rerun the pixel stage on the circle and lines object. Because the display list can be used for subsequent pixel passes, the vertex stage does not have to be rerun. Thus, the vertex stage may be run a single time, and the pixel stage may be rerun to remove each depth layer.
A second pass through the pixel stage may determine the second layer (layer 1) 410 of the circle and lines object. The pixel stage may determine the next-nearest layer by using the parameters from the first layer 405 and removing the pixels from the first layer 405. The multipass rendering application 120 may remove pixels from the first layer 405 to obtain the next-nearest layer 410. The next-nearest layer may be the second layer 410. One second-stage pixel rendering pass may be run to obtain the frame buffer of the second depth layer 410.
A third pass through the pixel stage may determine the third and last layer (layer 2) 415. Because the first and second layers were removed during the first two passes, the nearest remaining layer is the third layer 415. The pixel stage may determine the next-nearest layer by using the parameters from the second layer 410 and removing the pixels from the first layer 405 and the second layer 410. The multipass rendering application 120 may remove pixels from the first layer 405 and the second layer 410 to obtain the next-nearest layer 415. One pixel rendering pass may be run to obtain the frame buffer of the farthest depth layer 415. The pixel stage may determine that the farthest layer 415 has been reached by running one additional pixel pass and determining that there are no further depth layers. In one embodiment, when the last pixel pass is run, the resulting depth/render buffer may be identical to the previous depth/render buffer. In one embodiment, when the last pixel pass is run, there may be no pixels in the depth/render buffer. In one embodiment, when the last pixel pass is run, no further layers exist because there is no greater depth value to replace the existing value in the depth buffer, and thus no rendered pixels are stored in the render buffer.
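The termination condition described above (an extra pass that either leaves the depth/render buffer unchanged or writes no pixels) could be implemented as in the sketch below; countWrittenPixels and buffersIdentical are hypothetical queries layered on the LayerBuffers type from the earlier sketch, not functions defined by the patent.

```cpp
#include <cstddef>

// LayerBuffers as defined in the earlier depth-peeling sketch.
// Hypothetical queries on a pass's output; a real driver might use an
// occlusion query or a buffer comparison instead.
std::size_t countWrittenPixels(const LayerBuffers& layer);
bool        buffersIdentical(const LayerBuffers& a, const LayerBuffers& b);

// Returns true when the extra pass produced no new layer, i.e. the farthest
// depth layer was already peeled in the previous pass.
bool reachedFarthestLayer(const LayerBuffers& currentPass,
                          const LayerBuffers& previousPass) {
    if (countWrittenPixels(currentPass) == 0)
        return true;                                      // nothing behind the last layer
    return buffersIdentical(currentPass, previousPass);   // no deeper values were found
}
```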
Fig. 5 illustrates an embodiment of the parameters used in the pixel stage. As shown in Fig. 5, the primitive data 520 and the texture buffer 525 may be inputs. The pixel shader buffer 530 may provide the code for processing the inputs. The depth/render buffer 535 may provide the output. For example, the first run 505 of the pixel stage may use the primitive data 540 as input. The first run 505 of the pixel stage may not use any texture data from the texture buffer 525 because there is no previous layer to compare against. During the first run 505 of the pixel stage, the pixel shader buffer information 545 may process the primitive data, and the nearest layer of pixels may be placed as output in the depth/render buffer 550.
Before the second run 510 of the pixel stage, the buffers may be updated. The output data from the depth/render buffer 550 may be placed in the texture buffer 560. The pixel data from the texture buffer 560 may then be used, together with the primitive data 540, as the input for the second run 510 of the pixel stage. The second run 510 may use the data from the pixel shader buffer 565 to process the pixel data from the texture buffer 560 and the primitive data 540. In one embodiment, the pixel shader buffer 565 may compare the primitive data 540 with the pixel data from the texture buffer 560 to determine the next layer. The processing result may be pixel data, which may be placed in the depth/render buffer 570.
Before the third run 515 of the pixel stage, the buffers may be updated. The output data from the depth/render buffer 570 may be placed in the texture buffer 580. The pixel data from the texture buffer 580 may then be used, together with the primitive data 540, as the input for the third run 515 of the pixel stage. The third run 515 may use the data from the pixel shader buffer 585 to process the pixel data from the texture buffer 580 and the primitive data 540. In one embodiment, the pixel shader buffer 585 may compare the primitive data 540 with the pixel data from the texture buffer 580 to determine the next layer. The processing result may be pixel data, which may be placed in the depth/render buffer 590. Because the pixel data in the depth/render buffer 590 may be from the final or last layer of the object, the depth peeling of the pixel stage may be complete. The embodiments are not limited to this example.
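A recurring step in Fig. 5 is that the depth/render output of one pass becomes the texture input of the next. The sketch below captures that rotation inside a single pass loop; runPixelShader, allocateDepthRenderBuffer and the member names are assumptions layered on the earlier hypothetical DisplayList, not an interface taken from the patent.

```cpp
// DisplayList as defined in the earlier display-list sketch.
// Assumed helpers, not part of the patent.
void  runPixelShader(const DisplayList& dl);   // executes the code in dl.pixelShaderBuffer
void* allocateDepthRenderBuffer();             // fresh depth/render target for one pass

void runPixelPasses(DisplayList& dl, int passCount) {
    dl.textureBuffer = nullptr;   // first pass: no previous layer to compare against
    for (int pass = 0; pass < passCount; ++pass) {
        dl.renderBuffer = allocateDepthRenderBuffer();  // output of this pass
        runPixelShader(dl);                             // writes the next-nearest layer

        // Rotate: this pass's depth/render output becomes the texture input of
        // the next pass (e.g. 550 -> 560 and 570 -> 580 in Fig. 5).
        dl.textureBuffer = dl.renderBuffer;
    }
}
```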
Fig. 6 illustrates an embodiment of the communication between a multipass rendering application and a graphics driver. In one embodiment, the multipass rendering application 620 may receive a command to open an application 625. In one embodiment, the command may be for setting up a scene. In one embodiment, one or more objects in the scene may be rendered with a special effect.
After the multipass rendering application 620 receives the command to open the application, the multipass rendering application 620 may send information to the graphics driver 610. For example, the information sent to the graphics driver 610 may include vertex data for determining the three-dimensional primitives of one or more objects in the scene. The graphics driver 610 may generate one or more command buffers and store the pointers for the pixel stage 655. For example, the graphics driver 610 may prepare the command buffer. For example, the graphics driver 610 may record in the command buffer where the texture buffer, the pixel shader buffer and the depth/render buffer are to be used. In one embodiment, the graphics driver 610 may store pointers to the texture buffer, the pixel shader buffer and the depth/render buffer.
The multipass rendering application 620 may start the vertex stage 635. The multipass rendering application 620 may send information to the graphics driver 610 so that the graphics driver 610 can start the vertex stage to determine the display list 640. The graphics driver 610 may stop processing after the vertex stage completes. In one embodiment, the graphics driver 610 may wait for a command from the multipass rendering application 620 before starting the pixel stage. In one embodiment, the graphics driver 610 may receive, from the multipass rendering application 620, the input to be used in the pixel stage.
In one embodiment, the multipass rendering application 620 may set the texture buffer, the pixel shader buffer and the depth/render buffer 645. In one embodiment, a command may be received to set the command buffer. In one embodiment, the buffers may be determined via user input to the multipass rendering application 620. For example, after the vertex stage, a user may determine that a particular texture buffer is to be used during the pixel stage. For example, a user may determine that a particular pixel shader buffer is to be used during the pixel stage. The embodiments are not limited to this example.
Based on the settings of the multipass rendering application 620, the graphics driver 610 may replace the texture buffer, pixel shader buffer and/or depth/render buffer pointers in the command buffer of the display list 650.
The multipass rendering application 620 may start the pixel stage 655 by communicating with the graphics driver 610 in order to run a pixel pass 660. After each pixel pass 660, the pointers in the command buffer may be replaced 650. The multipass rendering application 620 may determine whether this is the last pass 665. If the graphics driver 610 can determine a new depth layer, the graphics driver 610 may run another pixel pass 660. When the pixel stage ends and the last pass 665 has run 660, the multipass rendering application 620 may then command the graphics driver 610 to use the results of the previous passes 675 to generate the final scene. The graphics driver 610 may display the rendering result 680. The rendering result 680 may include the scene with the three-dimensional objects.
Fig. 7 illustrates a block diagram of a centralized system 700. The centralized system 700 may implement some or all of the structure and/or operations of the system 100 in a single computing entity, such as entirely within a single computing device 720.
In one embodiment, the computing device 720 may be a mobile device. A mobile device may include, but is not limited to, a computer, server, workstation, notebook computer, handheld computer, telephone, cellular telephone, personal digital assistant (PDA), combined cellular telephone and PDA, and so forth.
The computing device 720 may execute processing operations or logic for the system 100 using a processing component 730. The processing component 730 may comprise various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.
The computing device 720 may execute communications operations or logic for the system 100 using a communications component 740. The communications component 740 may implement any well-known communications techniques and protocols, such as techniques suitable for use with packet-switched networks (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), circuit-switched networks (e.g., the public switched telephone network), or a combination of packet-switched networks and circuit-switched networks (with suitable gateways and translators). The communications component 740 may include various types of standard communication elements, such as one or more communications interfaces, network interfaces, network interface cards (NIC), radios, wireless transmitters/receivers (transceivers), wired and/or wireless communication media, physical connectors, and so forth. By way of example, and not limitation, communication media include wired communications media and wireless communications media. Examples of wired communications media may include a wire, cable, metal leads, printed circuit boards (PCB), backplanes, switch fabrics, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, a propagated signal, and so forth. Examples of wireless communications media may include acoustic, radio-frequency (RF) spectrum, infrared and other wireless media.
The computing device 720 may communicate with other devices 710, 730 over a communications media 715 using communications signals 722 via the communications component 740. In one embodiment, the computing device 720 may include, but is not limited to, a smartphone, tablet, laptop computer, and so forth.
In one embodiment, the computing device 720 may include a display 750. In one embodiment, the display 750 may include a liquid crystal display (LCD). In one embodiment, the display 750 may include an organic light emitting diode (OLED) display. In one embodiment, an OLED display may be used because it provides better color saturation and viewing angles than a liquid crystal display (LCD). In one embodiment, the display 750 may include one or more OLED display screens.
Fig. 8 illustrates an embodiment of an exemplary computing architecture 800 suitable for implementing various embodiments as previously described. As used in this application, the terms "system" and "component" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 800. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Other embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces and bus interfaces.
In one embodiment, the computing architecture 800 may comprise or be implemented as part of an electronic device. Examples of an electronic device may include, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a workstation, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, consumer electronics, programmable consumer electronics, a television, a digital television, a set-top box, a wireless access point, a base station, a subscriber station, a mobile subscriber center, a radio network controller, a router, a hub, a gateway, a bridge, a switch, a machine, or a combination thereof. The embodiments are not limited in this context.
The computing architecture 800 includes various common computing elements, such as one or more processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, and so forth. The embodiments, however, are not limited to implementation by the computing architecture 800.
As shown in Fig. 8, the computing architecture 800 comprises a processing unit 804, a system memory 806 and a system bus 808. The processing unit 804 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 804. The system bus 808 provides an interface for system components including, but not limited to, the system memory 806, to the processing unit 804. The system bus 808 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures.
The computing architecture 800 may comprise or implement various articles of manufacture. An article of manufacture may comprise a computer-readable storage medium to store logic. Embodiments of the invention may also be at least partly implemented as instructions contained in or on a non-transitory computer-readable medium, which may be read and executed by one or more processors to enable performance of the operations described herein. Examples of a computer-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of logic may include executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like.
The system memory 806 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or any other type of media suitable for storing information. In the illustrated embodiment shown in Fig. 8, the system memory 806 can include non-volatile memory 810 and/or volatile memory 812. A basic input/output system (BIOS) can be stored in the non-volatile memory 810.
The computer 802 may include various types of computer-readable storage media in the form of one or more lower speed memory units, including an internal hard disk drive (HDD) 814, a magnetic floppy disk drive (FDD) 816 to read from or write to a removable magnetic disk 818, and an optical disk drive 820 to read from or write to a removable optical disk 822 (e.g., a CD-ROM or DVD). The HDD 814, FDD 816 and optical disk drive 820 can be connected to the system bus 808 by an HDD interface 824, an FDD interface 826 and an optical drive interface 828, respectively. The HDD interface 824 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.
The drives and associated computer-readable media provide volatile and/or nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 810, 812, including an operating system 830, one or more application programs 832, other program modules 834, and program data 836.
The one or more application programs 832, other program modules 834, and program data 836 can include, for example, the vertex stage 122 and the pixel stage 124.
A user can enter commands and information into the computer 802 through one or more wire/wireless input devices, for example, a keyboard 838 and a pointing device such as a mouse 840. Other input devices may include a microphone, an infra-red (IR) remote control, a joystick, a game pad, a stylus pen, a touch screen, or the like. These and other input devices are often connected to the processing unit 804 through an input device interface 842 that is coupled to the system bus 808, but can be connected by other interfaces such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth.
A monitor 844 or other type of display device is also connected to the system bus 808 via an interface, such as a video adaptor 846. In addition to the monitor 844, a computer typically includes other peripheral output devices, such as speakers and printers.
The computer 802 may operate in a networked environment using logical connections via wire and/or wireless communications to one or more remote computers, such as a remote computer 848. The remote computer 848 can be a workstation, a server computer, a router, a personal computer, a portable computer, a microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 802, although, for purposes of brevity, only a memory/storage device 850 is illustrated. The logical connections depicted include wire/wireless connectivity to a local area network (LAN) 852 and/or larger networks, for example, a wide area network (WAN) 854. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, for example, the Internet.
When used in a LAN networking environment, the computer 802 is connected to the LAN 852 through a wire and/or wireless communication network interface or adaptor 856. The adaptor 856 can facilitate wire and/or wireless communications to the LAN 852, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adaptor 856.
When used in a WAN networking environment, the computer 802 can include a modem 858, or is connected to a communications server on the WAN 854, or has other means for establishing communications over the WAN 854, such as by way of the Internet. The modem 858, which can be internal or external and a wire and/or wireless device, connects to the system bus 808 via the input device interface 842. In a networked environment, program modules depicted relative to the computer 802, or portions thereof, can be stored in the remote memory/storage device 850. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.
The computer 802 is operable to communicate with wire and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.11 over-the-air modulation techniques) with, for example, a printer, scanner, desktop and/or portable computer, personal digital assistant (PDA), communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a telephone booth, news stand, lobby), and telephone. This includes at least Wi-Fi (or Wireless Fidelity), WiMax, and Bluetooth (TM) wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, and so forth) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wire networks (which use IEEE 802.3-related media and functions).
Fig. 9 illustrates a block diagram of an exemplary communications architecture 900 suitable for implementing various embodiments as previously described. The communications architecture 900 includes various common communications elements, such as a transmitter, receiver, transceiver, radio, network interface, baseband processor, antenna, amplifiers, filters, and so forth. The embodiments, however, are not limited to implementation by the communications architecture 900.
As shown in Fig. 9, the communications architecture 900 comprises one or more clients 902 and servers 904. The clients 902 may implement the client system 320. The clients 902 and the servers 904 are operatively connected to one or more respective client data stores 908 and server data stores 910 that can be employed to store information local to the respective clients 902 and servers 914, such as cookies and/or associated contextual information.
The clients 902 and the servers 904 may communicate information between each other using a communication framework 906. The communications framework 906 may implement any well-known communications techniques and protocols, such as those described with reference to system 300. The communications framework 906 may be implemented as a packet-switched network (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), a circuit-switched network (e.g., the public switched telephone network), or a combination of a packet-switched network and a circuit-switched network (with suitable gateways and translators).
Some embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled", however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein", respectively. Moreover, the terms "first", "second", "third", and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
Claims (26)
1. A graphics processor, comprising:
a three-dimensional (3D) graphics processing pipeline including a vertex processing stage and a pixel processing stage, the 3D graphics processing pipeline to determine, during operation of the vertex processing stage, a set of parameters to be used during multipass pixel rendering in the pixel processing stage, the set of parameters determined for a geometric object to be enhanced, wherein the 3D graphics processing pipeline is further to perform the multipass pixel rendering for the geometric object in the pixel processing stage using the set of parameters, and to render, based on the multipass pixel rendering, a scene including an enhanced depiction of the geometric object, the enhanced depiction including one or more image effects for the geometric object.
2. The graphics processor of claim 1, wherein the one or more image effects include a transparency effect.
3. The graphics processor of claim 1, wherein the one or more image effects include a shadow effect.
4. The graphics processor of claim 1, wherein the 3D graphics processing pipeline is further to render to multiple render buffers during the multipass pixel rendering.
5. The graphics processor of claim 4, wherein the 3D graphics processing pipeline is to render the scene based on the multiple render buffers.
6. The graphics processor of claim 5, wherein the 3D graphics processing pipeline is to perform the multipass pixel rendering in the pixel processing stage for one or more operations of the vertex processing stage.
7. The graphics processor of claim 1, wherein the set of parameters for the geometric object includes primitive data generated based on vertex data of the geometric object.
8. The graphics processor of claim 7, wherein the primitive data includes one or more of transform, lighting, color, and position data.
9. The graphics processor of any of claims 1-8, wherein the set of parameters for the geometric object includes pixel shader data associated with the pixel processing stage of the 3D graphics processing pipeline, the pixel shader data including programming code for execution via the pixel processing stage of the 3D graphics processing pipeline.
10. The graphics processor of claim 9, wherein the pixel shader data includes multiple sets of pixel shader programming code to be executed during the multipass pixel rendering of the scene.
11. A system for graphics processing, comprising:
a graphics processing unit coupled to memory, the graphics processing unit including a three-dimensional (3D) graphics processing pipeline, the 3D graphics processing pipeline including a vertex processing stage and a pixel processing stage, the 3D graphics processing pipeline to determine a set of parameters for a geometric object using the vertex processing stage, to perform multipass pixel rendering in the pixel processing stage using the set of parameters for the geometric object, and to render, based on the multipass pixel rendering, a scene including an enhanced depiction of the geometric object, the enhanced depiction of the geometric object including one or more graphical effects; and
a display coupled with the graphics processing unit to present the enhanced depiction of the geometric object.
12. The system of claim 11, wherein the one or more graphical effects include one or more of a transparency effect and a shadow effect.
13. The system of claim 12, further comprising a processing unit to provide vertex data of the geometric object to the graphics processing unit.
14. The system of claim 13, wherein the set of parameters for the geometric object includes primitive data generated based on the vertex data.
15. The system of claim 14, wherein the primitive data includes one or more of transform, lighting, color, and position data.
16. The system of any of claims 11-15, wherein the set of parameters for the geometric object includes pixel shader data associated with the pixel processing stage of the 3D graphics processing pipeline, the pixel shader data including programming code for execution via the pixel processing stage of the 3D graphics processing pipeline.
17. The system of claim 16, wherein the pixel shader data includes multiple sets of pixel shader programming code to be executed during the multipass pixel rendering of the scene.
18. The system of claim 11, wherein the 3D graphics processing pipeline is to render to multiple render buffers during the multipass pixel rendering and to render the scene based on the multiple render buffers.
19. The system of claim 18, wherein the 3D graphics processing pipeline is to perform the multipass pixel rendering in the pixel processing stage for one or more operations of the vertex processing stage.
20. A data processing system, comprising:
a display to present an object in a scene;
a system-on-chip integrated circuit including a graphics processor; and
an article of manufacture including a storage medium, the storage medium including instructions that, when executed, cause the system-on-chip integrated circuit to:
receive vertex data of the object;
determine, during operation of a vertex processing stage of a three-dimensional (3D) graphics processing pipeline of the graphics processor, a set of parameters for the object;
perform multipass pixel rendering in a pixel processing stage of the 3D graphics processing pipeline using the set of parameters for the object; and
render, based on the multipass pixel rendering, an enhanced depiction of the object, the enhanced depiction of the object including one or more of a transparency effect or a shadow effect.
21. The data processing system of claim 20, further comprising instructions that, when executed, cause the system-on-chip integrated circuit to determine, during one or more operations of the 3D graphics processing pipeline, the set of parameters used in the multipass pixel rendering in the pixel processing stage, the set of parameters including primitive data of the object.
22. The data processing system of claim 21, wherein the primitive data of the object includes one or more of transform, lighting, color, and position data.
23. The data processing system of claim 22, wherein the set of parameters for the object includes pixel shader data associated with the pixel processing stage of the 3D graphics processing pipeline.
24. The data processing system of claim 23, wherein the pixel shader data includes programming code for execution via the pixel processing stage of the 3D graphics processing pipeline.
25. The data processing system of claim 24, wherein the pixel shader data includes multiple sets of pixel shader programming code to be executed during the multipass pixel rendering.
26. The data processing system of any of claims 20-21, further comprising additional instructions that, when executed, cause the system-on-chip integrated circuit to output the enhanced depiction of the object via the display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511022943.5A CN105679276B (en) | 2011-12-14 | 2011-12-14 | Technology for multipass rendering |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511022943.5A CN105679276B (en) | 2011-12-14 | 2011-12-14 | Technology for multipass rendering |
CN201180075514.8A CN103999044B (en) | 2011-12-14 | 2011-12-14 | Techniques for multi-pass rendering |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201180075514.8A Division CN103999044B (en) | 2011-12-14 | 2011-12-14 | Techniques for multi-pass rendering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105679276A CN105679276A (en) | 2016-06-15 |
CN105679276B true CN105679276B (en) | 2019-04-19 |
Family
ID=56298146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201511022943.5A Active CN105679276B (en) | 2011-12-14 | 2011-12-14 | Technology for multipass rendering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105679276B (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999056249A1 (en) * | 1998-04-27 | 1999-11-04 | Interactive Silicon, Inc. | Graphics system and method for rendering independent 2d and 3d objects |
US6731289B1 (en) * | 2000-05-12 | 2004-05-04 | Microsoft Corporation | Extended range pixel display system and method |
CN101266546A (en) * | 2008-05-12 | 2008-09-17 | 深圳华为通信技术有限公司 | Method for accomplishing operating system three-dimensional display and three-dimensional operating system |
CN101635061B (en) * | 2009-09-08 | 2012-10-24 | 南京师范大学 | Adaptive three-dimensional rendering method based on mechanism of human-eye stereoscopic vision |
CN101907992B (en) * | 2010-07-08 | 2013-04-17 | 福建天晴在线互动科技有限公司 | Equipment and method for providing three-dimensional user interface under Windows environment |
2011-12-14: CN application CN201511022943.5A published as CN105679276B (en), status Active.
Also Published As
Publication number | Publication date |
---|---|
CN105679276A (en) | 2016-06-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||