
WO1998043208A2 - Method and apparatus for graphics processing - Google Patents

Method and apparatus for graphics processing

Info

Publication number
WO1998043208A2
WO1998043208A2 (application PCT/US1998/005694)
Authority
WO
WIPO (PCT)
Prior art keywords
node
value
static
poly
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US1998/005694
Other languages
English (en)
Other versions
WO1998043208A3 (fr)
Inventor
Alan T. Wootton
James E. Lloyd
Konstantin Vasilyev
Srikanth Subramaniam
Stephen Hawley
W. Andreas Whittenstein
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEWFIRE Inc
Original Assignee
NEWFIRE Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEWFIRE Inc filed Critical NEWFIRE Inc
Priority to AU68678/98A priority Critical patent/AU6867898A/en
Publication of WO1998043208A2 publication Critical patent/WO1998043208A2/fr
Publication of WO1998043208A3 publication Critical patent/WO1998043208A3/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/40Hidden part removal

Definitions

  • test pilot can simulate flying the
  • a computer graphics system typically represents a three-dimensional scene as a
  • three-dimensional graphical primitives i.e., objects such as boxes, spheres,
  • polygonally bounded planar surfaces such as triangles, quadrilaterals, etc.
  • a computer graphics system typically represents a two-dimensional geometric
  • entity by boundary and attribute defining information such as a data structure containing
  • a z-value represents the distance between the
  • attribute defining information prior to generating pixel data for display. For example,
  • Some systems perform visible surface determination operations to remove surfaces that
  • the system also performs
  • transformation, projection, and clipping operations to define the geometric entity with respect to the coordinate system of the display device (e.g., the two-dimensional screen
  • the rendering engine stores the generated pixel data in a frame buffer which is
  • VSD visible surface determination
  • BSP binary space partitioning
  • a reference coordinate space e.g., a unique coordinate space for
  • one prior art VSD process creates visibility tables based on
  • this process is typically performed independent of the camera position
  • geometric entities are rendered into the image buffer in an arbitrary order.
  • a system for simulating model worlds may need to detect collisions if there is
  • One conventional method detects collisions by comparing the space occupied
  • modeled world are represented as numbers, e.g., the height of a bouncing ball, many mathematical techniques are employed to modify those attributes. Animation is,
  • scripting capability using a widely adopted scripting language, e.g., Java, rather than
  • This approach may also allow for a more compact
  • the invention provides a method and apparatus for graphics processing.
  • BSP binary space partitioning
  • invention generates bounding boxes for the nodes of a BSP tree.
  • Yet another embodiment of the invention performs a poly differencing process.
  • This process receives information relating to overlapping polys, and generates data
  • Another embodiment employs an interpolation mechanism that
  • Figure 1 depicts a high-level block diagram for a simulation system.
  • Figure 2 presents a computer system upon which one embodiment of the
  • Figure 3 presents a diagram for the system architecture of one embodiment of the
  • Figure 4 presents the operational flow diagram for the embodiment of the invention set forth in Figure 3.
  • Figure 5 presents one example of a visual scene.
  • Figure 6 presents a scene node graph for the visual scene represented in Figure
  • Figure 7 presents a scene cache graph for the scene node graph of Figure 6.
  • Figure 8 presents one example of a BSP tree structure.
  • Figure 9 presents the visual scene of Figure 5 as partitioned by a number of
  • Figure 10 presents a diagram of the BSP data structure used in one embodiment
  • Figure 11a presents a specific example of a BSP leaf data structure.
  • Figures 11b and 11c present two examples of indexed face sets.
  • Figure 12 presents a class diagram for a number of classes used in one
  • Figure 13 presents a flow chart for creating bounding boxes for leaf nodes.
  • Figure 14 presents one example of bounding boxes being created by the process
  • Figure 15 presents one embodiment of the dynamic partition visibility detection process of the invention.
  • Figure 16 presents the update-visibility process for one embodiment of the
  • Figure 17 presents a state diagram for determining a new state in the update-
  • Figure 18a presents one embodiment of the step of Figure 16 for finding a
  • Figure 18b presents the process for determining whether a bounding box is in a mini-frustum, for one embodiment of the invention.
  • Figures 19-22 present the steps performed during the queuing process of Figure
  • Figure 23 presents a circularly linked list of vertices.
  • Figure 24 presents one example of a polygon represented by the linked list of
  • Figure 25 presents the steps performed by the poly differencing engine of one
  • Figure 26 presents a poly record structure used in one embodiment of the
  • Figure 27 presents a sorted list of edges created by the poly differencing engine.
  • Figure 28 presents a linked list of active polys.
  • Figure 29 presents a linked list of edge records.
  • Figure 30 presents a wedge record data structure used in one embodiment of the
  • Figure 31 presents a displayed representation of the wedge record data structure
  • Figure 32 presents the process for scanning the sorted lists of edges to initiate the
  • Figure 33 presents the process for executing an insert command.
  • Figure 34 presents the process for closing a wedge record.
  • Figure 35 presents the process for executing a remove command.
  • Figure 36 presents the process for executing an exchange command.
  • Figure 37 presents the process for scanning the active-edge table.
  • Figure 38 presents the process for calculating the intersection of two edges.
  • Figure 39 presents an example of wedges created by the polygon clipping
  • Figure 40 presents an active-edge linked list representing the example of Figure
  • Figure 41 presents a process for determining the portion of a bounding box that
  • Figure 42 presents the union rectangle process of Figure 41.
  • Figure 43 presents the process for creating a wedge for a non-static geometric
  • Figures 44 and 45 present one example of the results of the process of
  • Figure 46 presents the process for populating a command buffer during an
  • Figure 47 presents the process for populating the command buffer with static
  • Figure 48 presents the process for populating the command buffer with the z-fill
  • Figure 49 presents an example of a non-static scanline array used in the process
  • Figure 50 presents a displayed image with three static wedges and one non-static
  • Figure 51 presents one example of a command buffer that has been populated by
  • Figure 52 presents the process performed by the screen-writer of one embodiment of the invention.
  • Figure 53 presents the process for performing a solid-fill operation for one
  • Figure 54 presents an example of the z-buffer operation performed by one
  • Figure 55 depicts the high-level software architecture of the collision detection
  • Figure 56 depicts an image of a sample model world.
  • Figure 57 depicts representative data components utilized during operation of the
  • Figure 58 depicts a flowchart of the object-to-object collision detection process.
  • Figure 59 depicts a sample object with its bounding box, projection volume and
  • Figure 60 depicts details of the data structures for the collision volume list.
  • Figure 61 depicts the sample scene superimposed with collision detection
  • Figure 62 depicts a flowchart for the camera-to-object collision detection
  • Figure 63 depicts an example of a camera collision volume.
  • Figure 64 depicts Interpolator Node structures.
  • Figure 65 depicts a flowchart of the interval table building process.
  • Figure 66 depicts a flowchart of interpolator node operation.
  • Figure 67 depicts one embodiment's architectural components that interact to support certain VRML script node processing.
  • Figure 68 depicts a control flow diagram for one embodiment of an inverted Java
  • Figure 69 depicts a flowchart detailing the establishment of an inverted Java
  • the invention provides a method and apparatus for graphics and simulation
  • Figure 1 depicts a high-level block diagram for a simulation system.
  • the system
  • Model Description Source 110 comprises a Model Description Source 110, a Control Data Source 115, a Simulation
  • connection 120 contain information defining the model world for which simulation
  • An example model world for visualization may have maximum dimensions
  • An astronomer's example model world may have dimensions measured in light years and contain 3D representations of all of the known stars
  • Figure 1 selects a model world and then generates images from many different
  • Control Data Source 115 delivers a signal via connection 145 to the Model
  • Signals traversing connection 145 may contain information specifying a
  • sequence of viewpoints may jump from location to location within the model world.
  • the biochemist may desire a "top, bottom, right side, left side, front, back"
  • viewpoint sequence On the other hand, the sequence of viewpoints may each represent
  • the astronomer may want to simulate a smooth flight through a
  • the Control Data Source 115 may generate the information signal to the Model
  • Processor 120 from stored data or from data gathered real-time. For example, it may be
  • the astronomer also may find it advantageous
  • the Model Processor 120 generates an output signal to the Simulation Frame
  • connection 150 Signals traversing connection 150 may contain
  • the image frame information represents the projection of the 3d model world signaled by the Model Description
  • Source 110 onto a viewing plane as would be seen from within the model world at the
  • system are not limited to operation with graphical information but may be a
  • Figure 2 presents a computer system upon which one embodiment of the present
  • Computer system 200 includes bus 205, processor 210, read
  • cursor controller 250
  • hard copy device 255
  • speakers 260
  • Bus 205 collectively represents all of the communication lines that connect the
  • Processor 210 couples to bus 205 and
  • Computer system 200 further includes a read and write memory
  • RAM random access memory
  • Memory 215 also may be used for storing temporary variables or other intermediate information during the operation of
  • Computer system 200 also includes a read only memory (ROM) 220 coupled to bus 205 for storing static information and instructions for processor 210.
  • ROM read only memory
  • mass data storage device 225 such as a magnetic or optical disk and its corresponding
  • computer program information e.g., object
  • computer 205 utilizes the
  • firmware instructions residing in ROM 220 direct the invention.
  • Computer system 200 further includes a display device 230, such as a cathode
  • CRT cathode ray tube
  • LCD liquid crystal display
  • the pixel data representing an image is stored in
  • each scanline to represent geometric entities (e.g., polys or wedges on each scanline), and (2) a screen writer to generate the pixel data by decoding the generated
  • Bus 205 also couples to alphanumeric input device 245 (e.g., a keyboard) which
  • Cursor controller 250 is an additional user input device which may be coupled
  • Input device 245 may take many different forms, such as a mouse, a
  • a stylus tablet e.g., a touchpad
  • a touch-sensitive input device e.g., a touchpad
  • Another device which may be coupled to bus 205 is a hard copy device 255 which
  • bus 205 also couples computer 200 to a network
  • the computer can be a part
  • LAN local area network
  • wide area network a network of computers
  • WAN wide area network
  • Intranet a network of networks
  • Internet a network of networks
  • Figure 3 presents a diagram for the system architecture of one embodiment of the
  • This embodiment is designed to operate as a VRML player, which allows a
  • System 300 of Figure 3 includes a VRML engine 305, a cache manager 310, a
  • the VRML engine parses the textual representation of the scene contained in a
  • scene node data structures i.e., a first set of objects
  • Part of the cache data structures is a binary
  • BSP space partitioning
  • VSM visible scene manager
  • the VSM reduces the number of geometric entities (e.g., polygons such as
  • VSD VSD processes. Some of these processes are conventional VSD processes such as
  • VSM frustum and backface culling processes.
  • the VSM also uses the invention's
  • DPVD dynamically determines the extent to which the geometric entities for display occlude one another.
  • the VSM After performing visible surface determination, the VSM supplies information
  • these two engines generate wedge data structures
  • Wedges are quadrilateral polys
  • wedges can be represented by a pair of
  • the poly differencing engine receives
  • non-static poly creator to create data structures representing non-static wedges.
  • the poly differencing engine and the non-static poly creator feed the same software rasterizer.
  • the differencing engine and the non-static poly creator supply their own
  • This software rasterizer then performs the invention's abstract rendering process (a sketch of the command-buffer idea appears after this list).
  • this rasterizer For each scanline of the display device, this rasterizer generates commands
  • the collision detection processor 392 may anticipate a collision
  • collision event may result in the execution of a program script that modifies the position
  • the Script Manager 330 oversees the
  • the Script Manager 330 may use a "handler"
  • the Script Manager 330 will use the Java handler 396 to effectuate the execution of the program script.
  • the Interpolator Processor 390 may also operate to affect the generated view of
  • the VRML Engine 305 may have created interpolator nodes in the scene
  • the Interpolator Processor 390 alters the instant
  • Figure 4 depicts a life cycle diagram of the VRML player.
  • the main operational loop executes for each generated image frame and comprises a
  • collision detection step 410 a vehicle position determination step 415, a sensor
  • the rendering step displays the generated image frame and the main operational loop begins again.
  • the initialization step 400 includes the parsing of the VRML input file that
  • the VRML language is a declaration and
  • VRML defines a model
  • VRML Specification hereafter referred to as the VRML Specification. Additional VRML information can be
  • the steps may not be critical.
  • the main operational loop begins with
  • collision detection identifies
  • determination step 415 determines any change in viewpoint from that used to generate
  • processing step 420 processes the drawing-independent sensor nodes in the scene graph
  • the routing step 425 manages the processing of the internodal connection routes specified in the VRML input file.
  • the main operational loop processes repeatedly until the VRML file or the
  • VRML player is closed. This generally results from a user selection. Step 490 then
  • the initialization process is the first process performed after receiving a VRML file.
  • VRML engine and the cache manager perform this process.
  • Figure 5 presents one example of a visual scene. This visual scene has a textual
  • the scene node graph includes numerous nodes, each of which corresponds to the
  • Figure 5 includes a scene node and five sets of transform, shape, appearance, and geometry nodes for the five primitives in the scene.
  • Each node includes a cache pointer which initially is set to null by the VRML
  • the scene node also includes a roots field which is a list of pointers pointing to
  • Each transform node includes
  • One of these fields is an operation field for specifying simple
  • Another field is a children field pointing to a set
  • each children field contains a single shape
  • the shape node in turn, has a geometry field specifying the geometry of the
  • an appearance field specifying the appearance (e.g., texture) of the
  • non-static primitives have a similar data structure to static
  • the position field receives routed data that change the values of the fields. For example, the position field
  • a Java script generates the varying translation based on varying time
  • the cache manager generates the scene cache data structures, which are a
  • the cache data structures are created by applying a
  • a cache visitor process performs a visitor
  • Figure 7 presents the cache data structures corresponding to the node data
  • These cache data structures include transform caches, indexed
  • the transform and index face set (IFS) caches are
  • transform and IFS caches respectively correspond to the transform and IFS caches
  • transform and shape nodes Each of these two caches connects to its corresponding node via its node's cache pointer and its own original back pointer.
  • the transform cache also includes: (1) a children field for pointing to its IFS
  • the IFS cache is a cache representation for the
  • an appearance field that stores information relating to the appearance (e.g., information
  • the IFS cache stores indices to tessellated face sets which collectively represent
  • the geometry identified in the geometry node i.e., stores indices to the coordinates of
  • tessellated indexed faces either (1) are identical to the received polys that form the
  • received primitive or (3) are tessellated versions of a received primitive (e.g., sphere)
  • One embodiment of the invention relies on the non-static IFS caches for the duration of its operation. For instance, the non-static IFS caches are relied on for
  • the cache manager creates the BSP tree data structure
  • One embodiment of the invention uses a BSP tree (1) to perform VSD operations
  • a BSP tree is a data structure that
  • space e.g., a unique coordinate space for the BSP tree or the world coordinate space.
  • the tree represents the entire space, while each of its nodes represents a convex
  • Each internal node in this tree (1) stores a splitting plane (also called a
  • hyperplane which divides the node's space (i.e., the space that the node represents) into
  • Figure 8 presents one example of a BSP tree structure for the visual scene of
  • FIG. 5 as partitioned by partitioning planes, some of which are shown in Figure 9.
  • Figure 9 does not set forth the constant y partition planes that are co-planar to the top and bottom surfaces of the three boxes in the scene (i.e., does not present the top facing
  • each node in the tree structure represents a particular partition of the space.
  • each splitting plane divides a particular region of space
  • a node hanging off a left branch represents the negative space resulting from the division of a parent space into two subspaces, while a node
  • each face of the static polys is ultimately used as a
  • each face of the static polys resides as a leaf node for the
  • Each leaf node includes a list of non-static indexed
  • a static poly may be further
  • a three-step process is used for creating a BSP tree.
  • a partition plane is
  • Some prior art authoring tools build BSP trees, and therefore in their textual VRML file identify their splitting planes. After selecting the splitting plane, the set of
  • FIGS 10 and 11 present the BSP data structures for one embodiment of the
  • the BSP data structure includes BSP general-node caches and BSP leaf-node caches.
  • Each BSP node includes left and right fields which
  • a BSP leaf node includes
  • a leaf node's IFS caches are internal data structures representing the indexed
  • Each IFS cache stores indices into a buffer
  • each IFS cache includes a coordinate array pointer
  • Each IFS cache also includes a face pointer for pointing to a face buffer storing a
  • indices is a sequence of positive integers for indicating the manner for connecting the
  • array buffer can store the vertex coordinates for more than one face in the leaf node
  • the face buffer can store indices into the command buffer to generate more than one
  • a static BSP IFS cache is created when the original static IFS caches of the cache
  • each non-static BSP IFS cache corresponds to a non-static IFS
  • Each non-static BSP IFS cache represents a non-static primitive
  • Such a cache points back to its leaf node.
  • a non-static IFS cache can connect to multiple leaf nodes, because a non-
  • static primitive can be in more than one leaf node's partition of space as it can straddle a
  • Non-static IFS caches also include flags that inform
  • Figure 12 presents the class structure used in one embodiment of the invention.
  • the class DrawableCache is the parent class of BSP node class, BSP leaf class and IFS class (which itself is the parent class of the static IFS class, non-
  • the visibility field stores a visibility flag to indicate whether the node is visible
  • the visibility flag can assume one of three states, which are: Visible,
  • the DidDraw flag indicates whether all or part of the
  • node's partition of space remains for display after the poly differencing engine performs
  • Each bounding box is axis aligned with the BSP tree's
  • reference coordinate system e.g., the world coordinate system if the BSP tree is defined
  • BSP tree are pre-computed by the cache graph initialization process. During this
  • the bounding boxes of the leaf nodes are initially computed by determining the faces of the partitioned convex volume of space of each leaf node. Subsequently, the
  • bounding box for each parent node is computed by combining the bounding boxes of its
  • Figure 13 presents one process for computing the bounding boxes for the leaf
  • the faces of the leaf node volume can be generated by performing the
  • a huge bounding box is a box which has its center at the origin and has a large
  • the polygon may be sliced into two pieces, one of which
  • the bounding box of the leaf node can then be generated by either (1) performing a min-max operation (analogous to the process set
  • each node can recursively be treated as a leaf node in the tree.
  • the face polygons for each of the node's child nodes can be computed by
  • resulting polygon is added to the set of faces currently attached to the node.
  • each polygon in the set of faces for the node is sliced by the node's
  • splitting plane in order to gather the results into two sets, a left set and a right set.
  • left set represents the faces of the leaf node volume of the left child, while the right set
  • Process 1300 of Figure 13 is based on the above-described realizations.
  • a huge bounding box is generated for the root node.
  • node volume faces VFACES is initialized to contain the six faces of the huge bounding
  • This set is linked to the root node.
  • the process then places the root node on list OpenList.
  • step 1320 transitions to step 1320 to terminate. Otherwise, the process transitions to step 1325 to
  • step 1330 it determines whether the current
  • node is a leaf node.
  • step 1340 it generates a huge polygon for the current node by computing
  • step 1340 the process transitions to step 1345 to request that the
  • the huge polygon of a node other than the root node e.g. , the huge polygon
  • step 1345 the process transitions to step 1350 to add the resulting
  • Figure 14 pictorially presents the operations for the
  • the slicing operation at step 1355 results in a left set of volume faces and a right
  • the left set of faces are linked to the current node's
  • step 1365 where the left and right child nodes are pushed onto the front of OpenList.
  • step 1315 the process transitions back to step 1315 to determine whether the
  • step 1335 the current node's volume faces VFACES are set
  • the process can compute the bounding box by either:
  • volume faces or (2) performing a combining operation to combine each volume face
  • min, max, center, and size are all vectors (a bounding-box sketch using these operations appears after this list).
  • MIN(V1, V2) = (min(v1.x, v2.x), min(v1.y, v2.y), min(v1.z, v2.z))
  • VSD process One of the processes performed after initialization is the VSD process, as shown
  • the VSM reduces the number of static polys for rendering
  • VSD processes such as frustum and backface culling
  • DPVD process culls static polys which are occluded by other static polys.
  • this process is performed dynamically for each frame in
  • the VSM simplifies and accelerates the rendering
  • the invention's DPVD process reduces the number of poly-to-poly comparisons
  • bounding box remains after the clipping, and is processed into a wedge for rendering
  • the poly differencing engine (1) delays the rendering process, (2) sets the DidDraw flag
  • the VSD engine then dynamically re-computes the visibility status of the nodes in the BSP tree.
  • FIGS 15-22 present the steps performed by the VSD engine of one
  • Figure 15 presents the three major processes performed
  • FIG. 16-18 present the steps performed for each node during the update-
  • the first step of this process is step 1605, during which a determination is made as to the
  • box of a node is bigger than a node's poly, and a portion of the bounding box is visible
  • step 1610 When this step is performed immediately after initialization of the BSP tree cache, the process transitions to step 1610, because at initialization the visibility for all
  • nodes are set to Hidden. However, after initialization, the visibility state of a node
  • step 1615 the process determines if the node has any children. If not, the process
  • step 1610 transitions to step 1610. Otherwise, the process transitions to step 1620 to compute the value of this node's DidDraw flag.
  • Process 1600 determines the value of the DidDraw flag by updating the visibility
  • step 1620 computes the DidDraw value for a node by performing an
  • the poly differencing engine sets its children
  • the process determines the new visibility state for the node.
  • Figure 17 presents the state diagram for this determination (one reading of it is sketched after this list). As shown in this figure, if
  • a node is Hidden, and its DidDraw flag is False (i.e., equals 0), the node's visibility
  • DidDraw flag is set to False. At this point, its visibility is set to Maybe.
  • the visibility state of a node remains Maybe until it is set to Hidden after the node is culled or after a
  • step 1610 the process transitions to step 1625, where a determination is
  • step 1645 the process determines the new state of the node at step 1630.
  • step 1645 If the node's new state is not Visible, the process transitions to step 1645. However, if
  • step 1635 the process transitions to step 1635 to create a mini-frustum
  • process 1600 uses conventional frustum
  • Figure 42 in order to create a mini-frustum. Specifically, the projection plane, the four
  • step 1635 the process transitions to step 1640 in order to find the front
  • Step 1640 will be described below by reference to Figure 18a. From step 1640,
  • process 1600 transitions to step 1645 in order to return the DidDraw value for the node
  • Figure 18a presents the process for determining the front most visible child leaf
  • step 1802 determines which of the two
  • step 1804 the process transitions to step 1804 to determine if the left child is in the
  • Figure 18b presents the process for
  • step 1808 determines if the left child is a
  • the process sets the child's visibility state to Visible (at
  • step 1810) quits its recursive process.
  • the left child is not a leaf
  • process finds the visible children of the left child (e.g., recursively calls itself to find the
  • step 1814 the process determines if the right child is in the frustum. If not,
  • the process sets the visibility state of the right child to Hidden (at step 1816), and returns
  • the process decides at step 1818 whether the right child is a leaf node. If so, the process
  • step 1822 The process transitions to step 1824 in order to return to
  • step 1834 while the process transitions to return-step 1824 after setting the child states
  • steps 1836-1844 are identical to steps 1804-1812 described above, except that
  • step 1838 or finding the visible children of the left child again at step 1844, while it
  • process 1640 of Figure 18a is a recursive algorithm which
  • This process receives the bounding box center and size vectors of the
  • process 1850 sets the value of a variable N (representing the number of
  • step 1858 the process determines if the
  • cornerdistance variable is less than zero. If not, the process determines that the child is not in the viewing frustum and returns a value of No (a common form of this box-versus-frustum test is sketched after this list).
  • step 1864 determines if all six faces of the frustum have been
  • step 1866 the process transitions to step 1866 to return the value Yes.
  • step 1854 if all the six faces have not been examined, it transitions back to step 1854.
  • FIGS 19-22 present the steps performed during queuing step 1510 of
  • the traversal of the BSP tree is depth first and in view (i.e., camera) order (a front-to-back traversal sketch appears after this list).
  • the process determines the side of the splitting plane that the viewer is on, and initially traverses down the BSP tree on that
  • the VSD engine sets the DidDraw flag for all
  • step 1905 this process determines whether this node's bounding box is in the viewing
  • step 1910 transitions to step 1910, where it returns a Don't Queue command signifying that neither
  • step 1915 If the node's bounding box is in the frustum, the process transitions to step 1915,
  • step 2005 presents the process for queuing the bounding box of a node.
  • process 2000 sets the DrawnRectangle to its initialized values of negative infinity for the
  • step 2010 the process tessellates the bounding box for the node into individual
  • process 1900 determines at step 1915 that the node's
  • Figure 21 presents the process for queuing the child node when it is a BSP leaf
  • this process determines (at step 2105) if the node's bounding box is in
  • step 2115 where a determination is made as to the visibility status of the node. If the
  • the process queues the node's bounding box for forwarding to the poly
  • Figure 21 is performed by process 2000 of Figure 20, in one embodiment of the
  • step 2125 If the node's visibility status is not Hidden, the process transitions to step 2125.
  • the process requests the queuing of all the static polys for the leaf node.
  • process 2100 performs step 2125 by repeatedly calling process 2200 at Figure 22 for each static element of the leaf node. From step 2125
  • process 2100 transitions to step 2130 in order to command the queuing of the leaf node's
  • Figure 22 presents the process for queuing individual static polys to the poly
  • process 2200 performs a conventional back-face culling operation on the static poly.
  • process 2200 obtains the dot product of the static poly's normal and the
  • process 2200 does not queue the poly.
  • step 2220 where it performs
  • the process then performs (at step 2240) the screen clipping operation by
  • step 2240 the process transitions to step 2245 in order to determine whether any
  • poly differencing process 1515 This process is performed by the poly differencing
  • This engine performs a polygon clipping operation which clips received overlapping
  • poly differencing engine generates data structures representing non-overlapping polys
  • the clipped data structures can be run-length
  • the poly differencing engine can determine whether a data structure
  • the poly differencing engine is conditioned to tolerate clipped data
  • this engine does not pass the data structures representing the clipped static polys
  • the poly differencing engine prevents the camera from being
  • the static non-overlapping poly data in one embodiment of the invention, the static non-overlapping poly data
  • wedges are a particular type of poly. They are quadrilateral polys having horizontal tops and bottoms. They can be
  • a slope is not defined in the traditional sense of rise over run, but rather run
  • rise-over-run information is used to derive x-values and/or y-values.
  • the poly differencing engine 's polygon clipping operation and wedge creation
  • This engine also receives a pointer to a poly
  • type data structure which includes a number of fields and a pointer to a number of
  • the poly type pointer Based on the received linked list of vertices and the poly type pointer, the poly
  • differencing engine creates a poly record for each poly at step 2505.
  • static polys As the static polys
  • the poly differencing engine allocates poly records in front-to-back order
  • the poly record includes next and previous poly pointers NextPoly and
  • active-edge table to keep a list of all active polys, so that, when a transition from a closer
  • the process can walk down the list created by these pointers.
  • the NextPoly pointer is also employed to connect the polys in a linked list prior to their
  • Figure 26 also includes the poly type pointer originally received from the VSM.
  • the polygon record further includes other poly data, such as color or shading
  • This record also includes the z-value of the poly at
  • poly record structure further includes a Toggle field to indicate whether an edge in the active-edge table is a left edge or right edge.
  • the UseCount is decremented by one.
  • the poly differencing engine determines that the poly is no
  • the poly differencing engine creates (at step 2510)
  • edge records from the received linked lists of vertices for the screen clipped polys.
  • an edge record is
  • FIG 29 presents one example of an edge record. This edge record is for edge
  • this record includes a poly record pointer which points to the poly record created by the poly differencing engine at step 2505.
  • This pointer also serves to identify the priority of the poly (to which the edge e4 belongs
  • the pointer can act as an identifier because the poly differencing engine allocates the
  • the edge record also includes the slope of the edge, which is computed based on
  • the edge record also includes a pointer nextShowing, which points to the next showing edge in the linked list of active edges as further
  • edge record includes a Boolean variable
  • the poly differencing engine creates (at step 2515) a
  • Figure 27 presents the sorted list of edges, after creating the edge records.
  • each pointer pointing to a buffer storing the edge data corresponding to a scanline.
  • the pointer for scanline Y0 points to a buffer which indicates that
  • edges e5 and e6 corresponding to the background screen polygon, start at scanline Y0.
  • edges e2 and e4 are then indicated in the buffer pointed to by the pointer of scanline Y2. This sorted list also indicates the termination of edges e2, e4, e5 and e6 at
  • crossing can be represented in the sorted list of edges by inserting a command indicative
  • crossing commands are inserted into the sorted list of
  • the poly differencing engine After sorting the list of edges, the poly differencing engine performs the following
  • invention's active-edge process at step 2520 of Figure 25.
  • step 2520 of Figure 25 In one embodiment of the
  • the active-edge process includes the creation of an active-poly linked list, an active-edge table, and a wedge linked list (these structures are sketched after this list).
  • the poly differencing engine During one embodiment of the active-edge process, the poly differencing engine
  • edge stop command it might remove the edge's poly record from the list of active
  • edge process is determined on a scanline-by-scanline basis.
  • Figure 28 presents one example of a linked list of active polys.
  • Figure 29
  • Figure 24 the active-edge table is sorted by the x-values.
  • Figure 30 presents one example of a wedge record, and Figure 31 presents a
  • wedges have horizontal tops and bottoms, and are
  • Figures 32-38 present the active-edge process performed by one embodiment of
  • Figure 32 presents the process for traversing the sorted list of edges to
  • process 3200 sets a Scanline_Count variable to zero.
  • step 3210 the process inquires whether the scanline buffer at Scanline Count of the
  • step 3215 the process transitions back to step 3210.
  • step 3220 the process transitions to step 3220 to increment the Scanline_Count
  • step 3225 the process transitions to step 3225 where a decision is made
  • step 3230 the process initiates the active-edge table scan process of Figure 37.
  • step 3230 the process transitions to step 3235. If there were no commands in the
  • step 3235 At this step,
  • the process determines if the Scanline Count variable has reached a maximum value. If
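
The bounding-box arithmetic described above (component-wise MIN/MAX of corner vectors, with each parent node's box formed by combining its children's boxes) can be illustrated with a minimal C++ sketch. This is not the patent's implementation; the names Vec3, Box, BspNode and computeBounds are assumptions introduced for the example, and the half-size form is one convenient choice for the center/size vectors mentioned above.

```cpp
#include <algorithm>
#include <limits>

// Sketch of the component-wise MIN/MAX bounding-box arithmetic.
// Vec3, Box and BspNode are illustrative names, not the patent's.
struct Vec3 { float x, y, z; };

static Vec3 vmin(const Vec3& a, const Vec3& b) {
    return { std::min(a.x, b.x), std::min(a.y, b.y), std::min(a.z, b.z) };
}
static Vec3 vmax(const Vec3& a, const Vec3& b) {
    return { std::max(a.x, b.x), std::max(a.y, b.y), std::max(a.z, b.z) };
}

struct Box {
    // Start "inside out" so that the first point or box added defines the box.
    Vec3 min {  std::numeric_limits<float>::max(),
                std::numeric_limits<float>::max(),
                std::numeric_limits<float>::max() };
    Vec3 max { -std::numeric_limits<float>::max(),
               -std::numeric_limits<float>::max(),
               -std::numeric_limits<float>::max() };

    void addPoint(const Vec3& p) { min = vmin(min, p); max = vmax(max, p); }
    void addBox(const Box& b)    { min = vmin(min, b.min); max = vmax(max, b.max); }

    // Center/half-size form, as used by the frustum test sketched below.
    Vec3 center() const { return { 0.5f * (min.x + max.x),
                                   0.5f * (min.y + max.y),
                                   0.5f * (min.z + max.z) }; }
    Vec3 halfSize() const { return { 0.5f * (max.x - min.x),
                                     0.5f * (max.y - min.y),
                                     0.5f * (max.z - min.z) }; }
};

struct BspNode {
    BspNode* left  = nullptr;   // negative half-space child
    BspNode* right = nullptr;   // positive half-space child
    Box bounds;                 // leaf boxes assumed precomputed from the volume faces
};

// A parent node's box is the union of its children's boxes, computed bottom-up.
Box computeBounds(BspNode* node) {
    if (!node) return Box{};
    if (!node->left && !node->right) return node->bounds;   // leaf: already set
    node->bounds = Box{};
    if (node->left)  node->bounds.addBox(computeBounds(node->left));
    if (node->right) node->bounds.addBox(computeBounds(node->right));
    return node->bounds;
}
```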
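
The per-plane "corner distance" check described for process 1850 is a standard box-versus-frustum rejection test. The sketch below shows one common formulation, reusing the Vec3 and Box types from the bounding-box sketch above; the Plane struct and its sign convention (inward-facing normals) are assumptions, not taken from the patent.

```cpp
#include <cmath>

// One common formulation of the per-plane "corner distance" test.
struct Plane { Vec3 n; float d; };   // a point p is inside when n·p + d >= 0

bool boxTouchesFrustum(const Box& box, const Plane frustum[6]) {
    const Vec3 c = box.center();
    const Vec3 h = box.halfSize();
    for (int i = 0; i < 6; ++i) {
        const Plane& p = frustum[i];
        // Signed distance from the box center to the plane ...
        float centerDist = p.n.x * c.x + p.n.y * c.y + p.n.z * c.z + p.d;
        // ... plus the box's projected radius gives the nearest-corner distance.
        float radius = std::fabs(p.n.x) * h.x + std::fabs(p.n.y) * h.y
                     + std::fabs(p.n.z) * h.z;
        if (centerDist + radius < 0.0f)
            return false;            // wholly outside this plane: reject the box
    }
    return true;                     // not rejected by any of the six planes
}
```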
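
One possible reading of the Hidden/Maybe/Visible state diagram of Figure 17, driven by a node's DidDraw flag, is sketched below. The exact transitions are given only in the patent's figure, so treat this as an illustrative assumption rather than the claimed state machine.

```cpp
// Illustrative reading of the visibility state diagram (Figure 17).
enum class Visibility { Hidden, Maybe, Visible };

Visibility updateVisibility(Visibility current, bool didDraw) {
    switch (current) {
        case Visibility::Hidden:
            // A hidden node whose contents were drawn last frame becomes Visible;
            // otherwise it stays Hidden.
            return didDraw ? Visibility::Visible : Visibility::Hidden;
        case Visibility::Visible:
            // A visible node that was not drawn drops to Maybe.
            return didDraw ? Visibility::Visible : Visibility::Maybe;
        case Visibility::Maybe:
            // Maybe persists until the node is culled (set to Hidden elsewhere).
            return Visibility::Maybe;
    }
    return current;
}
```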
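
The depth-first, view-ordered (front-to-back) BSP traversal used by the queuing process can be sketched as follows, reusing the Vec3 and Plane types from the sketches above. The node fields and the visitLeaf callback are assumptions; the patent's queuing process additionally tests each node's bounding box against the frustum and queues bounding boxes or static polys depending on the node's visibility state.

```cpp
#include <functional>

// Front-to-back (view-ordered) depth-first BSP traversal sketch.
struct BspTreeNode {
    Plane split;                     // splitting plane stored at interior nodes
    BspTreeNode* left  = nullptr;    // negative half-space
    BspTreeNode* right = nullptr;    // positive half-space
    bool isLeaf() const { return !left && !right; }
};

void traverseFrontToBack(BspTreeNode* node, const Vec3& eye,
                         const std::function<void(BspTreeNode*)>& visitLeaf) {
    if (!node) return;
    if (node->isLeaf()) { visitLeaf(node); return; }
    // Which side of the splitting plane is the camera on?
    float side = node->split.n.x * eye.x + node->split.n.y * eye.y
               + node->split.n.z * eye.z + node->split.d;
    BspTreeNode* nearChild = (side >= 0.0f) ? node->right : node->left;
    BspTreeNode* farChild  = (side >= 0.0f) ? node->left  : node->right;
    traverseFrontToBack(nearChild, eye, visitLeaf);   // camera's side first
    traverseFrontToBack(farChild,  eye, visitLeaf);   // then the far side
}
```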
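
The sorted edge list, poly records, active-edge table and wedge records described for the poly differencing engine can be summarized with the data-structure sketch below. Field names are illustrative assumptions; the patent's records also carry colour/shading data, UseCount bookkeeping, toggle flags and exchange/crossing commands, and its active-edge table is maintained incrementally rather than re-sorted per scanline as in this simplified skeleton.

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Illustrative record layouts for the scanline-oriented poly differencing pass.
struct PolyRecord {
    uint32_t priority;      // allocation order, front-to-back
    float    z;             // z-value used when two polys trade visibility
    int      useCount = 0;  // active edges still referencing this poly
};

struct EdgeRecord {
    PolyRecord* poly;       // owning poly (its allocation order encodes priority)
    float x;                // current x intersection with the scanline
    float dxPerScanline;    // "slope" stored as run over rise
    int   yStop;            // scanline at which this edge terminates
    bool  leftEdge;         // left or right edge of its poly
};

// Wedges are quadrilaterals with horizontal top and bottom: two scanlines plus
// the two bounding edges (each an x value and a per-scanline step).
struct WedgeRecord {
    int   yTop, yBottom;
    float xLeftTop,  dxLeft;
    float xRightTop, dxRight;
    PolyRecord* poly;
};

// Skeleton of the per-scanline pass: pull edges that start on this scanline from
// the sorted edge list, keep the active-edge table ordered by x, and advance
// every surviving edge by its slope before the next scanline.
void activeEdgePass(std::vector<std::vector<EdgeRecord>>& edgesStartingAt,
                    int screenHeight)   // edgesStartingAt must have screenHeight buckets
{
    std::vector<EdgeRecord> active;                 // active-edge table
    for (int y = 0; y < screenHeight; ++y) {
        for (const EdgeRecord& e : edgesStartingAt[y])
            active.push_back(e);                    // "insert" commands
        std::sort(active.begin(), active.end(),
                  [](const EdgeRecord& a, const EdgeRecord& b) { return a.x < b.x; });

        // ... scan 'active' left to right, opening and closing WedgeRecords
        // where the front-most poly changes (omitted in this sketch) ...

        active.erase(std::remove_if(active.begin(), active.end(),
                        [y](const EdgeRecord& e) { return e.yStop <= y; }),
                     active.end());                 // "remove" commands
        for (EdgeRecord& e : active) e.x += e.dxPerScanline;
    }
}
```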
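
The abstract rendering idea, generating per-scanline commands into a command buffer and letting a screen writer later decode them into pixel data, can be illustrated as follows. The command set shown (a solid fill and a z-filled span) is an assumption based on the operations named in Figures 48 and 53-54, not the patent's actual command encoding.

```cpp
#include <cstdint>
#include <vector>

// Illustrative per-scanline command buffer: the rasterizer emits compact span
// commands instead of pixels; the screen writer decodes them afterwards.
enum class Op : uint8_t { SolidFill, ZFill };

struct SpanCommand {
    Op       op;
    uint16_t xStart, xEnd;   // pixel span on this scanline
    uint32_t color;
    float    z0, dz;         // starting z and per-pixel step, used by ZFill
};

using ScanlineBuffer = std::vector<SpanCommand>;

// Screen writer: decode one scanline's commands into pixels, using a z-buffer
// only for the spans that still need a per-pixel depth comparison.
void writeScanline(const ScanlineBuffer& cmds, uint32_t* pixels, float* zbuf) {
    for (const SpanCommand& c : cmds) {
        float z = c.z0;
        for (uint16_t x = c.xStart; x < c.xEnd; ++x, z += c.dz) {
            if (c.op == Op::SolidFill) {
                pixels[x] = c.color;            // static wedge: no depth test needed
            } else if (z < zbuf[x]) {           // ZFill: depth-tested write
                zbuf[x] = z;
                pixels[x] = c.color;
            }
        }
    }
}
```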

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention concerns a method and apparatus for graphics processing. One embodiment of the invention is a dynamic visible surface determination method that reduces the number of geometric entities to be rendered. This is done by determining the visibility of the nodes in a binary space partitioning (BSP) tree. Another embodiment of the invention generates bounding boxes for the nodes of a BSP tree. Yet another embodiment of the invention performs a poly differencing process: it receives information relating to overlapping polygons and generates data structures representing non-overlapping polygons. Still another embodiment of the invention performs an abstract rendering process. For each scanline of a display device, one embodiment of this rendering process generates commands to represent each of the static and non-static data structures on that scanline, and stores these commands in a command buffer for that scanline.
PCT/US1998/005694 1997-03-21 1998-03-23 Method and apparatus for graphics processing Ceased WO1998043208A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU68678/98A AU6867898A (en) 1997-03-21 1998-03-23 Method and apparatus for graphics processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US82299897A 1997-03-21 1997-03-21
US08/822,998 1997-03-21

Publications (2)

Publication Number Publication Date
WO1998043208A2 true WO1998043208A2 (fr) 1998-10-01
WO1998043208A3 WO1998043208A3 (fr) 1999-01-14

Family

ID=25237527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/005694 Ceased WO1998043208A2 (fr) Method and apparatus for graphics processing

Country Status (2)

Country Link
AU (1) AU6867898A (fr)
WO (1) WO1998043208A2 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2352951A (en) * 1999-03-03 2001-02-07 Canon Kk Computer graphics apparatus using recursive partitioning
CN110838161A (zh) * 2019-10-30 2020-02-25 西安恒歌数码科技有限责任公司 一种osg大批量图形节点在三维场景中聚合的方法
EP4181402A1 (fr) * 2021-11-12 2023-05-17 Rockwell Collins, Inc. Conversion de zones remplies en vecteurs codés par longueur de course
US11748923B2 (en) 2021-11-12 2023-09-05 Rockwell Collins, Inc. System and method for providing more readable font characters in size adjusting avionics charts
US11842429B2 (en) 2021-11-12 2023-12-12 Rockwell Collins, Inc. System and method for machine code subroutine creation and execution with indeterminate addresses
US11915389B2 (en) 2021-11-12 2024-02-27 Rockwell Collins, Inc. System and method for recreating image with repeating patterns of graphical image file to reduce storage space
US11954770B2 (en) 2021-11-12 2024-04-09 Rockwell Collins, Inc. System and method for recreating graphical image using character recognition to reduce storage space
WO2024086541A1 (fr) * 2022-10-19 2024-04-25 Qualcomm Incorporated Codage de représentation virtuelle dans des descriptions de scène
US12002369B2 (en) 2021-11-12 2024-06-04 Rockwell Collins, Inc. Graphical user interface (GUI) for selection and display of enroute charts in an avionics chart display system
US12306007B2 (en) 2021-11-12 2025-05-20 Rockwell Collins, Inc. System and method for chart thumbnail image generation
US12304648B2 (en) 2021-11-12 2025-05-20 Rockwell Collins, Inc. System and method for separating avionics charts into a plurality of display panels

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11854110B2 (en) 2021-11-12 2023-12-26 Rockwell Collins, Inc. System and method for determining geographic information of airport terminal chart and converting graphical image file to hardware directives for display unit

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4658247A (en) * 1984-07-30 1987-04-14 Cornell Research Foundation, Inc. Pipelined, line buffered real-time color graphics display system
US5613048A (en) * 1993-08-03 1997-03-18 Apple Computer, Inc. Three-dimensional image synthesis using view interpolation
AUPM704394A0 (en) * 1994-07-25 1994-08-18 Canon Information Systems Research Australia Pty Ltd Optimization method for the efficient production of images
US5572634A (en) * 1994-10-26 1996-11-05 Silicon Engines, Inc. Method and apparatus for spatial simulation acceleration

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2352951A (en) * 1999-03-03 2001-02-07 Canon Kk Computer graphics apparatus using recursive partitioning
GB2352951B (en) * 1999-03-03 2003-04-30 Canon Kk Computer graphics apparatus
US6580426B1 (en) 1999-03-03 2003-06-17 Canon Kabushiki Kaisha Computer graphics apparatus for processing of data defining a three-dimensional computer model to partition the three-dimensional space into a plurality of sectors
CN110838161A (zh) * 2019-10-30 2020-02-25 西安恒歌数码科技有限责任公司 一种osg大批量图形节点在三维场景中聚合的方法
CN110838161B (zh) * 2019-10-30 2024-01-30 西安恒歌数码科技有限责任公司 一种osg大批量图形节点在三维场景中聚合的方法
US11842429B2 (en) 2021-11-12 2023-12-12 Rockwell Collins, Inc. System and method for machine code subroutine creation and execution with indeterminate addresses
US11748923B2 (en) 2021-11-12 2023-09-05 Rockwell Collins, Inc. System and method for providing more readable font characters in size adjusting avionics charts
US11887222B2 (en) 2021-11-12 2024-01-30 Rockwell Collins, Inc. Conversion of filled areas to run length encoded vectors
EP4181402A1 (fr) * 2021-11-12 2023-05-17 Rockwell Collins, Inc. Conversion de zones remplies en vecteurs codés par longueur de course
US11915389B2 (en) 2021-11-12 2024-02-27 Rockwell Collins, Inc. System and method for recreating image with repeating patterns of graphical image file to reduce storage space
US11954770B2 (en) 2021-11-12 2024-04-09 Rockwell Collins, Inc. System and method for recreating graphical image using character recognition to reduce storage space
US12002369B2 (en) 2021-11-12 2024-06-04 Rockwell Collins, Inc. Graphical user interface (GUI) for selection and display of enroute charts in an avionics chart display system
US12306007B2 (en) 2021-11-12 2025-05-20 Rockwell Collins, Inc. System and method for chart thumbnail image generation
US12304648B2 (en) 2021-11-12 2025-05-20 Rockwell Collins, Inc. System and method for separating avionics charts into a plurality of display panels
WO2024086541A1 (fr) * 2022-10-19 2024-04-25 Qualcomm Incorporated Codage de représentation virtuelle dans des descriptions de scène

Also Published As

Publication number Publication date
WO1998043208A3 (fr) 1999-01-14
AU6867898A (en) 1998-10-20

Similar Documents

Publication Publication Date Title
US5579454A (en) Three dimensional graphics processing with pre-sorting of surface portions
US6731304B2 (en) Using ancillary geometry for visibility determination
US6215503B1 (en) Image generator and method for resolving non-binary cyclic occlusions with image compositing operations
CA2225017C (fr) Methode et appareil d'affichage rapide d'images de structures complexes creees par ordinateur
Zhang et al. Visibility culling using hierarchical occlusion maps
US6266064B1 (en) Coherent visibility sorting and occlusion cycle detection for dynamic aggregate geometry
US9030411B2 (en) Apparatus and methods for haptic rendering using a haptic camera view
US6373485B2 (en) Mitigating the effects of object approximations
Naylor Interactive solid geometry via partitioning trees
US8698799B2 (en) Method and apparatus for rendering graphics using soft occlusion
US5684936A (en) Method and apparatus for generating a rendering order for use in rendering images using a pre-processing technique
Snyder et al. Visibility sorting and compositing without splitting for image layer decompositions
US20060256112A1 (en) Statistical rendering acceleration
US6734853B2 (en) Method of using view frustrum culling for scaleable collision detection
US20080225048A1 (en) Culling occlusions when rendering graphics on computers
WO2001048697A1 (fr) Procede de simplification de scene statique hierarchique et gestion de polygones dans un modele 3d
WO1998043208A2 (fr) Procede et dispositif de traitement d'elements graphiques
US6664957B1 (en) Apparatus and method for three-dimensional graphics drawing through occlusion culling
US5666474A (en) Image processing
US5844562A (en) Method for hi-fidelity graphic rendering of three-dimensional objects
US7050053B2 (en) Geometric folding for cone-tree data compression
US5872570A (en) Method and apparatus for use in generating a rendering order for use in rendering images
KR100372901B1 (ko) 지.에프-버퍼를 이용하여 광선추적법의 속도를 개선하는방법
JP2952585B1 (ja) 画像生成方法
Snyder et al. Visibility sorting and compositing for image-based rendering

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998545890

Format of ref document f/p: F