
WO2001037216A2 - Intelligent three-dimensional computer graphics system and method - Google Patents


Info

Publication number
WO2001037216A2
Authority
WO
WIPO (PCT)
Application number
PCT/US2000/031320
Other languages
French (fr)
Other versions
WO2001037216A3 (en)
Inventor
Stephen Hartford
Kenneth Turcotte
Daniel Kaye
Michael R. Moore
Original Assignee
Stephen Hartford
Kenneth Turcotte
Daniel Kaye
Michael R. Moore
Priority date
November 15, 1999 (U.S. Provisional Application No. 60/165,513)
Application filed by Stephen Hartford, Kenneth Turcotte, Daniel Kaye, and Michael R. Moore
Priority to AU16100/01A (published as AU1610001A)
Publication of WO2001037216A2
Publication of WO2001037216A3


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The intelligent objects 700 comprise property sets 706 (FIG. 7), data subsets 716 (FIG. 7), and intelligent object processing code 718 (FIG. 7).
  • Block 906 depicts the step of presenting the user with a list of modifiable attributes and associated allowable ranges 712 (FIG. 7) from the intelligent object 700 (FIG. 7) identified in block 904. For example, the user may be presented with attributes such as number of floors, year of production, architectural style, or type of neighborhood.
  • Block 908 depicts the step of receiving a user-selected attribute and modification parameter 714 (FIG. 7). For example, the user may select the attribute representing the number of floors in a building and a modification parameter representing a request for five floors.
  • Block 910 depicts the step of identifying the data subsets 716 (FIG. 7) according to the user input received in block 908.
  • The intelligent object 700 may identify, for example, data subsets that contain the underlying fundamental representation constructs associated with a ground floor, four intermediate floors, and a roof.
  • Block 912 depicts the step of identifying the necessary utilizations of the data subsets 716 (FIG. 7) that were identified in block 910, based on the user-selected attributes received in block 908.
  • The intelligent object processing code 718 (FIG. 7) is configured to identify utilizations of the data subsets 716 (FIG. 7) according to reality-based, or real-world, constraints upon the entity to produce a reality-based representation thereof.
  • The intelligent object processing code 718 (FIG. 7) of the intelligent object 700 (FIG. 7) provides instructions to the conventional geometric representation and manipulation module 404 (FIG. 4) as to the necessary utilizations, identified in block 912, of the underlying fundamental representation constructs of the data subsets 716 (FIG. 7).
  • The conventional geometric representation and manipulation module 404 (FIG. 4) executes the instructions 720 (FIG. 7) provided by the intelligent object 700 (FIG. 7), thereby completing an iteration of the 3D graphical modeling method, which is a component of the complete entity representation creation method of the invention.
  • FIG. 10 is a flow chart depicting details of the animating step 804 of FIG. 8.
  • The animating process of the present invention may sequentially follow the modeling process depicted in FIG. 9, and comprises some steps analogous to those depicted in FIG. 9, but in relation to animating as opposed to modeling.
  • Block 1002 depicts the step of receiving a graphical model, i.e. a 3D mathematical representation of an entity.
  • The animation module 308 receives the 3D mathematical representation 312 (FIG. 3) from the model creation module 306 (FIG. 3).
  • The 3D mathematical representation 312 may be stored in various sources, including local memory 210 (FIG. 2) or remote memory if the user computer 200 (FIG. 2) is networked.
  • Block 1004 depicts the step of identifying an intelligent object 700 (FIG. 7) associated with the 3D mathematical representation 312 received in block 1002. Note that the appropriate intelligent object 700 may have been previously identified during the modeling portion 904 (FIG. 9) of the method. If not previously identified, then the intelligent abstract engine 500 (FIG. 5) matches the received 3D mathematical representation 312 of an entity with an appropriate intelligent object 226 (FIG. 2) from the object library 224 (FIG. 2).
  • Block 1006 depicts the step of presenting the user with a list of modifiable attributes and associated allowable ranges 712 (FIG. 7) from the intelligent object 700 (FIG. 7) identified in block 1004.
  • For example, the user may be presented with attributes such as aging of the building, demolition of the building, or other attributes that may affect the animating of the entity.
  • Block 1008 depicts the step of receiving the user-selected attribute and modification parameter 714 (FIG. 7). For example, the user may select the attribute representing the aging of the building and a modification parameter representing a request for a particular time period of aging, such as 20 years. Similarly, if the user-selected entity were a car, the user might select the rotation of the wheels as the attribute and enter a parameter associated with the rate of wheel rotation (a per-frame sketch of this example appears after this list).
  • Block 1010 depicts the step of identifying data subsets 716 (FIG. 7) according to the user input received in block 1008.
  • The intelligent object 700 may identify data subsets that contain the underlying fundamental representation constructs associated with the chosen period of aging, such as frame interpolation processes providing for the apparent aging of the building material, a change of facade style associated with the chosen period of aging, and the like.
  • Block 1012 depicts the step of identifying the necessary utilizations of the data subsets 716 (FIG. 7) that were identified in block 1010, based on the user-selected attributes received in block 1008.
  • The intelligent object processing code 718 (FIG. 7) is configured to identify utilizations of the data subsets 716 (FIG. 7) according to reality-based, or real-world, constraints upon the entity to produce a reality-based representation thereof.
  • The intelligent object processing code 718 (FIG. 7) of the intelligent object 700 (FIG. 7) provides instructions to the conventional frame representation and manipulation module 504 (FIG. 5) as to the necessary utilizations, identified in block 1012, of the underlying fundamental representation constructs of the data subsets 716 (FIG. 7).
  • The conventional frame representation and manipulation module 504 (FIG. 5) executes the instructions 720 (FIG. 7) provided by the intelligent object 700 (FIG. 7), thereby completing an iteration of the 3D graphical animating method, which is a component of the complete entity representation creation method of the invention.
  • The user may iterate the method illustrated in FIG. 10. Therefore, the steps 1002-1014 presented above may be repeated to the user's satisfaction before continuing to the rendering method of block 806 (FIG. 8). Alternatively, users may continue to the rendering method described below or return to the modeling method.
  • FIG. 11 is a flow chart depicting details of the rendering step of block 806 (FIG. 8).
  • The method of FIG. 11 may sequentially follow the methods depicted in FIGS. 9 and 10, and comprises some steps analogous to those depicted in FIG. 10, but in relation to rendering as opposed to animating.
  • Block 1102 depicts the step of receiving an animated model, i.e. the frame series 314 of the selected entity.
  • The frame series 314 (FIG. 3) is received from the animation module 308 (FIG. 3).
  • The frame series 314 may be stored in various sources, including local memory 210 (FIG. 2) or remote memory if the user computer 200 (FIG. 2) is networked.
  • Block 1104 depicts the step of identifying an intelligent object 700 (FIG. 7) associated with the frame series 314 received in block 1102. The appropriate intelligent object 700 may have been previously identified during the modeling and/or animating portions of the method. If not previously identified, the intelligent abstract engine 600 (FIG. 6) matches the received frame series of an entity with an appropriate intelligent object 226 (FIG. 2) from the object library 224 (FIG. 2).
  • Block 1106 depicts the step of presenting the user with a list of modifiable attributes and associated allowable ranges 712 (FIG. 7) from the intelligent object 700 (FIG. 7) identified in block 1104.
  • For example, the user may be presented with property set attributes such as year of production, architectural style, type of neighborhood, or other attributes that may affect the rendering of the entity.
  • Block 1108 depicts the step of receiving a user-selected attribute and modification parameter 714 (FIG. 7). For example, the user may select the attribute representing the year of production of the building and a modification parameter representing a request for a particular year of production.
  • Block 1110 depicts the step of identifying rendering data subsets 716 (FIG. 7) according to the user input received in block 1108.
  • For example, the intelligent object 700 may identify data subsets that contain the underlying fundamental representation constructs associated with the chosen year, such as digitized bitmaps or procedural algorithms representing a type of building material, the apparent age of the building material, a type of facade style associated with the chosen year of production, and the like.
  • Block 1112 depicts the step of identifying the necessary utilizations of the rendering data subsets 716 (FIG. 7) that were identified in block 1110, based on the user-selected attributes received in block 1108.
  • The intelligent object processing code 718 (FIG. 7) is configured to identify utilizations of the rendering data subsets 716 (FIG. 7) according to reality-based, or real-world, constraints upon the entity to produce a reality-based representation.
  • The intelligent object processing code 718 (FIG. 7) of the intelligent object 700 (FIG. 7) provides instructions to the conventional surface representation and manipulation module 604 (FIG. 6) as to the necessary utilizations, identified in block 1112, of the underlying fundamental representation constructs of the rendering data subsets 716 (FIG. 7).
  • The conventional surface representation and manipulation module 604 (FIG. 6) executes the instructions 720 (FIG. 7) provided by the intelligent object 700 (FIG. 7), thereby completing an iteration of the 3D graphical rendering method, which is a component of the complete entity representation creation method of the invention.
  • The user may iterate the method illustrated in FIG. 11; therefore, the steps 1102-1114 presented above may be repeated to the user's satisfaction before returning to the modeling or animating portions of the method.
  • The intelligent three-dimensional computer graphics system and method described herein may be used in any application utilizing computer graphics, including video production, computer-aided design, and manufacturing. Therefore, it is not intended to limit the invention to practice in a particular operating environment.
  • Intelligent objects could contain large amounts of "intelligence," e.g., industry specifications and the like. Therefore, it is not intended to limit the intelligent objects to the attributes exemplified in this description.
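As a hedged illustration of the car example mentioned in the FIG. 10 discussion above, the following Python sketch converts a user-supplied wheel rotation rate into per-frame wheel angles; the function name, parameters, and assumed frame rate are illustrative, not details from the patent.

    def wheel_angle_frames(rate_hz, n_frames, fps=24):
        """Per-frame wheel angles in degrees for a wheel spinning at
        rate_hz revolutions per second, sampled at an assumed fps;
        a hypothetical animating data subset for a car entity."""
        return [(360.0 * rate_hz * i / fps) % 360.0 for i in range(n_frames)]

    print(wheel_angle_frames(rate_hz=1.0, n_frames=4))
    # [0.0, 15.0, 30.0, 45.0]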

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system and method are disclosed for creating reality-based computer animation without requiring the user to directly manipulate the fundamental constructs of the underlying graphical model. A graphics creation engine is configured to receive from a user a selected entity for representation. The graphics creation engine is further configured to interact with a library of objects to present modifiable entity attributes and associated allowable values from an object to the user, and to receive user selections based thereon. The object is capable of identifying appropriate data subsets, which contain fundamental constructs required to produce graphics models, animations, and renderings of the selected entity attributes. The object is further configured with processing code capable of generating instructions to utilize the data subsets according to the user selections, thereby creating the entity representation.

Description

INTELLIGENT THREE-DIMENSIONAL COMPUTER GRAPHICS SYSTEM AND METHOD
CROSS-REFERENCE TO RELATED APPLICATION
The present application claims the benefit of U.S. Provisional Patent Application No. 60/165,513, filed on November 15, 1999 and entitled "Intelligent Three-Dimensional Computer Graphics System and Method," which is hereby incorporated by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to computer graphics applications, and more particularly to a system and method for implementing intelligent three-dimensional computer graphics modeling, rendering and animating.
2. Background of the Prior Art
The increasing popularity of video productions that include 3D animation and the diversity of commercial and personal uses of 3D graphics in general, coupled with the increasing availability of computer processing power at declining cost, have resulted in demand for 3D computer graphics tools that make realistic 3D images easy to create. These tools typically facilitate model creation, rendering, and animation.
Even with the advent of advanced computer animation systems, artists using conventional systems are still required to create 3D graphical models through the use of geometrical representations of an entity. The creation of 3D graphical models typically requires the tedious manipulation of fundamental geometric constructs, such as points, lines and planes, as well as the manipulation of higher-level geometric constructs, such as polygons, simple and complex curves, and surfaces. In conventional systems, model changes desired during animating or rendering an entity require the user to return to the geometric representation of the entity and to modify the underlying geometry. This can be a meticulous, labor-intensive, and therefore expensive, operation.
Some conventional 3D computer graphics tools provide the user certain operations that assist in the manipulation of the model's fundamental geometry, e.g., a "resize" operation. FIG. 1A illustrates an example of such an operation. In this example, the user begins with the image 100 that includes a building image 102 and a person image 104 sized in proportion to the size of the building image 102. Assume, for example, the user desires to modify the building image 102 by increasing its height. The user may choose to simply utilize a conventional "resize" operation, which merely changes the distances between the various vertices that comprise the building image 102. The resulting image 110 is depicted on the right, where the operation merely manipulated the geometric constructs of the model, without realistic constraints or regard for the type of entity (i.e., a building) being modeled. As shown, the resulting building image 112 is grossly out of proportion with the person image 104 because the building image 102 was made taller without adding additional stories. Hence, conventional systems and methods for creating and modifying a 3D graphical model of an entity are generally limited in that they do not account for the type of entity being modeled, thus resulting in the creation of images that are not realistic-looking, are difficult to modify, or both.
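To make the limitation concrete, the following minimal Python sketch shows what such a conventional "resize" amounts to; all names and coordinates are illustrative assumptions, not details from the patent.

    # Illustrative sketch of a conventional "resize": scale vertex
    # coordinates, with no knowledge of the entity being modeled.

    def naive_resize_height(vertices, scale):
        """Scale the y (height) coordinate of every (x, y, z) vertex."""
        return [(x, y * scale, z) for (x, y, z) in vertices]

    # A simple two-story building facade, 6 units tall:
    facade = [(0, 0, 0), (10, 0, 0), (10, 6, 0), (0, 6, 0)]
    stretched = naive_resize_height(facade, 2.0)
    print(stretched)  # [(0, 0.0, 0), (10, 0.0, 0), (10, 12.0, 0), (0, 12.0, 0)]
    # The facade is now 12 units tall but still contains two (stretched)
    # stories, which is exactly the distortion shown in FIG. 1A.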
There is a need for an "intelligent" 3D entity representation, or computer graphics, creation system and method that does not require the user to directly manipulate the fundamental constructs of the graphical model. An additional need exists for a 3D computer graphics system and method that creates realistic-looking 3D graphical images that are easy for a user, even a lay user, to create and modify.
SUMMARY OF THE INVENTION
The present invention is directed to a system and method for creating entity representations on a computer or graphics device. The invention provides for creation and modification of a reality-based three-dimensional computer graphics model, animation, and rendering through the use of intelligent objects, without requiring a user to manually manipulate the fundamental constructs of the graphics model.
The system of the invention comprises a graphics creation engine operative in modeling, animating, and rendering stages of the 3D computer animation process. The graphics creation engine comprises an intelligent abstract engine operative to receive user inputs through a graphical user interface. The intelligent abstract engine accesses and functions in conjunction with a plurality of intelligent objects, which provide instructions to a conventional computer graphics representation and manipulation module. The graphics representation and manipulation module manipulates the fundamental constructs representing the modeled entity in various stages of the animation process. The modeled entity is available for display to a user on a video display monitor.
Generally, "degrees of freedom" describes the number of ways in which a body may move or in which a dynamic system may change. For example, in 3D space, an entity can move up or down, left or right, and forward and backward. In the context of, and in describing, the present invention, "degrees of freedom" additionally describes the number of ways in which a graphical model may change, or the features available to 3D computer graphic representations of entities. A computer graphic representation of a building may have features indicating, for example, the age, condition, and architectural style of a building. The intelligent objects generally limit the degrees of freedom, or features, available to the user during creation and modification of a graphical model, animation, and rendering, and thereby provide for creation and modifications based on real-world or other predefined constraints. Each of the intelligent objects comprises property sets, data subsets, and intelligent object processing code. Upon receiving a user selection of an entity from a predefined set of entities, an intelligent object associated with the selected entity operates to present the user with a plurality of property sets. Property sets include a list of modifiable entity attributes along with allowable ranges of values, or choices, associated with each of the attributes. The ranges of values presented to the user limit the user's ability to modify the entity model, thereby maintaining a reality-based model.
Upon reception of a user selection of an entity attribute and modification parameters within the allowable range of values for that attribute, the intelligent object processing code identifies appropriate data subsets according to the user selection. The data subsets comprise the fundamental constructs for graphical representation of a three-dimensional entity and include numeric data and embedded algorithms. The intelligent object processing code identifies utilizations of the data subsets according to the user inputs and provides instructions to the graphics representation and manipulation module. The modification of the constructs occurs within the graphics representation and manipulation module, which executes the instructions received from the intelligent object, and produces for display a graphical representation of an entity.
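A minimal sketch of this flow follows, under the assumption that instructions are simple (operation, data-subset) pairs; the function and key names are illustrative only.

    def floors_to_instructions(n_floors, data_subsets):
        """Hypothetical processing code for a 'number_of_floors' selection:
        validate the parameter, then emit instructions naming the data
        subsets the representation and manipulation module should use."""
        if not 1 <= n_floors <= 120:  # enforce the attribute's allowable range
            raise ValueError("number_of_floors outside the allowable range")
        # Per the patent's example, five floors yield a ground floor,
        # four intermediate floors, and a roof.
        return ([("place", data_subsets["ground_floor"])]
                + [("stack", data_subsets["intermediate_floor"])] * (n_floors - 1)
                + [("cap", data_subsets["roof"])])

    print(floors_to_instructions(5, {"ground_floor": "gf_geometry",
                                     "intermediate_floor": "if_geometry",
                                     "roof": "roof_geometry"}))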
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
FIG. 1B illustrates a corollary example to FIG. 1A, wherein the user is implementing the system and method of the present invention in a simplified manner. In the example depicted in FIG. 1B, the user begins with the image 120, including a building image 122 and a person image 124. Through use of intelligent objects, the user may easily and realistically increase the height of the building image 122. As shown in the image 130, the invention manipulates the fundamental geometry of the model by adding two additional stories to the building 122 to produce building 132, thus maintaining a realistic proportion between the images of the building 132 and of the person 124. Accordingly, the present invention provides a system and method by which a user may easily manipulate a graphical image based on the type of entity being modeled.
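The FIG. 1B behavior can be sketched as follows; the constant story height and the rounding policy are assumptions for illustration, not values from the patent.

    STORY_HEIGHT = 3.0  # assumed uniform story height, in scene units

    def intelligent_height_change(current_floors, requested_height):
        """Translate a height request into a whole number of stories, so
        the model grows by adding floors rather than stretching geometry."""
        added = round(requested_height / STORY_HEIGHT) - current_floors
        return current_floors + max(0, added)

    # Doubling the height of a 2-story, 6-unit building adds two stories:
    print(intelligent_height_change(2, 12.0))  # -> 4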
FIG. 2 depicts an exemplary computer device 200 for implementing the entity representation creation techniques embodied in the present invention. Computer device 200 includes a central processing unit (CPU) 202, such as an Intel Pentium microprocessor, for executing computer program instructions. A video display 204, which may comprise a conventional CRT or LCD monitor, is coupled to a video display interface 206 and is configured to display images and text to a user. The video display interface 206 may comprise any one of a number of commercially-available video display cards, or may comprise circuitry incorporated into a computer motherboard or CPU 202. Input/output devices 208, which may variously comprise printers, keyboards, mice, trackballs, and the like, are operative to receive information from or convey information to a user or another device.
A memory 210, which may include one or a combination of random access memory (RAM), read-only memory (ROM), or non-volatile storage devices such as magnetic hard disks, CD-ROMs, and magneto-optical drives, stores program instructions, files, and other data. Finally, a communications interface 212, such as a modem or Ethernet card, may be provided to enable communication with one or more remote devices over a network (not shown). The various components of the computer device 200 are coupled in communication by at least one bus 214.
As depicted in FIG. 2, the memory 210 stores a graphics creation engine 220 and an operating system 222. The operating system 222 allocates memory, manages communications between computer device 200 components, and performs other low-level operations. Additionally, the memory 210 may store a library 224 of intelligent objects 226 operable in concert with the graphics creation engine 220.
The entity representation creation techniques of the present invention are embodied in the graphics creation engine 220 and the intelligent objects 226. Generally, the graphics creation engine 220 is operative to allow a user to efficiently create and/or modify a 3D computer graphics model, animation, and rendering according to "real world" limitations through use of intelligent objects 226, which are described below. That is, the graphics creation engine 220 employs the intelligent objects 226 to create and/or modify 3D models according to the type of entity being modeled. The user thereby avoids the necessity of manually manipulating the fundamental constructs, such as the various points, polygons and surfaces, underlying the 3D graphical model.
FIG. 3 is a block diagram illustrating details of one embodiment of the graphics creation engine 220. The graphics creation engine 220 is configured to receive user inputs 302 from the input/output devices 208 (FIG. 2) and to output a two-dimensional (2D) graphical representation of a frame series 316 to the video display 204 via the interface 206 (FIG. 2). The graphics creation engine 220 conceptually, or logically, comprises a model creation module 306, an animation module 308, and a rendering module 310. In actual practice, producing 3D animation involves modeling, animating, and rendering; in some applications these activities may be highly intertwined, but they are separated in this specification for conceptual convenience and clarity. Those skilled in the art will appreciate that implementation of the system and method herein described is not limited to a particular sequence of activities, but that the various modeling, animating, and rendering activities described herein may be performed in alternative sequences to achieve the user's desired result.
As depicted in FIG. 3, the graphics creation engine 220 receives the user inputs 302, which the modules 306, 308, and 310 utilize in creating the 2D graphical representation of a frame series 316. The model creation module 306 generally creates a 3D mathematical, or geometric, representation 312 of an entity, such as, for example, a building. In particular, the model creation module 306 manipulates the fundamental geometric constructs, e.g., points, lines, and polygons, to provide, among other features, the shape and size of a model of an entity. Details of the model creation module 306 are described below with reference to FIG. 4.
The animation module 308 generally creates a series of still images, or frames, of an entity that, when presented in rapid succession, provide the appearance that the entity is continuously moving. In particular, the animation module 308 manipulates the fundamental animation constructs, e.g., keyframes, interpolates intermediate frames, and applies kinematics and constraints to create the frame series 314. Details of the animation module 308 are discussed below with reference to FIG. 5. The rendering module 310 generally creates a 2D pictorial representation of the frame series 316 of the modeled entity to provide a visual representation of the model to the user's video display 204 (FIG. 2). It is within the rendering module 310 that fundamental rendering constructs, e.g., the camera (or point of view), the lights, the surface characteristics, the texture mapping, and the atmospheric effects functions, are manipulated to provide the visual representation, or "picture", of the model. An algorithm within the rendering module 310 renders the frame series by calculating the desired visual representation of the frames based on the user inputs 302 or certain default settings. The video display interface 206 (FIG. 2) converts the 2D graphical representation of the frame series 316 to electronic signals required by the display 204 (FIG. 2) to display an animated model. Details of the rendering module 310 are discussed below with reference to FIG. 6.
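Conceptually, the three modules chain as in the Python sketch below; these stubs illustrate only the data flow of FIG. 3, and every name and return value is an assumption for illustration.

    def model_creation_module(inputs):
        """Produce the 3D mathematical representation 312 (stubbed)."""
        return {"entity": inputs["entity"], "geometry": "stub"}

    def animation_module(inputs, model):
        """Produce the frame series 314: stills shown in rapid succession."""
        return [model] * inputs.get("frames", 1)

    def rendering_module(inputs, frames):
        """Produce the 2D graphical representation 316 of the frame series."""
        return [f"2D picture of {f['entity']}" for f in frames]

    def graphics_creation_engine(user_inputs):
        model = model_creation_module(user_inputs)
        frames = animation_module(user_inputs, model)
        return rendering_module(user_inputs, frames)

    print(graphics_creation_engine({"entity": "building", "frames": 3}))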
FIG. 4 is a block diagram illustrating details of a typical model creation module 306. The model creation module 306 is generally configured to receive user inputs 302 from the input/output devices 208 (FIG. 2) and to provide the 3D mathematical representation 312 of an entity to the animation module 308 (FIG. 3). The model creation module 306 conceptually, or logically, comprises an intelligent abstract modeling engine 400 and a geometric representation and manipulation module 404.
The entity representation creation techniques of the present invention are embodied in the intelligent abstract modeling engine 400 and the intelligent objects 226 of the object library 224, as well as other intelligent abstract engines described below. Based on the user inputs 302 received through a graphical user interface (GUI) 406, the intelligent abstract modeling engine 400 interacts with the intelligent object library 224 through an intelligent object interface 408. The intelligent abstract modeling engine 400 passes model generation instructions 402 from the intelligent objects 226 to the geometric representation and manipulation module 404. The model generation instructions 402 instruct the geometric representation and manipulation module 404 as to the fundamental geometric constructs, that is, the various points and polygons, required to obtain a model consistent with the user inputs 302, so that the geometric representation and manipulation module 404 may generate the 3D mathematical representation 312 of the associated entity. The geometric representation and manipulation module 404 is configured to interact with the animation module 308 (FIG. 3) and provide the necessary data thereto.
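On the receiving side, a module like 404 might execute such instructions as in this sketch; the uniform block height and the instruction format are assumptions carried over from the earlier sketches.

    BLOCK_HEIGHT = 3.0  # assumed height of each stacked geometry block

    def execute_model_instructions(instructions):
        """Hypothetical counterpart of module 404: place each referenced
        geometry block at the current height, yielding a simple 3D model."""
        height, parts = 0.0, []
        for _op, subset in instructions:
            parts.append({"geometry": subset, "base_y": height})
            height += BLOCK_HEIGHT
        return {"parts": parts, "total_height": height}

    model = execute_model_instructions(
        [("place", "gf_geometry"), ("stack", "if_geometry"), ("cap", "roof_geometry")])
    print(model["total_height"])  # 9.0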
FIG. 5 is a block diagram illustrating details of a typical animation module 308. The animation module 308 is generally configured to receive the user inputs 302 from the input/output devices 208 (FIG. 2) and the 3D mathematical representation 312 of an entity from the geometric representation and manipulation module 404 (FIG. 4) of the model creation module 306 (FIGS. 3 and 4). The animation module 308 is configured to generate the frame series 314 of the modeled entity, and to provide the frame series 314 to the rendering module 310 (FIG. 3). As shown, the animation module 308 conceptually, or logically, comprises an intelligent abstract animating engine 500 and a frame representation and manipulation module 504.
The entity representation creation techniques of the present invention are also embodied in the intelligent abstract animating engine 500, as well as the other intelligent abstract engines described herein. Based on the user inputs 302 received through a graphical user interface 506, the intelligent abstract animating engine 500 interacts with the intelligent object library 224 through an intelligent object interface 508. The intelligent abstract animating engine 500 passes frame generation instructions 502 from the intelligent objects 226 to the frame representation and manipulation module 504. The frame generation instructions 502 instruct the frame representation and manipulation module 504 as to the fundamental animating constructs required to obtain a frame series consistent with the user inputs 302 or certain defaults. The frame representation and manipulation module 504 modifies these constructs accordingly to define the frame series representation 314 of the associated entity. The frame representation and manipulation module 504 is capable of interacting with the rendering module 310 (FIG. 3) and providing necessary data thereto.
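One common fundamental animating construct is keyframe interpolation; the sketch below linearly interpolates numeric model parameters between two keyframes. The names and the choice of linear interpolation are illustrative assumptions.

    def interpolate_frames(key_a, key_b, n_between):
        """Generate a frame series from two keyframes by linearly
        interpolating every numeric parameter; both keyframes included."""
        frames = []
        for i in range(n_between + 2):
            t = i / (n_between + 1)
            frames.append({k: key_a[k] + t * (key_b[k] - key_a[k]) for k in key_a})
        return frames

    # Two keyframes of a building's apparent age, three in-between frames:
    series = interpolate_frames({"age_years": 0.0}, {"age_years": 20.0}, 3)
    print([f["age_years"] for f in series])  # [0.0, 5.0, 10.0, 15.0, 20.0]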
FIG. 6 is a block diagram illustrating details of a typical rendering module 310. The rendering module 310 is generally configured to receive the user inputs 302 from the input/output devices 208 (FIG. 2) and the frame series 314 from the frame representation and manipulation module 504 (FIG. 5). The rendering module 310 is also configured to provide 2D graphical representations of the frame series 316 of the modeled entity to the user's video display 204 via the interface 206 (FIG. 2). The rendering module 310 conceptually, or logically, comprises an intelligent abstract rendering engine 600 and a surface representation and manipulation module 604.
The entity representation creation techniques of the present invention are also embodied in the intelligent abstract rendering engine 600, as well as other intelligent abstract engines described herein. Based on the user inputs 302 received through a graphical user interface 606, the intelligent abstract rendering engine 600 interacts with the intelligent object library 224 through an intelligent object interface 608. The intelligent abstract rendering engine 600 passes surface generation instructions 602 from the intelligent objects 226 to the surface representation and manipulation module 604. The surface generation instructions 602 instruct the surface representation and manipulation module 604 as to the fundamental rendering constructs required to obtain a 2D graphical representation consistent with the user inputs 302, so that the surface representation and manipulation module 604 may modify these constructs accordingly to define the 2D graphical representation of the frame series 316 of the associated entity.
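For instance, a "year of production" selection might map to rendering data subsets as in this sketch; the era table and texture names are placeholders, not data from the patent.

    # Hypothetical mapping from year of production to a surface data subset.
    FACADE_BY_ERA = [
        (1900, "brick_victorian.png"),
        (1950, "painted_stucco.png"),
        (2000, "glass_curtain_wall.png"),
    ]

    def surface_instructions(year):
        """Pick the first era whose upper bound covers the requested year."""
        for upper_year, texture in FACADE_BY_ERA:
            if year <= upper_year:
                return [("apply_texture", texture)]
        return [("apply_texture", FACADE_BY_ERA[-1][1])]

    print(surface_instructions(1925))  # [('apply_texture', 'painted_stucco.png')]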
Those skilled in the art will appreciate that implementation of the system and method herein described is not limited to a plurality of intelligent abstract engines or to a plurality of representation and manipulation modules, but that the various operations involved in intelligent modeling, rendering and animating may be resident in a single intelligent abstract engine and a single representation and manipulation module.
FIG. 7 is a block diagram illustrating details of one embodiment of the intelligent object 700, which represents any of the intelligent objects 226 (FIG. 2), and typical interaction between the intelligent object 700 and a user through the intelligent object interface 608 (not shown, see FIG. 6) and graphical user interfaces 406 (FIG. 4), 506 (FIG. 5), and 606 (FIG. 6), depicted as GUI 702. An intelligent object 700 may reside in the memory 210 (FIG. 2). Alternatively, if the computer system 200 (FIG. 2) is a member of a network, the intelligent object 700 may also reside on a memory device separate from the local memory 210, and may be utilized by loading into the local memory 210 via the communication interface 212 (FIG. 2). Each intelligent object 700 is associated with an entity, such as a building, an automobile, a person or the like. Those skilled in the art will appreciate that an intelligent object could be created for virtually any type of entity, real or imagined, for artists may even desire to constrain imagined entities to reality-based constraints.
The intelligent object 700 is called into operation by a user at any of the three stages of the overall 3D computer animation process, i.e., the modeling, animating, or rendering stage. Initially, the GUI 702 invites the user to select an entity to model from a predefined set of entities. Each intelligent object 226 (FIG. 2) is associated with a different entity. The user selects an entity to model by making a selection 704 through the graphical user interface 702. The intelligent object 700 associated with the selected entity then presents the user, through the appropriate interfaces 606 and 608, with a list of modifiable property sets 712 associated with the selected entity. Each property set 706 is a list of modifiable entity attributes 708 and acceptable ranges of values 710, or selections, for that attribute. Modifiable entity attributes 708 for a building may include, for example, the number of floors, the year of production, the architectural style, the type of neighborhood, and the like.
The intelligent object 700 includes property sets 706 that may affect any or all of the three stages of the overall 3D computer animation process. Again, those skilled in the art will appreciate that a number of attributes can be identified for any entity, and the invention is not limited by the examples provided herein. Once the user is presented with the modifiable entity attributes 712, the user may specify modification parameters 714 within the allowable range of values 710 or selections.
The intelligent object 700 further comprises one or more data subsets 716 and intelligent object processing code 718. The data subsets 716 comprise the fundamental modeling, animating and rendering constructs of an entity, and can be conceptualized as the basic "building blocks" utilized to generate a model, animation or rendering. For example, the data subsets 716 associated with a "number of floors" building attribute of a property set 706 may contain all the necessary geometry, surfacing and frame algorithms and/or data for a building ground floor, building intermediate floors, and a roof.
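The following minimal sketch suggests one hypothetical way such data subsets might be laid out; the keys and file names are invented placeholders for geometry, surfacing and frame algorithms and/or data.

```python
# Hypothetical data subsets behind a "number of floors" attribute; the
# file names stand in for geometry, surfacing and frame data.
FLOOR_DATA_SUBSETS = {
    "ground_floor":       {"geometry": "ground.mesh", "surfacing": "lobby.tex"},
    "intermediate_floor": {"geometry": "floor.mesh",  "surfacing": "office.tex"},
    "roof":               {"geometry": "roof.mesh",   "surfacing": "shingle.tex"},
}
```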
The intelligent object processing code 718 then receives, through the appropriate interfaces, the user-selected attributes and modification parameters 714. The code 718 operates by calling the data subsets 716 and providing instructions 720 to the representation and manipulation module 722 as to how the data subsets 716 are to be utilized to correspond with the user inputs.
The intelligent object processing code 718 generally limits the degrees of freedom available to the user, thereby limiting the modifications of the entity representation to certain predefined reality-based or other representations. Furthermore, it permits users to design 3D computer graphical models and animations at a higher level of abstraction through interaction with the intelligent abstract engines 400 (FIG. 4), 500 (FIG. 5), and 600 (FIG. 6), as opposed to designing through direct manual manipulation of the low-level geometric, animation and rendering constructs.
FIG. 8 is a flow chart illustrating one embodiment of a method for implementing the entity representation creation techniques embodied in the present invention. Block 802 represents the modeling process, wherein a user creates the model of an entity, i.e. the user creates a reality-based geometric representation of a 3D entity by manipulation of the fundamental geometric constructs through use of and interaction with the intelligent objects 700 (FIG. 7) described herein. The block 802 is described in more detail below with reference to FIG. 9.
Block 804 represents the animating process, wherein a user creates the animated model of an entity, i.e. the user creates a series of frames of reality-based 3D mathematical representations of the model of block 802 by manipulation of the fundamental animating constructs through use of and interaction with the intelligent objects 700 (FIG. 7). The block 804 is described in more detail below with reference to FIG. 10. Block 806 represents the rendering process, wherein a user creates the rendered model of an entity, i.e. the user creates a reality-based 2D representation of the frame series of the animated model of block 804 by manipulation of the fundamental rendering constructs through use of and interaction with the intelligent objects 700 (FIG. 7). The block 806 is described in more detail below with reference to FIG. 11.
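The three blocks may be pictured as a pipeline in which each stage consumes the output of the stage before it. The Python sketch below illustrates only this sequencing; the function names and trivial bodies are assumptions, not part of the disclosed method.

```python
# Illustrative pipeline only: each stage consumes the previous stage's
# output. The bodies are trivial placeholders, not real algorithms.
def model(entity, attributes):
    """Block 802: produce a 3D mathematical representation."""
    return {"entity": entity, **attributes}

def animate(model_3d, motion, num_frames=3):
    """Block 804: produce a series of frames from the model."""
    return [{"frame": i, **model_3d, **motion} for i in range(num_frames)]

def render(frame_series, surfacing):
    """Block 806: produce a 2D representation of each frame."""
    return [f"2D image of frame {f['frame']} rendered with {surfacing}"
            for f in frame_series]

frames = animate(model("building", {"floors": 5}), {"aging_years": 20})
for image in render(frames, {"facade": "art deco"}):
    print(image)
```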
FIG. 9 is a flow chart depicting steps associated with the modeling block 802 of FIG. 8. Block 902 depicts the step of receiving the user entity selection 704 (FIG. 7) from a predetermined set of graphical entities presented by the GUI 702 described herein. For example, users may choose a building as their selected entity. Block 904 depicts the step of identifying an intelligent object 700 (FIG. 7) associated with the entity selected by the user in block 902. The intelligent abstract engine 400 (FIG. 4), 500 (FIG. 5), or 600 (FIG. 6) matches the user-selected entity with an appropriate intelligent object 226 (FIG. 2) from the object library 224 (FIG. 2). For example, the intelligent abstract engine 400 (FIG. 4), 500 (FIG. 5), or 600 (FIG. 6) would identify an intelligent object 226 (FIG. 2) representing a building.
As described above, the intelligent objects 700 (FIG. 7) comprise property sets 706 (FIG. 7), data subsets 716 (FIG. 7) and intelligent object processing code 718 (FIG. 7). Block 906 depicts the step of presenting the user with a list of modifiable attributes and associated allowable ranges 712 (FIG. 7) from the intelligent object 700 (FIG. 7) identified in block 904. For example, the user may be presented with attributes such as number of floors, year of production, architectural style, or type of neighborhood. Block 908 depicts the step of receiving a user-selected attribute and modification parameter 714 (FIG. 7). For example, the user may select the attribute representing the number of floors in a building and a modification parameter representing a request for five floors.
Block 910 depicts the step of identifying the data subsets 716 (FIG. 7) according to the user input received in block 908. In the context of the building example discussed above, the intelligent object 700 (FIG. 7) may identify, for example, data subsets that contain the underlying fundamental representation constructs associated with a ground floor, four intermediate floors, and a roof. Block 912 depicts the step of identifying the necessary utilizations of the data subsets 716 (FIG. 7) that were identified in block 910, based on the user-selected attributes received in block 908.
The intelligent object processing code 718 (FIG. 7) is configured to identify utilizations of the data subsets 716 (FIG. 7) according to reality-based, or real world, constraints upon the entity to produce a reality-based representation thereof. In block 914, the intelligent object processing code 718 (FIG. 7) of the intelligent object 700 (FIG. 7) provides instructions to the conventional geometric representation and manipulation module 404 (FIG. 4) as to the necessary utilizations, identified in block 912, of the underlying fundamental representation constructs of the data subsets 716 (FIG. 7). The conventional geometric representation and manipulation module 404 (FIG. 4) executes the instructions 720 (FIG. 7) provided by the intelligent object 700 (FIG. 7), thereby completing an iteration of the 3D graphical modeling method, which is a component of the complete entity representation creation method of the invention.
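A rough sketch of such a modeling iteration follows, assuming data subsets like those sketched earlier; the function name and the instruction tuples are an invented format, offered only to illustrate how a high-level "five floors" request might be expanded into low-level utilizations.

```python
# Sketch of how processing code might turn a floor-count request into
# utilizations of the floor data subsets; the format is hypothetical.
def building_model_instructions(num_floors):
    if not 1 <= num_floors <= 100:          # enforce the allowable range
        raise ValueError("number of floors outside the allowable range")
    instructions = [("place", "ground_floor", 0)]
    for level in range(1, num_floors):      # one subset per upper floor
        instructions.append(("place", "intermediate_floor", level))
    instructions.append(("place", "roof", num_floors))
    return instructions

# A request for five floors yields a ground floor, four intermediate
# floors, and a roof, stacked in reality-based order.
for step in building_model_instructions(5):
    print(step)
```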
In practice, the user may repeat steps 902-914 presented above until satisfied before continuing to the animating method. Alternatively, the user may continue directly to the animating and/or rendering methods described below.

FIG. 10 is a flow chart depicting details of the animating step 804 of FIG. 8. The animating process of the present invention may sequentially follow the modeling process depicted in FIG. 9, and comprises some steps analogous to those depicted in FIG. 9, but in relation to animating as opposed to modeling. Block 1002 depicts the step of receiving a graphical model, i.e. a 3D mathematical representation of an entity. The animating module 308 (FIG. 3) receives the 3D mathematical representation 312 (FIG. 3) from the model creation module 306 (FIG. 3). The 3D mathematical representation 312 may be stored in various sources, including the local memory 210 (FIG. 2) or remote memory if the user computer 200 (FIG. 2) is networked. Block 1004 depicts the step of identifying an intelligent object 700 (FIG. 7) associated with the 3D mathematical representation 312 received in block 1002. Note that the appropriate intelligent object 700 may have been previously identified during the modeling portion 904 (FIG. 9) of the method. If not previously identified, the intelligent abstract engine 500 (FIG. 5) matches the received 3D mathematical representation 312 of the entity with an appropriate intelligent object 226 (FIG. 2) from the object library 224 (FIG. 2).
Block 1006 depicts the step of presenting the user with a list of modifiable attributes and associated allowable ranges 712 (FIG. 7) from the intelligent object 700 (FIG. 7) identified in block 1004. Continuing with the building example, the user may be presented with attributes such as aging of the building, demolition of the building, or other attributes that may affect the animating of the entity. Block 1008 depicts the step of receiving the user-selected attribute and modification parameter 714 (FIG. 7). For example, the user may select the attribute representing the aging of the building and a modification parameter representing a request for a particular time period of aging, such as 20 years. Similarly, if the user-selected entity were a car, the user might select the rotation of the wheels as the attribute and enter a parameter associated with the rate of wheel rotation.
Block 1010 depicts the step of identifying data subsets 716 (FIG. 7) according to the user input received in block 1008. In the context of the building example discussed above, the intelligent object 700 (FIG. 7) may identify data subsets that contain the underlying fundamental representation constructs associated with the chosen period of aging, such as frame interpolation processes providing for the apparent aging of the building material, or a change of facade style associated with the chosen period of aging, and the like. Block 1012 depicts the step of identifying the necessary utilizations of the data subsets 716 (FIG. 7) that were identified in block 1010, based on the user-selected attributes received in block 1008. The intelligent object processing code 718 (FIG. 7) is configured to identify utilizations of the data subsets 716 (FIG. 7) according to reality-based, or real world, constraints upon the entity to produce a reality-based representation. In block 1014, the intelligent object processing code 718 (FIG. 7) of the intelligent object 700 (FIG. 7) provides instructions to the conventional frame representation and manipulation module 504 (FIG. 5) as to the necessary utilizations, identified in block 1012, of the underlying fundamental representation constructs of the data subsets 716 (FIG. 7). The conventional frame representation and manipulation module 504 (FIG. 5) executes the instructions 720 (FIG. 7) provided by the intelligent object 700 (FIG. 7), thereby completing an iteration of the 3D graphical animating method, which is a component of the complete entity representation creation method of the invention.
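As a hedged illustration of such a frame interpolation process, the sketch below spreads a requested aging period across a frame series; the linear ramp and the instruction name are assumptions made only for this example.

```python
# Sketch of a frame interpolation that an "aging" attribute might
# trigger; the linear ramp and instruction name are assumptions.
def aging_instructions(start_age, aging_years, num_frames):
    """Yield a per-frame apparent-age instruction, linearly interpolated."""
    for i in range(num_frames):
        t = i / (num_frames - 1) if num_frames > 1 else 1.0
        yield ("set_apparent_age", round(start_age + t * aging_years, 1))

# A 20-year aging request spread across five frames.
for instruction in aging_instructions(start_age=0, aging_years=20,
                                      num_frames=5):
    print(instruction)
```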
In practice, the user may iterate the method illustrated in FIG. 10, repeating steps 1002-1014 presented above until satisfied, before continuing to the rendering method of block 806 (FIG. 8). Alternatively, the user may continue to the rendering method described below or return to the modeling method.
FIG. 11 is a flow chart depicting details of the rendering step of block 806 (FIG. 8). The method of FIG. 11 may sequentially follow the methods depicted in FIGS. 9 and 10, and comprises some steps analogous to those depicted in FIG. 10, but in relation to rendering as opposed to animating. Block 1102 depicts the step of receiving an animated model, i.e. the frame series 314 of the selected entity. The frame series 314 (FIG. 3) is received from the animating module 308 (FIG. 3). The frame series 314 may be stored in various sources, including local memory 210 (FIG. 2) or remote memory if the user computer 200 (FIG. 2) is networked. Block 1104 depicts the step of identifying an intelligent object 700 (FIG. 7) associated with the frame series received in block 1102. Note that the appropriate intelligent object 700 may have been previously identified during the modeling and/or animating portion of the method. If not previously identified, the intelligent abstract engine 600 (FIG. 6) matches the received frame series of the entity with an appropriate intelligent object 226 (FIG. 2) from the object library 224 (FIG. 2).
Block 1106 depicts the step of presenting the user with a list of modifiable attributes and associated allowable ranges 712 (FIG. 7) from the intelligent object 700 (FIG. 7) identified in block 1104. Continuing with the building example, the user may be presented with property set attributes such as year of production, architectural style, type of neighborhood, or other attributes that may affect the rendering of the entity. Block 1108 depicts the step of receiving a user-selected attribute and modification parameter 714 (FIG. 7). For example, the user may select the attribute representing the year of production of the building and a modification parameter representing a request for a particular year of production.
Block 1110 depicts the step of identifying rendering data subsets 716 (FIG. 7) according to the user input received in block 1108. In the context of the building example, the intelligent object 700 (FIG. 7) may identify data subsets that contain the underlying fundamental representation constructs associated with the chosen year, such as digitized bitmaps or procedural algorithms representing a type of building material, apparent age of building material, a type of facade style associated with the chosen year of production, and the like. Block 1112 depicts the step of identifying the necessary utilizations of the rendering data subsets 716 (FIG. 7) that were identified in block 1110, based on the user-selected attributes received in block 1108.
The intelligent object processing code 718 (FIG. 7) is configured to identify utilizations of the rendering data subsets 716 (FIG. 7) according to reality-based, or real world, constraints upon the entity to produce a reality-based representation. In block 1114, the intelligent object processing code 718 (FIG. 7) of the intelligent object 700 (FIG. 7) provides instructions to the conventional surface representation and manipulation module 604 (FIG. 6) as to the necessary utilizations, identified in block 1112, of the underlying fundamental representation constructs of the rendering data subsets 716 (FIG. 7). The conventional surface representation and manipulation module 604 (FIG. 6) executes the instructions 720 (FIG. 7) provided by the intelligent object 700 (FIG. 7), thereby completing an iteration of the 3D graphical rendering method, which is a component of the complete entity representation creation method of the invention.
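For illustration only, the sketch below maps a user-selected year of production to hypothetical rendering data subsets; the era boundaries, bitmap names, and instruction names are invented examples, not values drawn from this description.

```python
# Hypothetical mapping from a "year of production" parameter to
# rendering data subsets; the ranges and bitmap names are invented.
FACADE_BY_ERA = [
    (1850, 1919, {"material": "brick.bmp",    "style": "victorian"}),
    (1920, 1945, {"material": "stucco.bmp",   "style": "art deco"}),
    (1946, 2000, {"material": "concrete.bmp", "style": "modern"}),
]

def surface_instructions_for_year(year):
    for first, last, subset in FACADE_BY_ERA:
        if first <= year <= last:
            # Instruct the surface module to apply this era's constructs.
            return [("apply_bitmap", subset["material"]),
                    ("apply_style", subset["style"])]
    raise ValueError("year of production outside the allowable range")

print(surface_instructions_for_year(1931))  # selects the art deco subsets
```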
In practice, the user may iterate the method illustrated in FIG. 11; therefore, steps 1102-1114 presented above may be repeated until the user is satisfied before returning to the modeling or animating portions of the method.

The intelligent three-dimensional computer graphics system and method described herein may be used in any application utilizing computer graphics, including video production, computer-aided design, and manufacturing. Therefore, it is not intended to limit the invention to practice in a particular operating environment. Furthermore, it is appreciated that intelligent objects could contain large amounts of "intelligence," e.g., industry specifications and the like. Therefore, it is not intended to limit the intelligent objects to the attributes exemplified in this description.

Claims

What is claimed is:
1. A system for creating a representation of an entity, the system comprising:
an object associated with the entity and configured to generate entity representation instructions for creating the representation of the entity according to a user-selected modification to predefined entity attributes; and
a graphics creation engine for creating the representation of the entity according to the entity representation instructions generated by the object.
2. A method for creating a representation of an entity, the method comprising:
generating entity representation instructions for creating the representation of the entity according to a user-selected modification to predefined entity attributes using an object associated with the entity; and
creating the representation of the entity according to the generated entity representation instructions.
3. A system for creating a representation of an entity, the system comprising:
an object associated with the entity, comprising:
a set of entity attributes;
a set of allowable modifications to the entity attributes;
a data subset associated with each of the entity attributes, each data subset comprising fundamental representation constructs for the associated entity attribute; and
object processing code for generating instructions to utilize the data subsets according to a user-selected modification to the entity attributes to create the representation of the entity.
4. The system of claim 3 wherein the allowable modifications are limited to produce representations according to real-life constraints upon the entity.
5. The system of claim 3 wherein the user-selected modification comprises a predefined system default.
6. The system of claim 3 wherein the fundamental representation constructs are configured to represent a three-dimensional representation of the entity attribute.
7. The system of claim 3 wherein the fundamental representation constructs are configured as a series of representations of the entity attribute, the series configured to simulate the entity in motion.
8. The system of claim 7 wherein the fundamental representation constructs are configured to represent two-dimensional pictorial representations of the series of representations of the entity attribute.
9. The system of claim 3 wherein the fundamental representation constructs are configured to represent a two-dimensional pictorial representation of the entity attribute.
10. The system of claim 3 wherein the object is stored on a device in a network.
11. A method for creating a representation of an entity, the method comprising:
permitting a user to select one of a predetermined set of entities;
identifying an object associated with the selected entity;
permitting the user to select one of a predetermined set of modifiable entity attributes;
permitting the user to select one of a predetermined set of modifications to the selected modifiable entity attribute;
identifying a data subset comprising fundamental representation constructs associated with the user-selected entity attribute; and
generating instructions for creating the representation of the user-selected entity attribute according to the user-selected modification using the identified data subset.
12. The method of claim 11 wherein the predetermined set of modifications are limited to produce representations according to real-life constraints upon the entity.
13. The method of claim 11 wherein the user-selected modification comprises a predefined system default.
14. The method of claim 11 wherein the fundamental representation constructs are configured to represent a three-dimensional representation of the entity attribute.
15. The method of claim 11 wherein the fundamental representation constructs are configured as a series of representations of the entity attribute, the series configured to simulate the entity in motion.
16. The method of claim 15 wherein the fundamental representation constructs are configured to represent two-dimensional pictorial representations of the series of representations of the entity attribute.
17. The method of claim 11 wherein the fundamental representation constructs are configured to represent a two-dimensional pictorial representation of the entity attribute.
18. The method of claim 11 wherein: the object is stored on a device remote from the user, and the user is connected to the device through a network.
19. A computer-readable medium embodying instructions for a device to perform a creation of a representation of an entity, the creation comprising:
permitting a user to select one of a predetermined set of entities;
identifying an object associated with the selected entity;
permitting the user to select one of a predetermined set of modifiable entity attributes;
permitting the user to select one of a predetermined set of modifications to the selected modifiable entity attribute;
identifying a data subset comprising fundamental representation constructs associated with the user-selected attribute; and
generating instructions for creating the representation of the user-selected entity attribute according to the user-selected modification using the identified data subset.
20. The computer-readable medium of claim 19 wherein the predetermined set of modifications are limited to produce representations according to real-life constraints upon the entity.
21. The computer-readable medium of claim 19 wherein the user-selected modification comprises a predefined system default.
22. The computer-readable medium of claim 19 wherein the fundamental representation constructs are configured to represent a three-dimensional representation of the entity attribute.
23. The computer-readable medium of claim 19 wherein the fundamental representation constructs are configured as a series of representations of the entity attribute, the series configured to simulate the entity in motion.
24. The computer-readable medium of claim 23 wherein the fundamental representation constructs are configured to represent two-dimensional pictorial representations of the series of representations of the entity attribute.
25. The computer-readable medium of claim 19 wherein the fundamental representation constructs are configured to represent a two-dimensional pictorial representation of the entity attribute.
26. The computer-readable medium of claim 19 further comprising: permitting the user to select the one of a predetermined set of entities, the one of a predetermined set of modifiable entity attributes, or the one of a predetermined set of modifications to the selected modifiable entity attribute stored on a device remote from the user.
27. A system for creating a representation of an entity, the system comprising:
means for permitting a user to select one of a predetermined set of entities;
means for identifying an object associated with the selected entity;
means for permitting the user to select one of a predetermined set of modifiable entity attributes;
means for permitting the user to select one of a predetermined set of modifications to the selected modifiable entity attribute;
means for identifying a data subset comprising fundamental representation constructs associated with the user-selected attribute; and
means for generating instructions for creating the representation of the user-selected entity attribute according to the user-selected modification using the identified data subset.
28. The system of claim 27 wherein the predetermined set of modifications are limited to produce representations according to real-life constraints upon the entity.
29. The system of claim 27 wherein the user-selected modification comprises a predefined system default.
30. The system of claim 27 wherein the fundamental representation constructs are configured to represent a three-dimensional representation of the entity attribute.
31. The system of claim 27 wherein the fundamental representation constructs are configured to represent a series of representations of the entity attribute, the series configured to simulate the entity in motion.
32. The system of claim 31 wherein the fundamental representation constructs are configured to represent a two-dimensional pictorial representation of the series of representations of the entity attribute.
33. The system of claim 27 wherein the fundamental representation constructs are configured to represent a two-dimensional pictorial representation of the entity attribute.
34. The system of claim 27 further comprising: means for permitting the user to select the one of a predetermined set of entities, the one of a predetermined set of modifiable entity attributes, or the one of a predetermined set of modifications to the selected modifiable entity attribute stored on a device remote from the user.