WO2010068210A1 - Manipulating unloaded objects - Google Patents
Manipulating unloaded objects
- Publication number
- WO2010068210A1 (PCT Application No. PCT/US2008/086438)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- lightweight
- user interface
- interface
- attributes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2213/00—Indexing scheme for animation
- G06T2213/08—Animation software package
Definitions
- FIG. 5 is a block diagram of a computer system that may be used to practice various embodiments.
- FIG. 5 is merely illustrative of an embodiment and does not limit the scope of the invention as recited in the claims.
- One of ordinary skill in the art would recognize other variations, modifications, and alternatives.
- computer system 500 typically includes a monitor 510, computer 520, a keyboard 530, a user input device 540, computer interfaces 550, and the like.
- user input device 540 is typically embodied as a computer mouse, a trackball, a track pad, a joystick, wireless remote, drawing tablet, voice command system, eye tracking system, and the like.
- User input device 540 typically allows a user to select objects, icons, text and the like that appear on the monitor 510 via a command such as a click of a button or the like.
- Embodiments of computer interfaces 550 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), an (asynchronous) digital subscriber line (DSL) unit, a FireWire interface, a USB interface, and the like.
- computer interfaces 550 may be coupled to a computer network, to a FireWire bus, or the like.
- computer interfaces 550 may be physically integrated on the motherboard of computer 520, and may be a software program, such as soft DSL, or the like.
- computer 520 typically includes familiar computer components such as a processor 560, and memory storage devices, such as a random access memory (RAM) 570, disk drives 580, a GPU 585, and system bus 590 interconnecting the above components.
- computer 520 includes one or more Xeon microprocessors from Intel. Further, in one embodiment, computer 520 includes a UNIX-based operating system.
- RAM 570 and disk drive 580 are examples of tangible media configured to store data such as image files, models including geometrical descriptions of objects, ordered geometric descriptions of objects, procedural descriptions of models, scene descriptor files, shader code, a rendering engine, embodiments of the present invention, including executable computer code, human readable code, or the like.
- Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, and the like.
- computer system 500 may also include software that enables communications over a network such as the HTTP, TCP/IP, RTP/RTSP protocols, and the like.
- other communications software and transfer protocols may also be used, for example IPX, UDP or the like.
- GPU 585 may be any conventional graphics processing unit that may be user programmable. Such GPUs are available from NVIDIA, ATI, and other vendors.
- GPU 585 includes a graphics processor 593, a number of memories and/or registers 595, and a number of frame buffers 597.
- FIG. 5 is representative of a computer system capable of embodying the present invention.
- the computer may be a desktop, portable, rack-mounted or tablet configuration.
- the computer may be a series of networked computers.
- other microprocessors are contemplated, such as Pentium™ or Itanium™ microprocessors; Opteron™ or AthlonXP™ microprocessors from Advanced Micro Devices, Inc.; and the like.
- other types of operating systems are contemplated, such as Windows®, WindowsXP®, WindowsNT®, or the like from Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX, and the like.
- the techniques described above may be implemented upon a chip or an auxiliary processing board.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
An animation tool application provides a graphic user interface in which users can manipulate an object or a character without loading an entire model of the object. Each model is preprocessed to extract information that is necessary to provide a user interface for manipulating the model. Based on the extracted information, a lightweight interface description is generated and separately stored from the model of the object. The graphic user interface is built based on a lightweight interface description of a particular model. Each lightweight interface description includes only a small portion of information about a model, thereby saving loading time and memory space when a graphic user interface is presented to a user.
Description
MANIPULATING UNLOADED OBJECTS
BACKGROUND
[0001] The present invention relates generally to computer animation, and more particularly to manipulating objects without loading objects in memory.
[0002] With the widespread availability of high-speed computers, animators rely on computers to assist in their animation process. Typically, objects and characters in images are represented by computer models that can be manipulated through computer-aided animation software. One of the pioneering companies in the computer-aided animation (CAA) industry is Pixar. Pixar developed both computing platforms specially designed for CAA and animation software now known as RenderMan®. RenderMan was particularly well received in the animation industry and recognized with two Academy Awards. RenderMan software is used to convert graphical specifications (models) of objects into one or more images, a technique known in the industry as rendering. [0003] Before objects are rendered, users or animators may manipulate inputs of the computational models of some objects using an animation tool application. It is not easy to provide a sophisticated Graphic User Interface (GUI) through which the user can manipulate objects. Each model generally includes several descriptors (portions of code) that describe the process of forming the shape of an object. For example, one portion of code may describe how a particular model contributes to the final image. Another portion may describe how the model is presented to the user in an interactive session. Yet another portion (e.g., animation variables) may describe the attributes and input parameters that can be used for editing an object. As such, many of the descriptors in a model need not be loaded in memory to provide GUIs for manipulating the model. [0004] Further, computational models of some objects are often extremely complex, having upwards of millions of surfaces and tens of thousands of attributes. The rendering process for such models can be very time consuming. Typically, the models of objects are coded in interpreted languages, such as MDL (MIT Design Language), for faster and more flexible rendering. However, providing convenient user interfaces for manipulating models is often ignored when the models of objects are coded. Thus, identifying and extracting information (for example, a particular portion describing the attributes and input parameters) from models coded in interpreted languages can be very hard and time consuming due to the way models are constructed in such languages. [0005] Conventionally, in order to provide relatively sophisticated UI elements for manipulating a model, the entire model is interpreted ahead of time, and user interface data structures that represent the entire model are built and used for providing a GUI. However, such user interface data structures can be very large and thus consume a lot of memory when loaded for manipulation. In addition, loading and unloading such user interface data structures can take a long time.
[0006] Therefore it is desirable to provide methods and systems that overcome the above and other problems.
BRIEF SUMMARY
[0007] The present invention provides systems and methods for providing a graphic user interface for manipulating an object without having to load the object in memory.
[0008] In certain embodiments, each model is pre-processed to extract information that is necessary to provide a user interface for manipulating the model. In one embodiment, such extracted information may include attributes (e.g., animation variables, attributes, input parameters, etc.) of the model and semantic information that describes how to display a graphic user interface for editing such attributes. In some embodiments, based on the extracted information, a lightweight interface description is generated for each model. The lightweight interface description is then stored in a database and used by an animation tool application for enabling users to define an image of a particular object. In one aspect, the lightweight interface description includes only a small portion of information about a model, thereby saving loading time and memory space when the graphic user interface is provided.
[0009] According to one aspect of the present invention, a method provides a user interface for manipulating an object model where the object model can be manipulated by updating animation variables of the object model. The method typically includes obtaining a lightweight interface
description of the object model and formulating user interface elements based on the obtained lightweight interface description. A graphic user interface including the formulated user interface elements is presented to a user. The graphic user interface enables the user to specify values for the attributes of the object model. User input specifying values for one or more animation variables of the object model is received through the graphic user interface. In certain aspects, the received user input is stored in a data file that is used to render the object model. In certain aspects, the attributes include animation variables or other attributes or variables that may change over time. In certain aspects, the method includes receiving, prior to obtaining the lightweight description, a user request to manipulate the object model without loading the object model in memory.
[0010] According to another aspect of the present invention, an animation tool application provides a user interface for manipulating an object without loading the object in memory. The animation tool application typically includes a processor and a memory storing instructions that, when executed by the processor, cause the processor to interpret a model describing an image of each object in a database, to identify attributes of the interpreted model and to generate a lightweight interface description based on the identified attributes. Subsequently, the lightweight interface description is stored in the database. The lightweight interface description for each model can be used for generating a graphic user interface for manipulating the model.
[0011] According to yet another aspect of the present invention, a computer program product embedded in a computer readable medium provides to a user a graphic user interface for manipulating an object without loading the object. The computer readable medium includes program code which when executed, provides a user with the ability to select an object to manipulate and displays user interface elements in a graphic user interface. The user interface elements are generated based on a lightweight interface description of the selected object that includes information about attributes of the selected object. The information about attributes is previously extracted from a model specifying the selected object and stored in the lightweight interface description.
[0012] Reference to the remaining portions of the specification, including the drawings and claims, will realize other features and advantages of the present invention. Further features and advantages of the present invention, as well as the structure and operation of various
embodiments of the present invention, are described in detail below with respect to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] In order to more fully understand the present invention, reference is made to the accompanying drawings. Understanding that these drawings are not to be considered limitations in the scope of the invention, the presently described embodiments and the presently understood best mode of the invention are described with additional detail through use of the accompanying drawings.
[0014] FIG. 1 illustrates a block diagram that depicts an exemplary system environment that can be used to practice various embodiments;
[0015] FIG. 2 illustrates a flow diagram that depicts a routine for preprocessing models to extract information to build lightweight interface descriptions in accordance with embodiments; [0016] FIG. 3 illustrates a flow diagram that depicts a routine that presents a user with a GUI for manipulating an object model using a lightweight interface description in accordance with some embodiments; and
[0017] FIG. 4 illustrates an exemplary graphic user interface displayed on a user device in accordance with an embodiment. [0018] FIG. 5 is a block diagram of a computer system that may be used to practice various embodiments.
DETAILED DESCRIPTION
[0019] The embodiments discussed herein are illustrative of one or more examples. As these embodiments are described with reference to illustrations, various modifications or adaptations of the methods and/or specific structures described may become apparent to those skilled in the art. All such modifications, adaptations, or variations that rely upon the teachings disclosed herein, and through which these teachings have advanced the art, are considered to be within the
scope of the present invention. Hence, the present descriptions and drawings should not be considered in a limiting sense, as it is understood that the present invention is in no way limited to only the embodiments illustrated.
[0020] In certain embodiments, a graphic user interface is provided in which users can manipulate an object or a character without having to load a model of the object. More particularly, the graphic user interface is built based on interface information previously extracted from an object model and stored separately. A model is a computer specification describing a final image of an object or character. A model can generally describe anything in an image scene, such as an object or character, or light or shading features, for example. Some object models or character models are very complex, having upwards of millions of surfaces and tens of thousands of attributes, and thus cannot be easily loaded in memory at once. Thus, embodiments advantageously use only a small portion of information extracted from a model to provide a fast and efficient user interface. In some embodiments, each model is pre-processed to extract interface information that is necessary to provide a user interface for manipulating the model. In one embodiment, such extracted information includes attributes (e.g., animation variables, attributes, input parameters, etc.) of the model and semantic information that describes how to display a graphic user interface for editing such attributes. Generally, the attributes (e.g., animation variables) are parameters used by functions to modify an object model in an image or a scene. The attributes and their associated functions can be used to abstract complicated modifications to an object or character model and to provide a relatively simple control. In general, an attribute can include any attribute or variable that may change over time. For example, animation variables can specify the color or a ramp of a "light" object, or the rotation angles of the joints of a character model, thereby positioning the character model's limbs and appendages. [0021] In some embodiments, based on the extracted information, a data structure file
(hereinafter, lightweight interface description) is generated for each model. The lightweight interface description is stored in a database for use by an application such as an animation tool application that enables users to define and manipulate an image of a particular object. Additionally, another data structure file (hereinafter, full interface description) may be generated that includes more information about the model than the lightweight interface description. A full
interface description may include user interface data structures representing the entire model or a subset thereof. Thus, the full interface description may provide a more sophisticated user interface, but may be very large and thus take up a lot of memory when loaded for manipulation. In some embodiments, a user may be given a choice to use the lightweight interface description of a model or to use the full interface description of a model. In some embodiments, the system and method may suggest using the lightweight interface description for animation whenever it is suitable, for example, when there are many identical or similar objects to manipulate. The lightweight interface description includes a small portion of information about a model, thereby saving loading time and memory space compared to use of the full interface description. [0022] Referring now to Fig. 1, a block diagram depicts an exemplary system environment 100 that can be used to practice embodiments of the present invention. The exemplary system environment 100 may provide users with sophisticated graphic user interfaces for manipulating or editing an object without loading a model of the object in memory. The system environment 100 includes an animation tool application 120 that allows a user to manipulate objects or characters in a scene or an image. The animation tool application 120 is configured to present, on a user device 110, a graphic user interface 122 for use by a user 125 in manipulating an object model by specifying animation variable values of the model. The graphic user interface 122 can be presented in any suitable layout supported by a particular animation tool and will be discussed in further detail below in connection with Fig. 4. The animation tool application 120 is communicatively connected to one or more databases storing, but not limited to, lightweight interface descriptions, full interface descriptions, object models, updated object models, etc. In one embodiment, each lightweight interface description corresponds to an object model that can be manipulated through the animation tool application 120.
[0023] In some embodiments, the system environment 100 also includes a preprocessing compiler 130 that is configured to interpret each model and extract information about animation variables of the model. The preprocessing compiler 130 may be a part of the animation tool application or a stand-alone application. The preprocessing compiler processes models of objects ahead of time to obtain and extract information about animation variables of the models. In an aspect, the extracted information is only a portion of the model specification (a portion of the descriptors or code describing an image of an object) but sufficient to provide a
reasonable user interface for editing an object. For example, the extracted information may not include a geometry specification of the object which is used for rendering an image. Thus, the extracted information is relatively small and easy to load in memory. The extracted information is stored as a data file (a lightweight interface description) in a hierarchical data structure, suitable for the animation tool application to understand the semantics of how to create a GUI and animation variables of the model objects. For example, the extracted information may be stored in an Extensible Markup Language (XML) format, or the like. In some cases, a new model may need to be manipulated but has not yet been preprocessed and thus no lightweight interface description is available. In such cases, the animation tool application 120 may invoke or request a preprocessing compiler 130 to interpret the new model, and the lightweight interface description is generated before providing a corresponding GUI for a user. In certain aspects, the GUI presented is text based. For example, user interface elements may include text-based prompts for entering information that would manipulate the model object.
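Purely as an illustration of such a hierarchical, XML-style lightweight interface description, the following Python sketch serializes extracted avar records into a small XML file. The element names, attribute names, and helper signature are assumptions made for this example and are not defined by the patent.

    import xml.etree.ElementTree as ET

    def write_lightweight_description(model_name, avars, path):
        """Serialize extracted avar records into a hypothetical lightweight
        interface description file (hierarchical XML)."""
        root = ET.Element("lightweightInterface", model=model_name)
        for avar in avars:
            # Every attribute value is stored as text; the GUI layer decides
            # how to interpret min/max/type when it formulates UI elements.
            ET.SubElement(root, "avar", {k: str(v) for k, v in avar.items()})
        ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

    # Example (hypothetical avars for a light model):
    write_lightweight_description(
        "KeyLight",
        [{"name": "intensity", "default": 1.0, "min": 0.0, "max": 10.0,
          "type": "AVT_light", "label": "Intensity"},
         {"name": "cutoff", "default": 50.0, "min": 0.0, "label": "Cutoff"}],
        "KeyLight.lightweight.xml")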
[0024] In some embodiments, the databases include interface descriptions database 108, object model database 106, updated model database 102 or the like. When a user wants to edit an object model using a lightweight interface description, the interface descriptions database 108 is queried to retrieve a lightweight interface description corresponding to the object model. The animation tool application 120 then generates an appropriate GUI based on the retrieved lightweight interface description, for enabling users to edit the object model. As such, in some embodiments, once a lightweight interface description is generated and ready for use, the object model may not need to be loaded and interpreted for generating a GUI within the animation tool application 120.
[0025] The interface descriptions database 108 may also store full interface descriptions to support a conventional GUI within the animation tool application 120. As discussed above, a full interface description may include user interface data structures representing an entire model. In some cases, using a lightweight interface description may not be desirable for manipulating certain objects. In such cases, the full interface description may be used to provide a more sophisticated or robust user interface. Alternatively, the model may be loaded in memory and interpreted by the animation tool application for generating a GUI. For ease of discussion, some exemplary object models and their animation variables are discussed herein.
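As a minimal sketch of the fallback logic just described (the database lookup helpers named here are hypothetical, not part of the patent), an application might choose among the three options as follows:

    def choose_interface(model_name, db, similar_object_count=1):
        """Prefer the lightweight description when it exists and is suitable
        (e.g., many identical or similar objects); otherwise fall back."""
        lightweight = db.find_lightweight(model_name)   # hypothetical lookup
        if lightweight is not None and similar_object_count > 1:
            return ("lightweight", lightweight)
        full = db.find_full(model_name)                 # hypothetical lookup
        if full is not None:
            return ("full", full)
        return ("load_model", None)  # last resort: load and interpret the model itself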
MDL Models
[0026] As will be appreciated by one of ordinary skill in the art, some models are created by executing code written in interpreted languages. An interpreted language is a programming language whose implementation often takes the form of an interpreter. One of the conventionally used interpreted languages is MIT Design Language (MDL or muddle). Some example models created by code written in MDL (hereinafter, MDL models) will be discussed below. It is noted that although MDL model examples are discussed herein, such examples are used only for exemplary purposes and thus are not considered as limiting the scope of the claims. An example MDL model is as follows:

    macro do_Test()
    begin
      object "Test" {
        surface("plastic");
        color(1, 1, 1);
        avar LayTx=0 {type=AVT_layout}, LayTy=0 {type=AVT_layout}, LayTz=0 {type=AVT_layout};
        avar Tx=0, Ty=0, Tz=0;
        avar Swidth=1 {min=.001}, Sthick=1 {min=.001}, Shigh=1 {min=.001};
        avar Rx=0, Ry=0, Rz=0, Rspin=0;
        avar squash=1 {min=.001};
        avar bendX=0, bendY=0, twist=0;
        translate(LayTx, LayTy, LayTz);
        translate(Tx, Ty, Tz);
        rotate(Rz, z);
        rotate(Ry, y);
        rotate(Rx, x);
        rotate(Rspin, z);
        scale(Swidth, Sthick, Shigh);
        sphere();
      }
    end
[0027] Generally, the input parameters to a model coded in MDL are called "avars" for animation variables. As shown in the above model code, an avar statement is used to create one or more of these inputs. Some of the avar statements contain attributes enclosed in curly braces.
Such attributes may be used by the GUI of the animation system to make it easier for the user to edit the inputs to the model. Thus, in one embodiment, when the MDL model is preprocessed, information about avar statements containing attributes or the like may be identified. It is to be noted that unlike the above model code, which includes only a handful of avar statements and attributes, most MDL models used in animation are very complex and include many attributes and functions that may be hard to identify.
[0028] The following is an example of a subroutine that declares avars for a light object's color:

    global macro Alight05a_Sharg_Color (name,
        _value: (0.10, 0.4, 0.75),
        _paramPrefix: "",
        _uiSubGroup: "",
        _doc: "",
        _label: "",
        _redSuffix: "_red",
        _greenSuffix: "_green",
        _blueSuffix: "_blue")
    begin
      avar sym(cat(name, _redSuffix)) = _value[0]
        {min = 0, max = 1, type=AVT_light, colorGroup=name, doc=_doc,
         channel="red", uiSubGroup=_uiSubGroup, label=_label};
      avar sym(cat(name, _greenSuffix)) = _value[1]
        {min = 0, max = 1, type=AVT_light, colorGroup=name, doc=_doc,
         channel="green", uiSubGroup=_uiSubGroup, label=_label};
      avar sym(cat(name, _blueSuffix)) = _value[2]
        {min = 0, max = 1, type=AVT_light, colorGroup=name, doc=_doc,
         channel="blue", uiSubGroup=_uiSubGroup, label=_label};
      local col = (sym(cat(name, _redSuffix)), sym(cat(name, _greenSuffix)),
                   sym(cat(name, _blueSuffix)));
      return list(cat(_paramPrefix, name), list("color", "uniform", col));
    end
[0029] In the above example, each avar statement includes attributes for min, max, colorGroup, doc, channel, uiSubGroup, label, etc. These attributes are used by the animation tool application to construct a relatively sophisticated UI element for editing the colors of a light. [0030] In certain embodiments, the models of objects are coded in interpreted languages for faster and more flexible rendering. In particular, MDL is used to specify models because of its flexibility, but one of the trade-offs is that it may be difficult to extract information from the models. Thus, to extract the avar statements and their attributes from a model, it was
conventionally necessary to interpret the entire model and build data structures in memory that represent the entire model. As discussed above, identifying and extracting information from MDL models may be very hard and time consuming due to the way models are constructed using MDL. In order to overcome such problems, in some embodiments, the preprocessing compiler is used to extract information about animation variables, such as the avar statements and their attributes, from models written in MDL. This preprocessing is performed ahead of time and allows some applications to build GUIs that represent object models without having to load the object models.
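The patent's preprocessing compiler reuses the animation system's MDL interpreter rather than parsing text directly. Purely as a simplified illustration of the kind of information being extracted, the Python sketch below scans MDL source for avar statements and their brace-enclosed attributes; the regular expressions are an assumption and would not handle every MDL construct (nested calls containing commas, for example).

    import re

    AVAR_STMT = re.compile(r"\bavar\b(.*?);", re.DOTALL)   # one avar statement
    DECL = re.compile(r"([A-Za-z_]\w*[^,{]*?)(?:\{([^}]*)\})?\s*(?:,|$)")

    def extract_avars(mdl_source):
        """Return a list of {'name': ..., 'attributes': {...}} records found in
        simple avar statements such as 'avar Swidth=1 {min=.001};'."""
        avars = []
        for stmt in AVAR_STMT.finditer(mdl_source):
            for decl in DECL.finditer(stmt.group(1)):
                name = decl.group(1).split("=")[0].strip()
                attrs = {}
                if decl.group(2):
                    for pair in decl.group(2).split(","):
                        key, _, value = pair.partition("=")
                        attrs[key.strip()] = value.strip()
                if name:
                    avars.append({"name": name, "attributes": attrs})
        return avars

    # extract_avars("avar Swidth=1 {min=.001}, Sthick=1 {min=.001};")
    # -> [{'name': 'Swidth', 'attributes': {'min': '.001'}},
    #     {'name': 'Sthick', 'attributes': {'min': '.001'}}]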
[0031] In some embodiments, the preprocessing compiler may utilize the existing animation tool application, which is generally configured to interpret the descriptions of a model and to create the internal data structures. Referring back to the above-mentioned MDL model example, the preprocessing compiler may use the portion of the animation system that is configured to interpret the MDL code to load the model. It then queries the descriptions or code in the model to obtain animation variables (e.g., all of its avars and the attributes of those avars in the MDL model example). In one embodiment, the preprocessing compiler then writes the obtained information to a lightweight description (e.g., a hierarchical structure file such as an XML file, or the like) and associates the lightweight interface description with the model. Once these lightweight descriptions have been created, they are stored in a database (e.g., the interface descriptions database). Applications are then able to access the interface descriptions database and use the lightweight interface descriptions (e.g., XML files) to present GUIs for models. As such, once the models are preprocessed and corresponding XML files are created, the applications advantageously no longer need to load the entire model and interpret its descriptors (e.g., the MDL code) to extract GUI information from the model.
[0032] Referring now to Fig. 2, a flow diagram depicts a routine 200 for preprocessing models to extract information to construct lightweight interface descriptions in accordance with various embodiments. Beginning with block 202, each model is preprocessed to identify and extract information about animation variables and functions to modify the animation variables. For example, for an MDL model, information about avars and attributes is extracted from the model. At block 204, the preprocessing compiler generates a lightweight interface description for the model based on the extracted information. Semantic information of how to depict a graphic user
interface is also interpreted and extracted from the model. As discussed above, the lightweight interface description may be in an XML format, including hierarchical structures that represent the extracted information. At block 206, the lightweight interface description is associated with the corresponding model so that the lightweight interface description can be located and accessed when the model is selected by the user for manipulation. At block 208, the created lightweight interface description is stored in the interface descriptions database. Optionally, at block 210, an entire model is interpreted and user interface data structures representing the entire model are created. Such user interface data structures are stored as a full interface description for the model in the interface descriptions database. At block 212, the created lightweight interface descriptions (and full interface descriptions) are available for the animation tool to formulate a GUI. The routine 200 completes at block 214.
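A compact Python sketch of the flow in routine 200 is shown below. The helper names (interpret_model, build_lightweight_xml, and the database object's methods) are hypothetical stand-ins for the facilities the patent describes, not actual APIs, and are supplied by the caller.

    def preprocess_models(model_paths, db, interpret_model, build_lightweight_xml,
                          build_full_descriptions=False):
        """Preprocess each model ahead of time (routine 200, blocks 202-212).
        interpret_model and build_lightweight_xml are caller-supplied stand-ins
        for the animation system's MDL interpreter and the XML writer."""
        for path in model_paths:
            model = interpret_model(path)               # block 202: interpret the MDL model
            avars = model.query_avars()                 # avars and their attributes
            semantics = model.query_ui_semantics()      # semantic hints for depicting the GUI
            xml_doc = build_lightweight_xml(avars, semantics)        # block 204
            db.store_lightweight(model.name, xml_doc)   # blocks 206-208: associate and store
            if build_full_descriptions:                 # optional block 210
                db.store_full(model.name, model.build_full_ui_structures())
        # block 212: the stored descriptions are now available to the animation tool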
[0033] Referring now to Fig. 3, a flow diagram depicts a routine 300 that presents a user with a GUI for manipulating an object model using a lightweight interface description in accordance with some embodiments. For the purpose of discussion, it is assumed that the animation tool application presents an initial graphic user interface where a user can select a particular object to manipulate. In various embodiments, the selection may be made by the user clicking directly on an object on the display, by the user pressing a "hot key" or one or more other keys on a keyboard, by the user selecting one or more icons or menu selections on the display, or in any other conventional manner. Beginning with block 302, the animation tool application receives a user request to use a lightweight description for manipulating an object model. The animation tool application identifies the object model that the user wants to edit based on the request. At block 304, the animation tool application queries a lightweight description database to retrieve the lightweight interface description of the identified object model. As discussed above, the lightweight interface description includes information about animation variables for the object model. At block 306, the animation tool application formulates graphic user interface elements based on the information stored in the retrieved lightweight interface description. At block 308, the animation tool application presents to the user a GUI including the formulated graphic user interface elements so that the user can change or edit the animation variables of the object model. The graphic user interface elements may be menu selections or visual indicators through which the user can enter desired values of the animation variables. At block 310, the animation tool
application receives user input relating to editing the animation variables through interaction with the user. The received user input is saved and used to modify the object model at block 312. The application stores the received user input and the modified object model in a database. In some embodiments, a user may want to load a full interface description for manipulating a particular object. The routine 300 completes at block 314.
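A condensed sketch of routine 300, under the same hypothetical naming used above, might look like the following; the `description_db` and `edit_store` objects stand in for whatever database and storage layer an embodiment actually uses.

```python
# Sketch of routine 300: retrieve the lightweight description, formulate GUI
# element specifications, and persist the user's edits. The description_db and
# edit_store accessors are assumed, not an API from the disclosure.
import xml.etree.ElementTree as ET


def build_gui_spec(model_name, description_db):
    """Formulate GUI element specs from a model's lightweight description (blocks 304-306)."""
    root = ET.fromstring(description_db.fetch(model_name))
    widgets = []
    for avar in root.iter("avar"):
        widgets.append({
            "label": avar.get("name"),
            "kind": avar.get("widget", "text_field"),   # semantic hint from the description
            "value": avar.get("default"),
        })
    return widgets                                      # handed to the UI toolkit (block 308)


def apply_user_edits(model_name, edits, edit_store):
    """Save user-entered avar values without loading the full model (blocks 310-312)."""
    edit_store.save(model_name, dict(edits))            # e.g., {"intensity": 2.5}
```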
[0034] Referring now to Fig. 4, an exemplary graphic user interface is displayed on a user device in accordance with an embodiment. For the purpose of discussion, it is assumed that the animation tool application presents an initial graphic user interface that includes several menu selections from which a user can start manipulating models. It is also assumed that the user wants to manipulate light objects, and that all "light" objects (i.e., light source objects) have been preprocessed so that a lightweight interface description for each light object is stored in the interface descriptions database.
[0035] As will be appreciated by one of ordinary skill in the art, a particular type of "light" source object can have various animation variables (e.g., avars, attributes, or the like) that determine the illumination pattern for that type of source. For example, a model for a light object may include animation variables controlling intensity, cutoff value, color, ramp, and the like. Because lights of different intensity can have different effective distances, a cutoff value can be set or determined for each light type (or for individual light sources where preferred). A cutoff value defines the maximum distance at which the light source can still illuminate, or cast shadows on, an object. Such information can be described in a model of a "light" source object and can be manipulated by updating the values of a set of the animation variables.
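As a simplified, hypothetical reading of the cutoff semantics described above (not the disclosed implementation), a preview or rendering step might test the cutoff distance as follows:

```python
# Simplified illustration of a cutoff test: a light contributes no illumination
# to points beyond its cutoff distance.
import math


def light_reaches(light_position, point, cutoff):
    """Return True if the point lies within the light's cutoff distance."""
    return math.dist(light_position, point) <= cutoff


# A light with a 25-unit cutoff does not illuminate a point 40 units away.
print(light_reaches((0.0, 0.0, 0.0), (0.0, 0.0, 40.0), cutoff=25.0))  # False
```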
[0036] As shown, a menu sub-window 410 lists all light objects that the user can select to manipulate. As discussed above, it is assumed that each object model has been preprocessed and that a lightweight interface description for the object model has been created and stored in a database. In some embodiments, in order to provide more flexible user interfaces, two menu choices are presented, such as a "Use Lightweight Description" button 416 and a "Use Full Description" button 418. By clicking the "Use Lightweight Description" button 416, the user can indicate that a GUI generated based on a lightweight interface description of the selected object should be used. Upon receipt of the user's selection, the application obtains a lightweight interface description of the selected object and formulates UI elements, such as an "Edit Model's Attributes" menu 420, through which the user can specify a set of animation variable values defining the selected object model. As shown, the user can specify values for animation variables such as the color, intensity, and ramps of the light, as well as other attributes of the light model. While specifying these values, the user can view the manipulated object in a model image sub-window 412, and the user can further change or update the set of animation variable values for the object model.
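The choice between the two buttons can be thought of, in simplified and hypothetical form, as selecting which stored description backs the editor:

```python
# Sketch of the two-path choice offered by buttons 416 and 418; the database
# accessors are illustrative placeholders.
def interface_for(model_name, use_lightweight, description_db):
    """Return the interface data backing the editor for the selected model."""
    if use_lightweight:                                   # "Use Lightweight Description" (416)
        return description_db.fetch_lightweight(model_name)
    return description_db.fetch_full(model_name)          # "Use Full Description" (418)
```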
[0037] Once the user has finished manipulating the object model, the animation variable values entered by the user are stored in a database. In one aspect, the user may use the animation tool to render the modified object in order to view it and to manipulate it further.
Although a light object and its user interface window are described in connection with Fig. 4, they are used only as an example. Thus, the depiction of the user interface 400 in Fig. 4 should be taken as being illustrative in nature, and not limiting to the scope of the disclosure. Any object can be manipulated through a GUI generated based on a lightweight interface description, thereby saving loading time and memory space. It is noted that when there are many identical or similar objects to manipulate, using a lightweight interface description is especially beneficial, as loading time and memory space are conserved by not loading and interpreting the same model numerous times.
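This reuse benefit can be illustrated with a simple cache, again using assumed accessor names: one parsed lightweight description serves every instance of the same model type.

```python
# Illustration of reuse across many similar objects: each model type's
# lightweight description is parsed at most once. The description_db accessor
# is an assumed placeholder.
import xml.etree.ElementTree as ET

_description_cache = {}


def description_for(model_type, description_db):
    """Return the parsed lightweight description, parsing it only on first use."""
    if model_type not in _description_cache:
        _description_cache[model_type] = ET.fromstring(description_db.fetch(model_type))
    return _description_cache[model_type]

# A scene with hundreds of identical street lamps shares one cached description
# for its GUIs instead of loading and interpreting the lamp model repeatedly.
```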
[0038] FIG. 5 is a block diagram of a computer system that may be used to practice various embodiments. FIG. 5 is merely illustrative of an embodiment and does not limit the scope of the invention as recited in the claims. One of ordinary skill in the art would recognize other variations, modifications, and alternatives.
[0039] In one embodiment, computer system 500 typically includes a monitor 510, a computer 520, a keyboard 530, a user input device 540, computer interfaces 550, and the like.
[0040] In various embodiments, user input device 540 is typically embodied as a computer mouse, a trackball, a track pad, a joystick, a wireless remote, a drawing tablet, a voice command system, an eye tracking system, or the like. User input device 540 typically allows a user to select objects, icons, text, and the like that appear on the monitor 510 via a command such as a click of a button or the like.
[0041] Embodiments of computer interfaces 550 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), an (asynchronous) digital subscriber line (DSL) unit, a FireWire interface, a USB interface, and the like. For example, computer interfaces 550 may be coupled to a computer network, to a FireWire bus, or the like. In other embodiments, computer interfaces 550 may be physically integrated on the motherboard of computer 520, and may be a software program, such as soft DSL, or the like.
[0042] In various embodiments, computer 520 typically includes familiar computer components such as a processor 560; memory storage devices, such as a random access memory (RAM) 570 and disk drives 580; a GPU 585; and a system bus 590 interconnecting the above components.
[0043] In some embodiments, computer 520 includes one or more Xeon microprocessors from Intel. Further, in one embodiment, computer 520 includes a UNIX-based operating system.
[0044] RAM 570 and disk drive 580 are examples of tangible media configured to store data such as image files, models including geometrical descriptions of objects, ordered geometric descriptions of objects, procedural descriptions of models, scene descriptor files, shader code, a rendering engine, embodiments of the present invention, including executable computer code, human readable code, or the like. Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMs, DVDs, and bar codes, semiconductor memories such as flash memories, read-only memories (ROMs), battery-backed volatile memories, networked storage devices, and the like.
[0045] In various embodiments, computer system 500 may also include software that enables communications over a network using protocols such as HTTP, TCP/IP, RTP/RTSP, and the like. In alternative embodiments of the present invention, other communications software and transfer protocols may also be used, for example IPX, UDP, or the like.
[0046] In some embodiments of the present invention, GPU 585 may be any conventional graphics processing unit that may be user programmable. Such GPUs are available from NVIDIA, ATI, and other vendors. In this example, GPU 585 includes a graphics processor 593, a number of memories and/or registers 595, and a number of frame buffers 597.
[0047] FIG. 5 is representative of a computer system capable of embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, the computer may be a desktop, portable, rack-mounted, or tablet configuration. Additionally, the computer may be a series of networked computers. Further, the use of other microprocessors is contemplated, such as Pentium™ or Itanium™ microprocessors; Opteron™ or AthlonXP™ microprocessors from Advanced Micro Devices, Inc.; and the like. Further, other types of operating systems are contemplated, such as Windows®, WindowsXP®, WindowsNT®, or the like from Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX, and the like. In still other embodiments, the techniques described above may be implemented upon a chip or an auxiliary processing board.
[0048] While the invention has been described by way of example and in terms of the specific embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims
WHAT IS CLAIMED IS:

1. A method of providing a user interface for manipulating an object model without loading the object model in memory, wherein the object model can be manipulated by updating attributes of the object model, the method comprising:

obtaining a lightweight interface description of the object model;

formulating user interface elements based on the lightweight interface description;

presenting a graphic user interface including the formulated user interface elements, wherein the graphic user interface enables the user to specify values for one or more attributes of the object model; and

receiving user input specifying values for one or more attributes of the object model through the graphic user interface.
2. The method of claim 1, wherein the lightweight interface description represents the attributes of the object model in hierarchical data structures.
3. The method of claim 2, wherein the lightweight interface description includes semantic information about how to create the graphic user interface.
4. The method of claim 2, wherein the lightweight interface description includes information about input parameters and functions to control the input parameters of the object model.
5. The method of claim 1, wherein the lightweight interface description is an XML file.
6. The method of claim 1, wherein prior to presenting the graphic user interface, the object model is preprocessed and interpreted to create the lightweight interface description.
7. The method of claim 1, wherein the user has a choice to use or not to use the lightweight interface description to manipulate a particular object model.
8. The method of claim 1, further including storing the received user input in a data file for use in rendering the object model.
9. The method of claim 1, further including receiving a user request to manipulate the object model prior to obtaining the lightweight interface description.
10. The method of claim 1, wherein the graphic user interface is text based.
11. The method of claim 1, wherein the attributes include one or more animation variables.
12. An animation system that provides a user interface for manipulating an object without loading the object in memory, the system comprising:

a processor; and

a memory device including instructions that, when executed by the processor, cause the processor to:

interpret a model corresponding to an object in a database;

identify one or more attributes of the interpreted model;

generate a lightweight interface description based on the one or more identified attributes; and

store the lightweight interface description in the database, wherein the lightweight interface description is used for generating a graphic user interface on a display, wherein the graphic user interface is useful for manipulating the model.
13. The animation system of claim 12, wherein the generated lightweight interface description is an XML file.
14. The animation system of claim 12, wherein the memory device includes instructions that, when executed by the processor, cause the processor to: obtain a lightweight interface description of a model that is selected by a user for manipulation; formulate user interface elements based on the lightweight interface description; and present a graphic user interface including the formulated user interface elements through which the user manipulates the model.
15. The animation system of claim 12, wherein the memory device includes instructions that, when executed by the processor, cause the processor to: receive user input specifying values for one or more attributes through the graphic user interface.
16. The animation system of claim 12, wherein the user is provided with an option to use data structures representing the entire model instead of the obtained lightweight interface description.
17. The animation system of claim 12, wherein the attributes include one or more animation variables.
18. A computer program product embedded in a computer readable medium for providing a graphic user interface for manipulating an object without loading the object in memory, comprising:

program code for obtaining a lightweight interface description of an object model;

program code for formulating user interface elements based on the lightweight interface description; and

program code for displaying the formulated interface elements in a graphic user interface, wherein the graphic user interface enables the user to specify values for attributes of the object model.
19. The computer program product of claim 18, further including program code for receiving user input specifying values for one or more attributes of the object model through the graphic user interface.
20. The computer program product of claim 19, further including program code for storing the received user input in a data file which is used to render the object model.
21. The computer program product of claim 18, wherein the lightweight interface description includes semantic information that is used for generating the user interface elements for manipulating the object.
22. The computer program product of claim 18, wherein the model is written in an interpreted language.
23. The computer program product of claim 22, wherein the information about attributes includes information about animation variables of the object model.

24. The computer program product of claim 23, wherein the lightweight interface description includes semantic information that describes how to display the graphic user interface for editing the attributes.

25. The computer program product of claim 19, wherein the lightweight interface description is created in an XML format.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP08878811.2A EP2377098A4 (en) | 2008-12-11 | 2008-12-11 | Manipulating unloaded objects |
| PCT/US2008/086438 WO2010068210A1 (en) | 2008-12-11 | 2008-12-11 | Manipulating unloaded objects |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2008/086438 WO2010068210A1 (en) | 2008-12-11 | 2008-12-11 | Manipulating unloaded objects |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2010068210A1 (en) | 2010-06-17 |
Family
ID=42242981
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2008/086438 WO2010068210A1 (en) (Ceased) | Manipulating unloaded objects | 2008-12-11 | 2008-12-11 |
Country Status (2)
| Country | Link |
|---|---|
| EP (1) | EP2377098A4 (en) |
| WO (1) | WO2010068210A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030167456A1 (en) * | 2000-04-17 | 2003-09-04 | Vinay Sabharwal | Architecture for building scalable object oriented web database applications |
| US20030212987A1 (en) * | 2001-02-28 | 2003-11-13 | Demuth Steven J. | Client container for building EJB-hosted java applications |
| US20050248563A1 (en) * | 2004-05-10 | 2005-11-10 | Pixar | Techniques for rendering complex scenes |
| US20070192818A1 (en) * | 2004-10-12 | 2007-08-16 | Mikael Bourges-Sevenier | System and method for creating, distributing, and executing rich multimedia applications |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP2377098A4 * |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015177283A1 (en) * | 2014-05-22 | 2015-11-26 | Citerin Jean-Christophe | Method for the customisation of a customisable object for a client/server system; and associated information-recording support and client/server system |
| FR3021444A1 (en) * | 2014-05-22 | 2015-11-27 | Jean-Christophe Citerin | METHOD FOR CUSTOMIZING A CUSTOMIZABLE OBJECT FOR A CLIENT / SERVER SYSTEM; INFORMATION RECORDING MEDIUM AND ASSOCIATED CLIENT / SERVER SYSTEM |
| CN116958433A (en) * | 2023-07-21 | 2023-10-27 | 武汉熠腾科技有限公司 | Quick loading method and system for oversized surface number model |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2377098A4 (en) | 2014-10-01 |
| EP2377098A1 (en) | 2011-10-19 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 08878811; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2008878811; Country of ref document: EP |