
MXPA97002604A - Method for the generation of virtual image and its apparatus - Google Patents

Method for the generation of virtual image and its apparatus

Info

Publication number
MXPA97002604A
MXPA97002604A MXPA/A/1997/002604A MX9702604A
Authority
MX
Mexico
Prior art keywords
virtual
control
codes
virtual space
virtual image
Prior art date
Application number
MXPA/A/1997/002604A
Other languages
Spanish (es)
Other versions
MX9702604A (en)
Inventor
Watari Juro
Sonoda Yoshihiro
Original Assignee
Sega Enterp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP20484895A external-priority patent/JP3734045B2/en
Application filed by Sega Enterp Ltd filed Critical Sega Enterp Ltd
Publication of MX9702604A publication Critical patent/MX9702604A/en
Publication of MXPA97002604A publication Critical patent/MXPA97002604A/en

Abstract

The virtual image generating apparatus (1000) is a virtual image generating apparatus for a game unit or the like. It comprises a plurality of (for example, two) input means (11: control levers or the like) for generating codes associated with the direction of operation; decoding means (101: a CPU, controller, or the like) for accepting the codes generated through the operation of the plurality of input means and assigning a control profile for a mobile object associated with each combination of input codes; and image generation means (101, 108-117: a CPU, a geometallizer, a graphics controller, or the like) for generating virtual images in which the control profiles assigned by the decoding means are reflected in the movement of the mobile objects within the virtual space. The direction of operation is detected digitally and the corresponding movements are assigned, thereby eliminating unintended input and allowing mobile objects to be controlled freely in three dimensions in a virtual space.

Description

METHOD FOR THE GENERATION OF VIRTUAL IMAGE AND ITS APPARATUS TECHNICAL FIELD The present invention relates to a virtual image generation technique for use in game units, simulators, and the like, and particularly to a technique for generating the images (hereinafter called "virtual images") obtained when an object present in a virtual three-dimensional space (hereinafter called "virtual space") is projected (perspective projection) onto a two-dimensional plane corresponding to a specific viewpoint.
BACKGROUND OF THE INVENTION In recent years, game units and simulators equipped with virtual image generating apparatus have been developed that make it possible for moving objects (objects) that move through three-dimensional space to do combat with each other. These virtual image generating apparatus are usually equipped with a main unit housing a computer unit for executing stored programs, an input device for sending control signals to the computer unit instructing it to move the objects displayed within the virtual image, a display device for viewing the virtual images generated by the computer unit according to the program sequence, and a sound device for generating sounds according to the program sequence. Examples of devices with the architecture described above include a driving game unit with an automobile-racing theme, in which cars compete with enemy cars on a circuit, and simulators that recreate the experience of piloting a helicopter or airplane. In this type of device, a highly realistic simulation of the movement of the car or helicopter is extremely important. For example, in a driving game such as that depicted in Figure 8A, input devices resembling those of a real car are used: a steering wheel, an accelerator pedal, and a brake pedal. In a helicopter or other simulator, control sticks and similar input devices are used; the control signals input through these devices are processed by the CPU (central processing unit) of the computer unit. The computer unit repeatedly performs calculations to assign the relative positions of the objects within the virtual space, including the data for the movement of enemy objects when enemy objects are also present.
As players become more adept at games, it has become necessary to go beyond conventional movement and develop mobile objects such as player-controlled robots, humans, and the like. Particularly in the field of game devices, games are being developed in which objects not only move in two dimensions over an environment created in the virtual space (hereinafter called "virtual terrain"), but also jump up from the virtual terrain in order to leap over another character or to engage in combat in mid-air. However, input devices for conventional virtual image generation apparatus, although suitable for controlling the two-dimensional movement of objects through a virtual space, are not adapted to controlling three-dimensional movement such as jumping. For example, in the driving games mentioned above, the steering wheel (the main control means) controls the moving object in the lateral direction (as seen from the player's point of view), while the accelerator and brake pedals control movement in the forward direction; there is no way to control the movement of the moving object in the vertical direction. Similarly, in simulators, a single control lever is used to control all movements of the moving object in the three directions: forward, lateral, and vertical. In combat-style game units, the game unit must provide sufficient control to allow agile movement in order to evade enemy attacks. In such cases, a special control button or control lever can be provided to control jumping, but this makes operation complicated and does not allow actions to be transmitted to the game unit with the speed and sensitivity the player desires. In addition, input devices that are heavily loaded with features incur higher costs. In order to improve control, a video game unit that provides game control using two control levers is taught in published Japanese Patent Application 6-277363.
In this prior-art example, thrust vectors are assigned according to the inclination of each of the two control levers, and the two vectors are synthesized to produce complex actions. However, in this prior-art example it is difficult to move the object quickly in the desired direction through the synthesis of two vectors, and it is not possible to move the object freely to a desired position in three-dimensional space.
In order to solve this problem, it is an object of the present invention to provide a virtual image generation method that allows a mobile object to be moved freely and without input errors in all three dimensions within a virtual space, and an apparatus therefor.
SUMMARY OF THE INVENTION The invention of claim 1 is a virtual image generation method for generating virtual images that include mobile objects (robots, airplanes, and the like) undergoing relative motion within a virtually created virtual space (a so-called world coordinate system), comprising the steps of generating codes associated with the operating direction of a plurality of (for example, two) input means (control levers or the like), assigning control profiles for the mobile objects associated with the combination of codes generated by the plurality of input means, and generating virtual images in which the assigned control profiles are reflected in the movement of the mobile objects within the virtual space. The invention of claim 2 is a virtual image generation apparatus for generating virtual images that include mobile objects undergoing relative motion within a virtually created virtual space, comprising a plurality of (for example, two) input means (control levers or the like) for generating codes associated with the operating direction; decoding means (a CPU, controller, or the like) for accepting the codes generated through the operation of the plurality of input means and assigning a control profile for a mobile object associated with a combination of the plurality of input codes; and image generation means (a CPU, a geometallizer, a graphics controller, or the like) for generating virtual images in which the control profiles assigned by the decoding means are reflected in the movement of the mobile objects within the virtual space.
As to the aforementioned control profile, the invention of claim 3 is a virtual image generation apparatus as defined in claim 2, wherein the decoding means, in the event that the combination of codes from the input means equals a given combination (for example, when the left input means is leaned to the left and the right input means is leaned to the right), assigns a control profile such that the mobile object moves upward in the direction perpendicular to the horizontal plane in the virtual space through which the mobile object moves; and the image generation means, in the event that the assigned control profile is such that the mobile object moves upward, generates a virtual image in which the mobile object moves upward from the horizontal plane in the virtual space. As an alternative control profile, the invention of claim 4 is a virtual image generation apparatus as defined in claim 2, wherein the decoding means, in the event that the combination of codes from the input means equals a given combination (for example, when the left input means is tilted forward and the right input means is tilted back toward the player), assigns a control profile such that the mobile object rotates while remaining in the same position within the virtual space; and the image generation means, in the event that the assigned control profile is such that the mobile object rotates, generates a virtual image in which the mobile object rotates while remaining in the same position within the virtual space.
As an alternative control profile, the invention of claim 5 is a virtual image generation apparatus as defined in claim 2, wherein the decoding means, in the event that the combination of codes from the input means equals a given combination (for example, when the left input means is tilted diagonally forward to the right and the right input means is tilted to the right), assigns a control profile such that the mobile object moves in the lateral direction along a particular circle centered on a given central axis within the virtual space; and the image generation means, in the event that the assigned control profile is such that the mobile object moves laterally along said circle, generates a virtual image in which the mobile object moves in the lateral direction along said circle. The invention of claim 6 is a virtual image generation apparatus as defined in claim 2, wherein the input means are control levers that generate a neutral-position code when placed at a certain location and a different code when moved in any of the eight directions around said location. Control buttons or switches that can detect a neutral position and eight directions may be substituted for the control levers. According to the invention of claim 1 or claim 2, numerous combinations of codes are produced by the control positions of the plurality of control means. By associating these various combinations with various movements of mobile objects in the virtual space, mobile objects can be made to undergo complex movement. The movement of a mobile object can therefore be clearly defined through the selection of a given control position, and by perspective projection of the mobile object, the virtual terrain, and the like with reference to this defined movement, virtual images suited to game units, simulators, and the like can be generated.
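The nine-position lever of claim 6 can be sketched in code. The following is an illustrative reconstruction, not part of the patent: it quantizes an analog deflection into one of nine digital codes (neutral plus eight directions). The function name, dead zone, and code ordering are all assumptions.

```python
# Sketch (assumed names and thresholds): quantizing a lever deflection
# into one of nine digital codes, code 0 being neutral and codes 1-8
# running counterclockwise starting from "right".
import math

DEADZONE = 0.25  # deflections below this radius read as neutral

def lever_code(x, y):
    """Map a deflection (x right, y forward), each in [-1, 1],
    to a direction code 0..8."""
    if math.hypot(x, y) < DEADZONE:
        return 0  # neutral position
    # Divide the circle into 8 sectors of 45 degrees, centered on
    # the 8 principal directions.
    angle = math.degrees(math.atan2(y, x)) % 360.0
    sector = int((angle + 22.5) // 45) % 8
    return sector + 1  # 1 = right, 3 = forward, 5 = left, 7 = back
```

Two such codes, one per lever, form the combination decoded by the apparatus.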
Even under conditions where there is a high probability of unintended operation, such as when the player operates the input device intuitively in order to dodge a bullet, assigning to the mobile object movements that approach those presumably intended by the player reduces the likelihood of unintended operation, thereby reducing the demands placed upon the player. By assigning three-dimensional movements, such as jumping, to the moving object, as well as movements in two dimensions, it becomes possible to move the mobile object in three dimensions. Specifically, according to the invention of claim 3, movement in the direction perpendicular to the horizontal plane of the virtual space is assigned to a specific operation, thereby allowing the three-dimensional movement of a mobile object to be controlled by means of a determined operation. According to the invention of claim 4, rotation in a fixed position is assigned to a specific operation, allowing the orientation of the mobile object to be changed, without changing its two-dimensional position in the virtual space, by means of a certain operation. According to the invention of claim 5, an orbit around a given central axis is assigned to a specific operation, making possible such actions as orbiting around an enemy by means of a certain operation. According to the invention of claim 6, control levers are used as the control means, and each control lever can assume nine different control positions. In this way, the use of a plurality of control levers provides a number of combinations sufficient to allow control of the complex movements of a mobile object.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a simplified block diagram of a game unit belonging to an embodiment of the present invention; Figure 2 is an illustrative diagram of the input devices (control levers) belonging to the embodiment; Figure 3 is a diagram illustrating the operating method of a control lever; Figure 4 is an assignment diagram showing the control positions of the left and right control levers and the associated movements of a mobile object in the embodiment; Figure 5 is a flow chart illustrating the operation of the game unit belonging to the embodiment; Figure 6 is a diagram illustrating Example 1; Figure 7 is a diagram illustrating Example 2; Figure 8 is a diagram representing an input device for a conventional virtual image generation apparatus.
BEST MODE FOR CARRYING OUT THE INVENTION Favorable embodiments of the present invention will be described below with reference to the drawings. (I) Description of the Structure A structural diagram of a game unit representing an embodiment of the present invention is presented in Figure 1. In this embodiment, a robot serves as the mobile object. Controlled by the player, the robot moves freely within the virtual space and enters into combat with enemy robots. As shown in Figure 1, the game unit 1000 comprises the following basic structural elements: a main body of the game unit 10, an input device 11, an output device 12, a TV monitor 13, and a speaker 14. The input device 11 is provided with control levers that are operated with the left and right hands of the player in order to control the movement of the robot. The output device 12 is provided with several types of lamps that inform the player of the operational status of the unit. The TV monitor 13 displays the image of the combat game; a head-mounted display (HMD), a projector, or the like may be used instead of a TV monitor.
As the image generating means, the main body of the game unit 10 has a counter 100 and a CPU (central processing unit) 101; it is also equipped with ROM 102, RAM 103, a sound device 104, an I/O interface 106, a program processor 107, a coprocessor 108, terrain information ROM 109, a geometallizer 110, shape information ROM 111, a display device 112, texture information ROM 113, texture map RAM 114, a frame buffer 115, an image synthesis device 116, and a D/A converter 117. The main body of the game unit 10 generates new virtual images at certain intervals (for example, every 1/60 of a second, corresponding to the vertical synchronization cycle of the television format). The CPU 101, which functions as the decoding means, is connected through a bus to the counter 100, which stores initial values; to ROM 102, which stores the program for the game sequence and image generation; to RAM 103, which temporarily stores information; and to the sound device 104, the I/O interface 106, the program processor 107, the coprocessor 108, and the geometallizer 110. RAM 103 temporarily stores the information required for the coordinate conversion of polygon information and other functions, and holds various commands for the geometallizer (such as the display of objects), the results of the matrix operations during the coordinate conversion process, and other information. When the player inputs control signals through the input device 11, the I/O interface 106 issues an interrupt command to the CPU 101; when the CPU 101 sends information for the display lamps, this information is sent to the output device 12. The sound device 104 is connected to the speaker 14 through a power amplifier 105. The audio signals output by the sound device 104 are amplified by the power amplifier 105 and transmitted to the speaker 14.
The shape information ROM 111 stores the polygon information required to generate virtual images of various physical objects, such as the player's robot, enemy robots, bomb explosion images, and virtual terrain elements such as obstacles, background, and topographic features. The terrain information ROM 109 stores shape information for physical objects (obstacles, buildings, topographic features, and the like) with respect to which it is necessary to make overlap determinations, that is, to determine whether an object should collide with a topographic feature or be hidden by one. In contrast to the relatively detailed polygon information groupings for image display stored in ROM 111, the information groupings stored in ROM 109 comprise coarser units, sufficient for performing overlap determinations and the like. For example, the topographic-feature information can include an ID for each surface defining a topographic feature, and the relationship between each ID and its topographic-feature surface is entered in a table and saved in ROM 109. Polygon information consists of data sets comprising a plurality of vertices, indicating, in relative or absolute coordinates, the vertices of the polygons (usually triangles or quadrangles) that are the elements constructing the shape of a physical object. In order to generate virtual images, a coordinate system (world coordinate system) that indicates the relative positions of objects, obstacles, and other physical objects in the virtual space must be converted into a two-dimensional coordinate system (viewpoint coordinate system) that represents the virtual space as seen from a designated viewpoint (for example, a camera or the like). The viewpoint is placed in a certain position (for example, diagonally above the object) from which the controlled object is visible.
In this way, the coordinates of the viewpoint change according to the coordinates of the object. The coordinates of the object are sent as control signals from the input device 11 to the CPU 101. The overall input device 11 is depicted in Figure 2A. As can be seen from the drawing, the input device 11 comprises a left control lever 11L operated with the player's left hand and a right control lever 11R operated with the right hand. Each control lever has a total of nine control positions: forward, backward, left, right, the four diagonals, and neutral (see Figure 3). The control signals corresponding to the various control positions are produced as digital signal codes. As shown in Figure 2B, the control levers are equipped with firing triggers 11F and turbo triggers 11T for acceleration; these produce codes when pressed. Figure 4 shows the assignment of object movements to the control positions. Because each control lever has nine codes, the simultaneous operation of both the left and right control levers gives a total of 81 possible combinations (9 possibilities x 9 possibilities). If a direction of movement for the object in the next interval is assigned to each combination, a total of 81 actions can be specified using the two control levers. The assignments must be made in such a way that the direction of movement of the actual object reflects as closely as possible the direction in which the player intuitively tries to move it. However, because in addition to movement over the horizontal plane there are also movement in the longitudinal direction, rotation, and jumping in the direction perpendicular to the horizontal plane of the virtual space (the z direction in the world coordinate system), these special actions must be assigned to certain control positions.
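The decoding of lever-code pairs into control profiles can be sketched as a lookup table. The entries below are illustrative only (the actual Figure 4 assignments are not reproduced here); they are modeled loosely on the combinations described in claims 3 and 4.

```python
# Sketch (assumed names; illustrative assignments, not Figure 4):
# decoding a pair of lever codes into a control profile for the next
# interval. With 9 codes per lever there are 9 x 9 = 81 combinations;
# a dictionary covers the special actions, with a default for the rest.
N, F, B, L, R = "neutral", "forward", "back", "left", "right"

# (left_lever, right_lever) -> control profile
SPECIAL = {
    (L, R): "jump",            # levers leaned apart: move upward
    (F, B): "rotate_in_place", # one forward, one back: pivot
    (F, F): "advance",         # both forward: move forward
    (B, B): "retreat",
}

def decode(left_code, right_code):
    """Assign a control profile for the mobile object."""
    return SPECIAL.get((left_code, right_code), "steer")
```

Because the table is exhaustive over discrete codes, every combination maps to exactly one well-defined movement, which is the property the text relies on to eliminate unintended input.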
When attacked by enemies, the player instinctively moves the control levers to dodge the enemies' bullets and avoid the attack. With the control lever assignments of this embodiment, the movement assignments are made in such a way that the movements intended by the player are reflected in the movements of the objects, even for those actions the player performs reflexively. Once one of the code combinations indicated in Figure 4 has been input from the input device 11, the CPU 101, following the program with the assignments indicated in Figure 4, generates the viewpoint coordinates and the object coordinates for the next interval. Once the coordinates have been established, the CPU makes collision and overlap determinations for the physical objects. Objects, obstacles, and other physical objects are each composed of a plurality of polygon data. For each physical object, a certain vertex of one of its polygons is selected as the origin, the total shape is described using a coordinate system that gives the coordinates of the other vertices (body coordinate system), and this is associated with the polygon information forming the physical object. To display the image of an explosion when an object or obstacle is hit by a bullet or light beam, it is necessary to calculate the relative positions of the physical objects and make a collision determination to establish whether the physical objects have collided.
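A minimal sketch of this collision determination, under assumed geometry: each object's vertices live in its own body coordinate system, so both objects are first placed in the shared world coordinate system and then compared. A bounding-sphere test stands in here for the coarse overlap shapes of ROM 109; rotation is omitted for brevity, and all names are illustrative.

```python
# Sketch (illustrative, not the patent's exact procedure).
import math

def body_to_world(origin, vertex):
    """Translate a body-coordinate vertex into world coordinates
    (rotation omitted for brevity)."""
    return tuple(o + v for o, v in zip(origin, vertex))

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Coarse collision determination between two physical objects,
    each approximated by a bounding sphere in world coordinates."""
    return math.dist(center_a, center_b) <= radius_a + radius_b
```

A "collision" result from such a test is what triggers the explosion-image polygons described later in the display stage.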
To obtain the relative positions of physical objects represented in body coordinate systems, conversions must be made to the coordinate system that defines the virtual space (the world coordinate system). Once the relative position of each physical object has been determined, it is possible to determine whether physical objects have collided with each other. In order to display an obstacle transparently when, from the viewpoint from which the virtual space is observed, an object or the like passes behind the obstacle, it is necessary to make a determination of the state of overlap of the physical objects. To do this, the physical objects in the virtual space are converted to the coordinate system viewed from the viewpoint, and a relative vector between the obstacle and the object and a line-of-sight vector between the object and the viewpoint are calculated. Once the angle between these two vectors has been computed, it can be determined whether or not the object should be hidden by the obstacle. Because these coordinate conversions involve matrix operations that include floating-point operations, the matrix operations are performed by the coprocessor 108, which references the terrain and similar information stored in ROM 109; based on the results of these operations, the CPU 101 makes a collision determination or an overlap determination. A further requirement for image display is that the physical objects in the virtual space be projected onto a two-dimensional plane constituting the field of vision, in the same way as physical objects present in the virtual space would be observed from a given viewpoint (for example, a camera). This is called perspective projection, and the coordinate conversion made through matrix operations for perspective projection is called perspective conversion.
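The overlap (occlusion) determination described above can be sketched with a dot-product angle test, under assumed geometry: if the obstacle lies roughly on the line of sight from the viewpoint to the object, and nearer than the object, the object should be drawn transparently. The function name and threshold are illustrative assumptions.

```python
# Sketch (assumed names and threshold) of the overlap determination:
# compare the angle between the line-of-sight vector and the vector
# to the obstacle via their dot product.
import math

def is_hidden(viewpoint, obstacle, target, cos_threshold=0.95):
    """Return True when the obstacle sits roughly on the line of
    sight from the viewpoint to the target, nearer than the target."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    line_of_sight = sub(target, viewpoint)
    to_obstacle = sub(obstacle, viewpoint)
    d_target = math.hypot(*line_of_sight)
    d_obstacle = math.hypot(*to_obstacle)
    if d_obstacle == 0 or d_obstacle >= d_target:
        return False  # obstacle at the viewpoint or beyond the target
    # Cosine of the angle between the two vectors.
    dot = sum(a * b for a, b in zip(line_of_sight, to_obstacle))
    return dot / (d_target * d_obstacle) > cos_threshold
```

When such a test returns True, the display stage applies the mesh or translucent treatment mentioned below.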
It is the geometallizer 110 that executes the perspective conversion to produce the virtual image displayed at a given moment. The geometallizer 110 is connected to the shape information ROM 111 and to the display device 112. The geometallizer 110 receives from the CPU 101 the data indicating the polygons required for the perspective conversion, as well as the matrix information required for the perspective conversion. Based on the matrix provided by the CPU 101, the geometallizer 110 performs the perspective conversion on the polygon information stored in the shape information ROM 111, producing data transformed from the three-dimensional coordinate system of the virtual space into the field-of-vision coordinate system. At this stage, if it is necessary to display an explosion image as a result of a collision determination by the CPU 101, polygon information for the explosion image is used. The display device 112 applies texture to the shape information converted to the field-of-vision coordinate system and sends the result to the frame buffer 115. If, as a result of the overlap determination by the CPU 101, an object or the like is hidden behind an obstacle, the determined transparency display is carried out (mesh treatment or translucent treatment). To apply texture, the display device 112 is connected to the texture information ROM 113 and the texture map RAM 114, and it is also connected to the frame buffer 115. The program processor 107 computes text and other screen information (stored in ROM 102). The image synthesis device 116 superimposes the text information output by the processor 107 on the image information supplied by the aforementioned frame buffer 115 and resynthesizes the image. The resynthesized image information is fed to the TV monitor 13 by means of the D/A converter 117.
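The perspective conversion performed by the geometallizer can be sketched for a single point, under assumed conventions: the point is already expressed in viewpoint (camera) coordinates with z pointing away from the camera, and the focal length is an illustrative parameter, not a value from the patent.

```python
# Sketch (assumed conventions): projecting a viewpoint-coordinate
# point onto the two-dimensional view plane by dividing by depth.
def perspective_project(point, focal_length=1.0):
    """Project a 3-D viewpoint-coordinate point (x, y, z) onto the
    2-D view plane; z must be positive (in front of the camera)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the viewpoint")
    return (focal_length * x / z, focal_length * y / z)
```

In the hardware this divide-by-depth is folded into the matrix operations applied to every polygon vertex, but the per-point effect is the same: distant points are drawn closer to the center of the view plane.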
(II) Description of the Operation Next, the operation of this embodiment will be described with reference to the flow chart in Figure 5. When the player moves the left control lever 11L, the right control lever 11R, or both, thereby introducing a new control signal to the I/O interface 106, the I/O interface 106 issues an interrupt command to the CPU 101. If there is no interruption (step S1: NO), the CPU executes other processes (step S2); but if an interrupt command has been issued (step S1: YES), the control signal is acquired. In this embodiment, for the purpose of determining whether or not an input was unintended, the control signal is read at each interval after an interrupt command, and a determination of correct entry is made only if the same input signal is read eight consecutive times. To do this, the counter is first set to the initial value N (step S3), and the control signal of the left control lever and the control signal of the right control lever are input (steps S4, S5). The CPU 101 compares the value of the control signal input during the previous interval with the value of the currently input control signal (step S6); if the two are not equal (step S6: NO), a determination of unintended input is made, and the CPU waits for the next interrupt command (step S1). If the value of the previous control signal and the value of the current input control signal are equal (step S6: YES), the CPU determines whether the same determination has been made eight times (step S7). If it has been made fewer than eight times (step S7: NO), the counter N is incremented (step S8) and the same procedure is repeated (steps S4-S7). If the same value has been entered eight times (step S7: YES), the system proceeds to generate a virtual image based on the correct control signal.
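The input-validation loop of steps S3-S8 can be sketched as follows. The sampling interface is an assumption: `sample` stands in for reading the lever codes at one interval.

```python
# Sketch of steps S3-S8 (assumed interface): a control signal is
# accepted only after the same code pair has been read on 8
# consecutive intervals, filtering out unintended momentary inputs.
REQUIRED_REPEATS = 8

def read_validated_input(sample):
    """`sample` is a callable returning the (left, right) code pair
    for the current interval. Returns the pair once it has been seen
    8 consecutive times, or None if the signal changed midway
    (treated as an unintended input)."""
    first = sample()
    for _ in range(REQUIRED_REPEATS - 1):
        if sample() != first:
            return None  # input changed: wait for the next interrupt
    return first
```

At 1/60 of a second per interval, eight samples correspond to roughly 0.13 seconds, short enough not to be felt as lag yet long enough to reject a momentary slip of the lever.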
In step S9, based on the coordinates of the destination point of the player's robot (object), the CPU 101 creates a perspective conversion matrix (a matrix for the perspective conversion of shape information in the virtual space into the viewpoint coordinate system) and provides it to the geometallizer 110. At the same time, the CPU 101 provides the coprocessor 108 with the terrain information stored in ROM 109 and instructs the coprocessor to execute the coordinate conversion for making a collision determination; if a "collision" result occurs, the information indicating the necessary polygons is output to the geometallizer 110. Where the vector operations for making an overlap determination have produced an overlap result, the CPU 101 instructs the geometallizer 110 to produce a transparency display. In step S10, processing similar to that described in step S9 is executed for the enemy robot. The enemy robot can be made to move according to the program stored in ROM 102, or can be moved by another input device controlled by another player. In step S11, the information designating the polygons required for the perspective conversion is provided to the geometallizer 110. In step S12, the geometallizer 110 uses the perspective conversion matrix provided to it to execute the perspective conversion of the designated shape information and supplies the result to the display device 112. The display device 112 executes texture application and the like for the polygons converted into perspective and outputs the result to the frame buffer 115. With the embodiment described above, the control levers produce control signals in the form of digital information, thereby minimizing the likelihood of unintended input. Because the movement assignments of the mobile object are made in such a way that the objects can be moved correctly, control is facilitated, even in scenes where it is easy to make unintended movements.
Special assignments are made for control positions corresponding to actions the player finds difficult to perform, such as jumping, spinning, circling an enemy, rapid acceleration, and sudden stopping, thereby allowing objects to move freely in all three dimensions within a virtual space.
(III) Other Embodiments The present invention is not limited to the embodiment described above and can be adapted in various ways. For example, the input device was equipped with two control levers in the above embodiment, but the present invention can be adapted to any configuration that produces digital control signals, such as joysticks or control buttons that can be pressed in eight directions. The number of control directions is not limited to eight; more or fewer directions may be implemented.
The control position assignments are not limited to those indicated in Figure 4, and various modifications are permitted according to the specifications of the game unit, simulator, or other unit equipped with the image generating apparatus belonging to the present invention. As the present invention was designed with the primary purpose of facilitating the control of mobile objects in the virtual space, the virtual image generation method can employ the various image generation methods known in computer graphics.
Examples An example in which the game unit 1000 of the aforementioned embodiment of the invention is actually used will now be described. Figure 6 represents example 1 (scene 1), illustrating movements to evade the bullets fired by a combat partner (enemy). Figure 6A represents the positional relationship at the instant the bullet is fired by the enemy, with the positions seen from above. As in the previous embodiment, the player manipulates the control levers in the manner shown in Figure 4 (1) to evade the bullet. If the control lever assignments have been made in the manner indicated in Figure 4, the player's object executes a "slow, left-handed, right turn". As shown in Figure 6B, the player's object moves to encircle the enemy.

Scene 1, when actually displayed as a virtual image on a monitor, would appear as shown in Figure 6C. Because the visual point of the virtual image rotates with the movement of the player's object, the displayed position of the enemy is virtually unchanged while the movement of the player's object encircling the enemy is displayed. This image display minimizes the movement of the player's line of sight, reducing the demands placed on the player and allowing the sensation of real combat to be maintained.

As shown in Figure 6D, to advance the player's object forward after encircling the enemy for the purpose of counterattacking, the player must tilt both control levers in the forward direction (Figure 4 (2)). To execute both a fast turn and a forward advance within a short time, the player may momentarily tilt the control levers in the manner shown in Figure 4 (3). However, with the assignments indicated in Figure 4, these frequently used control positions are reflected in the objects as the movements intended by the player, such that the player's object can be advanced toward the enemy.
That is, when objects are controlled through a vector synthesized from two levers, as described in the Background of the Invention, the operation indicated in Figure 4 (3) can easily cause an unintended movement; in this example, however, the operation indicated in Figure 4 (3) has the assignment "advance slightly to the left diagonally", thereby allowing the object to be moved in the intended direction without slowing the pace of the game sequence.

Figure 7 represents example 2 (scene 2), illustrating the enemy and the player's object circling each other around an obstacle. Figure 7A shows the positional relationship of the enemy and the player's object. If only two-dimensional movements can be specified within the virtual space, as with conventional game units, the two can only circle around the same obstacle, which stalls the game. With this example, the player's object can be made to "jump" as it circles, as shown in Figure 4 (4), thereby allowing the player's object to descend on the enemy from above and attack, as shown in the actual virtual image in Figure 7B. By making a "move forward" control movement after making the "jump" control movement, the player's object can move toward the enemy while maintaining its height. This produces a fast-paced game sequence without complicated control movements.
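The lateral movement along a circle centered on an enemy or obstacle, used in both examples above, reduces to rotating the object's horizontal offset from the center axis while preserving radius and height. A minimal sketch, assuming a y-up coordinate convention:

```python
import math

def circle_step(pos, center, angular_step):
    """Advance an object one lateral step along the circle around the
    vertical axis through `center`, preserving radius and height."""
    dx = pos[0] - center[0]
    dz = pos[2] - center[2]
    c, s = math.cos(angular_step), math.sin(angular_step)
    # Rotate the horizontal offset about the center axis; y is unchanged.
    return (center[0] + c * dx - s * dz,
            pos[1],
            center[2] + s * dx + c * dz)
```

Combining this step with a changed y coordinate would give the "jump while circling" movement of example 2.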
INDUSTRIAL APPLICATION In accordance with the present invention, a plurality of control positions of the control device are combined, and a specific movement is specified for each combination of control positions. This reduces the likelihood of unintended movements and facilitates control, thereby making it possible to freely control moving objects within the virtual space. For selected control positions, three-dimensional movement can be facilitated by assigning a jump motion, the rotation of a moving object can be facilitated by assigning a rotational movement, and movement on a circle around a predetermined axis can be assigned to facilitate turns around an enemy.

Claims (6)

1. A method for generating virtual images that include mobile objects that undergo relative movement within a virtually created virtual space, characterized in that it comprises the steps of: generating codes associated with the operation direction of a plurality of input means; assigning control profiles for said mobile objects associated with the combination of codes generated by said plurality of input means; and generating said virtual images in which the assigned control profiles are reflected in relation to the movement of said mobile objects within the virtual space.
2. An apparatus for generating virtual images that include mobile objects that undergo relative movement within a virtually created virtual space, characterized in that it comprises: a plurality of input means for generating codes associated with the direction of operation; decoding means for accepting the codes generated through the operation of said plurality of input means and assigning a control profile to said mobile object associated with a combination of a plurality of input codes; and image generating means for generating said virtual images, wherein said control profiles of the mobile object assigned by said decoding means are reflected in relation to the movement of the mobile objects within said virtual space.
3. An apparatus for virtual image generation, according to claim 2, further characterized in that said decoding means, in the event that a combination of codes from said input means matches a given combination, take the direction perpendicular to a horizontal plane in said virtual space through which the mobile object moves as an upward direction, and assign a control profile such that said mobile object moves upward; and said image generating means, in the event that said assigned control profile is such that said mobile object moves in a vertical direction, generate a virtual image whereby said mobile object moves in said vertical direction from a horizontal plane in said virtual space.
4. An apparatus for virtual image generation, according to claim 2, further characterized in that said decoding means, in the event that a combination of codes input from the input means matches a given combination, assign a control profile such that said mobile object rotates or turns while remaining in the same position within said virtual space; and said image generating means, in the event that said assigned control profile is such that the mobile object rotates, generate a virtual image whereby said mobile object rotates or turns while remaining in the same position within said virtual space.
5. An apparatus for virtual image generation, according to claim 2, further characterized in that said decoding means, in the event that a combination of codes input from the input means matches a given combination, assign a control profile such that said mobile object moves in the lateral direction along a determined circle centered on a given central axis within said virtual space; and said image generating means, in the event that said assigned control profile is such that said mobile object moves in the lateral direction along said circle, generate a virtual image whereby said mobile object moves in the lateral direction along said circle.
6. An apparatus for virtual image generation, according to claim 2, further characterized in that said input means are control levers that generate a central position code when placed at a determined location and generate a different code when moved in any of eight directions around said determined location.
MXPA/A/1997/002604A 1995-08-10 1997-04-09 Method for the generation of virtual image and suapar MXPA97002604A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP20484895A JP3734045B2 (en) 1995-08-10 1995-08-10 Virtual image generation method and apparatus
JP7-204848 1995-08-10
PCT/JP1996/002267 WO1997006510A1 (en) 1995-08-10 1996-08-09 Virtual image formation method and its apparatus

Publications (2)

Publication Number Publication Date
MX9702604A MX9702604A (en) 1998-05-31
MXPA97002604A true MXPA97002604A (en) 1998-10-23


Similar Documents

Publication Publication Date Title
US6154197A (en) Virtual image generation method and its apparatus
JP2667656B2 (en) Driving game machine
US5405152A (en) Method and apparatus for an interactive video game with physical feedback
US10510189B2 (en) Information processing apparatus, information processing system, and information processing method
US5616031A (en) System and method of shadowing an object in motion
EP0778548B1 (en) Image processor and game machine using the same
US7044855B2 (en) Game device
US6377277B1 (en) Virtual image generation apparatus and method
US6404436B1 (en) Image processing method, image processor, and pseudo-experience device
US6196919B1 (en) Shooting game apparatus, method of performing shooting game, and computer-readable recording medium storing shooting game program
KR100393504B1 (en) Object orientation control method and apparatus
WO1996008298A1 (en) Three-dimensional simulator and image synthesis method
CN113711162A (en) System and method for robotic interaction in mixed reality applications
MXPA97002604A (en) Method for the generation of virtual image and suapar
JP4074726B2 (en) Three-dimensional game apparatus and information storage medium thereof
JP2888723B2 (en) Three-dimensional game device and image composition method
JP6580373B2 (en) Program, system, and method for controlling head mounted display
US20070265044A1 (en) Game program product, game apparatus and game method
JP2021043696A (en) Program, information processing apparatus, information processing system, information processing method, and head-mounted display
JPH11258974A (en) Three-dimensional simulator device and image synthesizing method
JP3179739B2 (en) Driving game machine and recording medium storing driving game program
JP5122111B2 (en) Simulation game machine, simulation game machine program
JP2000040169A (en) Image processing apparatus and game apparatus using the same
MXPA97003200A (en) Image processing method, image processing system and virt reality system
JP3753802B2 (en) Device for processing linked moving objects