
HK1105704A - Image processing device, image processing method, information recording medium, and program - Google Patents


Info

Publication number
HK1105704A
Authority
HK
Hong Kong
Prior art keywords
image
pixel
original image
pixels
unit
Prior art date
Application number
HK07114113.6A
Other languages
Chinese (zh)
Inventor
大久保建
Original Assignee
Konami Digital Entertainment Co., Ltd. (科乐美数码娱乐株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konami Digital Entertainment Co., Ltd. (科乐美数码娱乐株式会社)
Publication of HK1105704A

Description

Image processing device, image processing method, information recording medium, and program
Technical Field
The present invention relates to an image processing apparatus and an image processing method suitable for imparting a sense of speed to an image, a program for realizing these by a computer, and a computer-readable information recording medium on which the program is recorded.
Background
Conventionally, in the field of game technology, various techniques have been proposed to make image display feel faster when displaying moving images and the like. For example, the applicant has disclosed, in the document listed below, a technique of changing the compression ratio of an image in accordance with its display speed, so as to give the viewer a sense of speed.
Further, the following image processing technique has also been proposed: the video contained in an image is blurred in the direction in which it moves, so that an afterimage appears to be visible and the image acquires a sense of speed. This technique, known as a motion blur effect, uses blur to represent the motion of an object moving in one direction.
In general, to impart a motion blur effect to an image, the direction in which the video of an object moves must first be determined, and an image must then be generated that blurs the video along that direction; generating such an image requires a large amount of computation and time.
Patent document 1: Japanese Patent No. 3344555
Disclosure of Invention
However, when displaying images on a game device or various simulation devices, for example, high-speed and simple image processing techniques that can easily impart a sense of speed are required.
The present invention has been made to solve the above-described problems, and an object of the present invention is to provide an image processing apparatus and an image processing method suitable for providing an effect of giving a sense of speed to an image, a program for realizing these by a computer, and a computer-readable information recording medium on which the program is recorded.
In order to achieve the above object, according to the principle of the present invention, the following invention is disclosed.
An image processing apparatus according to claim 1 of the present invention includes a dividing unit, an acquiring unit, a mixing unit, and an output unit, and is configured as follows.
That is, the dividing unit divides an image (hereinafter, referred to as an "original image") composed of a set of pixels into a plurality of regions.
Typically, the original image is a rectangular bitmap image suitable for display on various display devices; each pixel included in the bitmap is treated as a pixel of the original image, and its pixel value is either a set of numerical values indicating the hue, brightness, and chromaticity of the pixel or an RGB (Red Green Blue) value.
Here, the dividing section divides the image such that the shape of each of the plurality of regions is long in a direction in which the video constituted by the pixels included in the region moves.
When the image displayed on the display device is, for example, a moving image of the forward view from the driver's viewpoint in a racing game rendered with three-dimensional graphics, an infinity point lies at the center of the screen, and the video constituted by the pixels moves in directions away from that point. Moreover, the farther the video is from the infinity point, the faster it moves. When giving a sense of speed to a single still image, the moving direction of the video can be considered in the same way.
In the present invention, the region is divided such that the divided region is long in the moving direction.
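As a concrete illustration of the radial motion model described above, the apparent motion direction of a pixel can be computed as the unit vector pointing away from the infinity point. This is a minimal sketch; the function name and signature are illustrative and not taken from the patent.

```python
import math

def motion_direction(px, py, vx, vy):
    """Unit vector of apparent motion for the pixel at (px, py),
    pointing radially away from the infinity point (vx, vy).
    Illustrative only; the patent does not prescribe this function."""
    dx, dy = px - vx, py - vy
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # at the infinity point there is no apparent motion
    return (dx / dist, dy / dist)
```

A pixel to the right of the infinity point moves further right, and, as the text notes, apparent speed grows with distance from the infinity point.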
On the other hand, the acquiring unit acquires, for each of the plurality of divided regions, a representative value of the region based on pixel values of pixels included in the region.
Since each region includes a plurality of pixels, there are various methods for obtaining a representative value from the plurality of pixels, and the method will be described later.
Further, for each pixel included in the original image, the blending unit blends the representative value obtained for the region containing that pixel with the pixel's own value at a predetermined blending ratio, and obtains a processed image composed of the blending results.
In other words, a new image (the processed image) is obtained by alpha-blending the pixel value of each pixel of the original image with the representative value of the region containing it, at a predetermined α value. The α value may be constant over the entire image, may vary with the moving speed of the pixels in the region, or, in the racing game described above, may vary with the speed of the vehicle the driver rides (the moving speed of the viewpoint in the virtual three-dimensional space).
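The α blending described above can be sketched per pixel as a linear mix of the original pixel value and the region's representative value. This is a hedged illustration: the patent only requires blending at a predetermined ratio, and the per-channel rounding here is an assumption.

```python
def alpha_blend(pixel_value, representative_value, alpha):
    """Blend a pixel of the original image with the representative value
    of the region containing it. alpha = 0 keeps the original pixel;
    alpha = 1 replaces it entirely with the representative value."""
    return tuple(
        round((1.0 - alpha) * p + alpha * r)
        for p, r in zip(pixel_value, representative_value)
    )
```

As the text notes, `alpha` could instead be driven by the viewpoint's moving speed rather than held constant.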
Finally, the output unit outputs the obtained processed image.
The result of the α blending is output to a display device or the like. Since each region is elongated in the moving direction of the video, the α-blended result is blurred in that direction more strongly than the original image, which produces the effect of an afterimage caused by motion.
According to the present invention, by dividing an image into regions elongated in the direction of motion and acquiring a representative value for each region, an effect like an afterimage can be obtained easily, and a sense of speed can easily be given to the viewer.
Further, the image processing apparatus of the present invention may be configured as follows: the acquiring unit acquires an average of pixel values in each of the plurality of regions as a representative value of each of the plurality of regions.
That is, by using the average of the pixel values in a region as its representative value, a natural afterimage is expressed; as described later, the average can easily be obtained by an image arithmetic processor or the like, depending on the shape of the division.
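A minimal sketch of taking the average as the representative value, assuming the image is a mapping from coordinates to scalar pixel values (an illustration only; a real implementation would use the image arithmetic processor, as the text notes):

```python
def region_average(image, region):
    """Average pixel value over a region, used as its representative value.
    `image` maps (x, y) -> scalar pixel value; `region` is an iterable of
    (x, y) coordinates belonging to the region."""
    values = [image[p] for p in region]
    return sum(values) / len(values)
```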
The present invention is one of the preferred embodiments of the above invention, and according to the present invention, an afterimage can be naturally expressed.
Further, the image processing apparatus of the present invention may be configured as follows: the acquiring unit acquires, as a representative value of each of the plurality of regions, a pixel value of a pixel at a predetermined position included in the region.
In the above invention the average of the region's pixel values serves as the representative value, whereas the present invention provides a simpler way to obtain it. Various choices can be adopted for the predetermined position in each region: the center of the region, its centroid, the most upstream point along the moving direction of the video, the most downstream point, and so on.
The present invention is one of the preferred embodiments of the above invention, and according to the present invention, it is possible to obtain a representative value for representing an afterimage extremely easily, which contributes to a reduction in the amount of computation.
Further, the image processing apparatus of the present invention may be configured as follows: the plurality of regions each have, as boundaries, a 1st arc centered on a predetermined point, a 2nd arc centered on that point (or the predetermined point itself), a line segment on a 1st straight line passing through the point, and a line segment on a 2nd straight line passing through the point, and the direction in which the video moves is along the 1st arc.
Since the video moves along this 1st arc, the result is the effect of an afterimage of a rotating object; that is, the center of the sector is the center of rotation.
The present invention can express a sense of speed to a viewer with a natural afterimage representing rotation in the above-described invention.
Further, the image processing apparatus of the present invention may be configured as follows: the plurality of regions each have, as boundaries, at least a line segment included in a 1 st straight line that passes through a predetermined one point and a line segment included in a 2 nd straight line that passes through the predetermined one point, and the direction in which the video moves is a direction converging on the predetermined one point or a direction diverging from the predetermined one point.
In the above invention the regions were sectors, whereas in the present invention the regions are quadrilaterals. Considering the racing game example, if two opposite sides of such a quadrilateral are extended (yielding the 1st and 2nd straight lines), they intersect at the infinity point (the predetermined point).
The direction of the other pair of sides can be chosen arbitrarily. For example, when the original image is rectangular, one method is to make them parallel to the side of the rectangle closest to the quadrilateral.
When the original image is trapezoidal and the straight lines through its oblique sides intersect at the predetermined point, the other pair of sides may be made parallel to the upper and lower bases of the original image.
According to the present invention, since each region is a quadrilateral, its representative value can easily be obtained using functions such as the texture-pasting function of an image arithmetic processor, enabling high-speed processing.
The image processing apparatus of the present invention may be configured as follows.
That is, the original image is perspectively projected onto an intermediate 1st image composed of fewer pixels than the original image, the intermediate 1st image is inverse-perspectively projected onto an intermediate 2nd image having the same shape as the original image, and the pixel value of each pixel of the intermediate 2nd image is taken as the representative value of the region containing that pixel's position; the division by the dividing unit and the acquisition by the acquiring unit are thereby performed.
In three-dimensional graphics, texture information is pasted onto triangular or quadrilateral regions arranged in a virtual three-dimensional space, with perspective projection applied at that time. This pasting can increase or decrease the number of pixels.
If the number of pixels is reduced during pasting, the pasting amounts to computing a representative value for each region. The result is the intermediate 1st image.
Further, if the number of pixels is then increased to restore the same shape as the original image, pixels holding the representative values are arranged at the same coordinates as in the original image. The result is the intermediate 2nd image.
Then, a mixing section mixes the original image and the intermediate 2 nd image at the predetermined mixing ratio to obtain the processed image.
As described above, each pixel of the intermediate 2nd image holds the representative value of the region containing it, so α-compositing it with the original image yields an afterimage-like effect.
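The two projections described above (shrink to the intermediate 1st image, then restore to the intermediate 2nd image) can be emulated on a single scanline as follows. This is a one-dimensional sketch under the assumptions of block averaging and nearest-neighbor restoration; real texture hardware would perform both steps, and the blocks would be elongated along the motion direction.

```python
def shrink_then_restore(row, factor):
    """Emulate the patent's two projections on one scanline: draw the
    original into an intermediate 1st image with fewer pixels (each group
    of `factor` pixels averaged into one), then draw that back at the
    original size (each average repeated `factor` times) to obtain the
    intermediate 2nd image. Assumes len(row) is a multiple of `factor`."""
    intermediate1 = [
        sum(row[i:i + factor]) / factor
        for i in range(0, len(row), factor)
    ]
    intermediate2 = []
    for v in intermediate1:
        intermediate2.extend([v] * factor)
    return intermediate1, intermediate2
```

Alpha-compositing `intermediate2` with `row` would then give the afterimage-like result described in the text.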
According to the present invention, high-speed processing is possible by using techniques for projecting polygons onto polygons, such as texture pasting, which are widely used in the field of three-dimensional graphics and widely supported by dedicated hardware.
An image processing apparatus according to another aspect of the present invention includes a 1 st generating unit, a 2 nd generating unit, a mixing unit, and an output unit, and is configured as follows.
First, the 1st generation unit renders an original image composed of a set of pixels as a texture, generating from it an intermediate 1st image composed of fewer pixels.
As in the above invention, when the original image is a polygon, the 1st generating unit projects it onto a polygon with the same number of vertices. The resulting intermediate 1st image is thus the original image "flattened" according to a certain rule.
On the other hand, the 2 nd generation unit generates an intermediate 2 nd image having the same shape as the original image by rendering the generated intermediate 1 st image as a texture.
That is, the 2nd generation unit restores the "flattened" intermediate 1st image to the original shape. Through these processes, pixel values are determined along the chain: multiple pixels of the original image → one pixel of the intermediate 1st image → multiple pixels of the intermediate 2nd image.
In the step from multiple pixels of the original image to one pixel of the intermediate 1st image, the average of those pixels, or the pixel at a predetermined position, is obtained as the representative value; the corresponding pixels of the intermediate 2nd image all take that representative value.
Further, the mixing section mixes the original image and the generated intermediate 2 nd image at a predetermined mixing ratio to generate a processed image.
Alpha blending is performed in the same manner as in the above invention, and the same techniques can be applied to the α value.
Then, the output section outputs the generated processed image.
As in the above invention, the output is sent to a display device or the like.
Here, in the present invention, for each pixel of the intermediate 1st image, the region of the original image that is drawn onto that pixel (and onto which that pixel is in turn drawn in the intermediate 2nd image) has a shape elongated in the direction in which the video composed of the pixels in that region moves.
In other words, in the conversion from multiple pixels of the original image → one pixel of the intermediate 1st image → multiple pixels of the intermediate 2nd image, the "multiple pixels of the original image" correspond to the "region", whose shape is elongated in the moving direction of the video.
Therefore, particularly when supported by a three-dimensional image arithmetic processor, it is possible to perform high-speed image processing arithmetic operations in the same manner as in the above-described invention, and it is possible to obtain an image effect like a residual image and give a sense of speed to viewers.
Further, the image processing apparatus of the present invention may be configured as follows: the predetermined mixing ratio is determined for each of the plurality of regions based on the distance between the region and the predetermined point, such that the greater the distance, the greater the proportion of the representative value in the blend with the pixel value.
Furthermore, when viewed as the video of a moving image, video far from the infinity point moves across the screen faster than video near it. Therefore, the mixing ratio is varied between the vicinity of the infinity point and places far from it, so that the farther from the infinity point, the greater the afterimage effect.
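One possible realization of a distance-dependent mixing ratio is a linear ramp outward from the infinity point. The linear form and all parameter names here are assumptions; the patent requires only that the ratio grow with distance.

```python
def blend_ratio(distance, max_distance, min_alpha=0.0, max_alpha=0.8):
    """Blending ratio that grows with distance from the infinity point,
    so regions far from it get a stronger afterimage. Clamped linear
    ramp; an illustrative choice, not prescribed by the patent."""
    t = max(0.0, min(1.0, distance / max_distance))
    return min_alpha + t * (max_alpha - min_alpha)
```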
According to the present invention, it is possible to present the effect of a real afterimage to a viewer by appropriately changing the mixing ratio for each pixel, and it is possible to easily generate a sense of speed.
An image processing method according to another aspect of the present invention includes a segmentation step, an acquisition step, a mixing step, and an output step, and is configured as follows.
That is, in the dividing step, an image composed of a set of pixels (hereinafter referred to as "original image") is divided into a plurality of regions.
On the other hand, in the obtaining step, for each of the plurality of divided regions, a representative value of the region is obtained based on pixel values of pixels included in the region.
Further, in the blending step, for each pixel included in the original image, the representative value obtained for the region containing that pixel and the pixel's own value are blended at a predetermined blending ratio, and a processed image composed of the blending results is obtained.
Then, in the output step, the resulting processed image is output.
Here, in the dividing step, the image is divided so that the shape of each of the plurality of regions is long in a direction in which the video constituted by the pixels included in the region moves.
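The four steps of this method (divide, acquire, blend, output) can be combined into one sketch. The data structures here (dictionaries keyed by pixel coordinates and region ids) are purely illustrative, not taken from the patent:

```python
def process_image(pixels, regions, region_of, alpha):
    """End-to-end sketch of the claimed method: regions are given (the
    dividing step), a representative value per region is acquired as the
    average, each pixel is blended with its region's representative at
    ratio alpha, and the processed image is returned. `pixels` maps
    (x, y) -> value, `regions` maps region id -> list of (x, y), and
    `region_of` maps (x, y) -> region id."""
    representative = {
        rid: sum(pixels[p] for p in coords) / len(coords)
        for rid, coords in regions.items()
    }
    return {
        p: (1.0 - alpha) * v + alpha * representative[region_of[p]]
        for p, v in pixels.items()
    }
```

For elongated regions this pulls each pixel toward the mean along the motion direction, giving the blur-like afterimage the text describes.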
An image processing method according to another aspect of the present invention includes a 1 st generation step, a 2 nd generation step, a mixing step, and an output step, and is configured as follows.
That is, in the 1st generation step, an original image composed of a set of pixels is rendered as a texture, and an intermediate 1st image composed of fewer pixels is generated from it.
On the other hand, in the 2 nd generation step, the generated intermediate 1 st image is rendered as a texture, and an intermediate 2 nd image having the same shape as the original image is generated.
Further, in the mixing step, the original image and the generated intermediate 2 nd image are mixed at a predetermined mixing ratio to generate a processed image.
Then, in the output step, the generated processed image is output.
Here, for each pixel of the intermediate 1st image, the region of the original image that is drawn onto that pixel (and onto which that pixel is drawn in the intermediate 2nd image) has a shape elongated in the direction in which the video composed of the pixels in that region moves.
A program according to another aspect of the present invention is configured to cause a computer to function as the image processing apparatus or to cause a computer to execute the image processing method.
The program of the present invention may be recorded on a computer-readable information recording medium such as an optical disk, a flexible disk, a hard disk, a magneto-optical disk, a digital video disk, a magnetic tape, or a semiconductor memory.
The program may be distributed/sold via a computer communication network independently of a computer executing the program. Further, the above-mentioned information recording medium may be distributed/sold independently of a computer.
Effects of the invention
According to the present invention, it is possible to provide an image processing apparatus, an image processing method, a program for realizing them by a computer, and a computer-readable information recording medium on which the program is recorded, which are suitable for imparting an effect of giving a sense of speed to an image.
Drawings
Fig. 1 is an explanatory diagram of a schematic configuration of a typical information processing apparatus that realizes an image processing apparatus according to one embodiment of the present invention.
Fig. 2 is a schematic diagram of the schematic configuration of an image processing apparatus according to one embodiment of the present invention.
Fig. 3 is a flowchart of a control flow of an image processing method executed by the image processing apparatus.
Fig. 4 is an explanatory diagram showing the shape of the original image.
Fig. 5 is an explanatory diagram showing a division state of an original image.
Fig. 6 is an explanatory diagram of a situation in which an original image is divided and a representative value is acquired when the polygon pasting function is used.
Fig. 7 is an explanatory diagram of a schematic configuration of an image processing apparatus according to another embodiment from another viewpoint.
Fig. 8 is a flowchart of a control flow of an image processing method executed by the image processing apparatus.
Description of the reference symbols
100 game device
101 CPU
102 ROM
103 RAM
104 interface
105 controller
106 external memory
107 image processing unit
108 DVD-ROM drive
109 NIC
110 sound processing part
201 image processing apparatus
202 division part
203 acquisition part
204 mixing part
205 output unit
601 original image
602 intermediate 1st image
603 intermediate 2nd image
701 1st generation unit
702 2nd generation unit
Detailed Description
The following describes embodiments of the present invention. Hereinafter, for ease of understanding, an embodiment in which the present invention is applied to a game device that performs three-dimensional graphics display will be described, but the present invention can be applied similarly to information processing devices such as various computers, PDAs (Personal Digital Assistants), and mobile phones. That is, the embodiments described below are illustrative and do not limit the scope of the invention of the present application. Therefore, those skilled in the art can adopt embodiments in which each or all of these elements are replaced with equivalents, and such embodiments are also included in the scope of the present invention.
Example 1
Fig. 1 is an explanatory diagram of a schematic configuration of a typical game device that realizes an image processing device of the present invention. Hereinafter, description will be made with reference to the drawings.
The game device 100 includes a CPU (Central Processing Unit) 101, a ROM102, a RAM103, an Interface 104, a controller 105, an external memory 106, an image Processing Unit 107, a DVD-ROM drive 108, an NIC (Network Interface Card) 109, and a sound Processing Unit 110.
The image processing apparatus according to the present embodiment is realized by inserting a DVD-ROM storing a program and data for a game into the DVD-ROM drive 108, and turning on the power of the game apparatus 100 to execute the program.
The CPU 101 controls the operation of the entire game device 100, is connected to each component, and exchanges control signals and data with them. Using an ALU (Arithmetic Logic Unit) (not shown), the CPU 101 can perform arithmetic operations such as addition, subtraction, multiplication, and division, logical operations such as logical OR, AND, and NOT, and bitwise operations such as bitwise OR, bitwise AND, bit inversion, bit shift, and bit rotation on a register file (not shown), a storage area that can be accessed at high speed. Further, in order to perform at high speed the saturation arithmetic, trigonometric functions, vector operations, and the like that support multimedia processing, the CPU 101 may itself be configured to handle them, or a coprocessor may be provided.
An IPL (Initial Program Loader), executed immediately after power-on, is recorded in the ROM 102; when the IPL is executed, a program recorded on the DVD-ROM is read into the RAM 103, and the CPU 101 begins executing it. The ROM 102 also stores the operating system program and various data necessary for controlling the operation of the entire game device 100.
The RAM 103 temporarily holds data and programs: programs and data read from the DVD-ROM, and data needed for game progress and chat communication. Further, the CPU 101 sets up a variable area in the RAM 103 and either applies values held in variables directly to the ALU for operations, or first transfers values stored in the RAM 103 to registers, operates on the registers, and writes the results back to memory.
The controller 105 connected via the interface 104 accepts an operation input performed when the user executes a game such as a racing game.
The external memory 106, detachably connected via the interface 104, rewritably stores data indicating the play status (past results and the like) of the racing game or other games, data indicating the progress of the game, log data of chat communication, and the like. The user can record these data in the external memory 106 as appropriate by giving instructions via the controller 105.
A program for realizing a game and image data and sound data attached to the game are recorded on a DVD-ROM inserted in the DVD-ROM drive 108. The DVD-ROM drive 108 performs read processing on the DVD-ROM mounted therein, reads out necessary programs and data, and temporarily stores them in the RAM103 and the like, by the control of the CPU 101.
The image processing unit 107 processes data read from the DVD-ROM by the CPU101 or an image arithmetic processor (not shown) provided in the image processing unit 107, and then records the processed data in a frame memory (not shown) provided in the image processing unit 107. The image information recorded in the frame memory is converted into a video signal at a predetermined synchronization timing, and is output to a monitor (not shown) connected to the image processing unit 107. This enables various images to be displayed.
The image arithmetic processor can execute at high speed overlay operations on two-dimensional images, transparency operations such as α blending, and various saturation operations.
Further, polygon information arranged in a virtual three-dimensional space and annotated with various texture information can be rendered by the Z-buffer method, yielding at high speed a rendered image of the polygons as viewed from a predetermined viewpoint position in a predetermined line-of-sight direction.
Further, the CPU 101 and the image arithmetic processor can cooperate to render character strings as two-dimensional images in the frame memory or on each polygon surface, based on font information defining the shapes of the characters.
Further, the image arithmetic processor generally has a drawing function that appropriately deforms a triangular or quadrilateral bitmap image (texture image) and projects it onto another triangular or quadrilateral region.
The NIC109 is used for connecting the game device 100 to a computer communication network (not shown) such as the internet, and is configured by: components conforming to the 10BASE-T/100BASE-T standard used when constructing a LAN (Local Area Network), analog modems for connecting to the internet with telephone lines, ISDN (Integrated Services Digital Network) modems, ADSL (Asymmetric Digital Subscriber Line) modems, cable modems for connecting to the internet with cable lines, etc., and interfaces (not shown) between them and the CPU 101.
The audio processing unit 110 converts audio data read from the DVD-ROM into an analog audio signal and outputs it from a speaker (not shown) connected thereto. Further, under the control of the CPU 101, it generates the effect sounds and music data to be produced during the game and outputs the corresponding sounds from the speaker.
When the audio data recorded on the DVD-ROM is MIDI data, the audio processing unit 110 converts it into PCM data with reference to the sound source data it holds. When the audio data is compressed, such as in ADPCM or Ogg Vorbis format, it is expanded and converted into PCM data. The PCM data undergoes D/A (Digital/Analog) conversion at a timing corresponding to its sampling frequency and is output to the speaker, whereby audio can be output.
The game device 100 may also be configured with a large-capacity external storage device such as a hard disk, used to fulfill the same functions as the ROM 102, the RAM 103, the external memory 106, or a DVD-ROM mounted in the DVD-ROM drive 108.
Fig. 2 is a schematic diagram of the schematic configuration of an image processing apparatus according to one embodiment of the present invention. Hereinafter, description will be made with reference to the drawings. Fig. 3 is a flowchart of a control flow of an image processing method executed with the image processing apparatus. Hereinafter, the description will be made with reference to these drawings.
The image processing apparatus 201 of the present embodiment includes a dividing unit 202, an acquiring unit 203, a mixing unit 204, and an output unit 205.
First, the image processing apparatus 201 receives an input of an image to be processed (hereinafter referred to as "original image") (step S301). The image processing apparatus 201 receives this input by reading information constituting the original image from a hard disk or a DVD-ROM or the like mounted on the DVD-ROM drive 108. The received original image is stored in a temporary storage area such as the RAM 103.
A typical original image is a bitmap image, each pixel included in the bitmap is treated as a pixel, and a numerical value indicating a hue, a brightness, and a chromaticity of the pixel or an RGB (Red Green Blue) value of the pixel is a pixel value.
Fig. 4 is an explanatory diagram of examples of the shape of the original image. As shown in Fig. 4(a), a rectangular bitmap image is most typically used as the original image. The shape and size of the rectangle may be chosen arbitrarily, but typically the size is chosen to suit display on a monitor, a television, or the like.
Various shapes other than a rectangle can also be adopted for the original image. For example, in a racing game showing the outside world from the driver's viewpoint, the center of the screen is the infinity point, and the portion displayed near it shows the condition of the road surface and other cars ahead, so there are cases where the afterimage effect for producing a sense of speed should not be applied there. Therefore, as shown in Fig. 4(b), the screen with a circular area removed from its center may be used as the original image.
Likewise, even when showing the outside world of the racing game from the driver's viewpoint, various information such as the car's driver's seat, the speed, and the engine state is often displayed in the lower part of the screen, where the afterimage effect is likewise unnecessary. In this case, as shown in Fig. 4(c), the screen with a square removed from its center and a trapezoid removed from its lower part may be used as the original image.
Further, a part of the shapes shown in (a) to (c) of fig. 4 may be used as the original image.
Next, the dividing unit 202 of the image processing apparatus 201 divides the original image into a plurality of regions (step S302). The dividing section 202 divides the image such that the shape of each of the plurality of regions is longer in the direction in which the video constituted by the pixels included in the region moves.
In this way, the CPU101 operates as the dividing section 202 in cooperation with the RAM 103.
Fig. 5 is an explanatory diagram of a state in which the original image shown in fig. 4 is divided. Examples of the shapes of the regions divided from the original image shown in (a) to (c) of fig. 4 are shown in (a) to (e) of fig. 5.
The original images shown in fig. 4 are all graphical displays of the outside world as seen from the viewpoint while an object disposed in the outside world of the three-dimensional virtual space, or the viewpoint itself, moves.
Therefore, when the viewpoint approaches the infinity point (when the line of sight is directed in the traveling direction), the video of an object composed of pixels of the original image moves radially away from the infinity point at the screen center, and when the viewpoint recedes from it (when the line of sight is directed opposite to the traveling direction), the video moves radially toward that point. Further, the farther a video of an object is from the infinity point at the screen center, the faster it moves.
In fig. 5(a), the original image is divided radially by line segments passing through the infinity point, and the shape of each region is triangular.
In fig. 5(b), the original image is divided radially by line segments passing through the infinity point and also by concentric circles centered on it, so that each region has a fan shape. In the division shown in the figure, the central angles of the fans are equal, and the lengths of the non-arc sides of the fans (hereinafter referred to as the "oblique sides" of the fans) are also equal. Further, each oblique side is longer than the corresponding arc.
In fig. 5(c), the original image is divided radially by line segments passing through the infinity point; the upper part of the original image is further divided by horizontal lines, and the left and right parts by vertical lines, so the shape of each region is trapezoidal. If the oblique sides of the trapezoids are extended, they intersect at the infinity point, and the upper and lower bases are horizontal or vertical.
Fig. 5(d) considers another division. For example, when the viewpoint rotates, or when the outside world rotates, the sense of speed is expressed by afterimages. In this case, the pixels representing external objects make a substantially circular motion around a predetermined center.
Therefore, the original image is divided by concentric circles centered on the center of the rotational motion, and further by straight lines passing through that center. As in fig. 5(b), the original image is thus divided into fan shapes, but here the arc of each fan is longer than its oblique sides.
Fig. 5(e) considers yet another division, the simplest case of unidirectional translation. This corresponds, for example, to the line of sight being directed sideways relative to the traveling direction, in which case the video of external objects translates in a fixed direction. In this division, since the video of external objects is assumed to translate horizontally, the original image is divided into a plurality of horizontally long rectangular regions.
In fig. 5(e), the regions are staggered like bricks, but a division in which the vertices of the regions simply line up in a grid-like or tile-like arrangement may also be employed.
As is clear from the examples shown in fig. 5(a) to (e), one of the features of the present invention is that the original image is divided into shapes that are long in the direction in which the afterimage is assumed to be generated. The length and width of the divided regions may be determined by various methods.
The width may be approximately constant, or may be varied appropriately. When there is an infinity point, the central angle of the arc, or the angle formed by the oblique sides of the trapezoid, may be made inversely proportional to the distance from the infinity point so that the width remains approximately constant.
The length may likewise be approximately constant, or may be varied appropriately. When there is an infinity point and the length is varied, the following method can be adopted: the farther from the infinity point, the longer the region.
The length of the divided region may be changed in proportion to the moving speed of the viewpoint relative to the outside world in the virtual three-dimensional space. These methods may be combined.
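The radial division described above can be sketched as follows (a minimal illustration of ours, not code from the embodiment; the function name `region_of` and the constants `n_sectors` and `ring_len` are assumptions): each pixel is keyed by an angle bucket and a radius bucket around the infinity point, so that every region is long along the radial direction in which the video moves.

```python
import math

def region_of(x, y, cx, cy, n_sectors=32, ring_len=40.0):
    """Map pixel (x, y) to a (sector, ring) region id around the
    infinity point (cx, cy). The radial extent of a region (ring_len)
    is much larger than its angular width, so each region is long in
    the direction the video moves."""
    dx, dy = x - cx, y - cy
    angle = math.atan2(dy, dx)                                   # -pi .. pi
    sector = int((angle + math.pi) / (2 * math.pi) * n_sectors) % n_sectors
    ring = int(math.hypot(dx, dy) // ring_len)
    return sector, ring

# Pixels on the same ray and within the same ring share a region ...
same = region_of(90, 0, 0, 0) == region_of(110, 0, 0, 0)
# ... while a pixel on a perpendicular ray falls in a different one.
diff = region_of(90, 0, 0, 0) != region_of(0, 90, 0, 0)
```

Making `ring_len` grow with the radius, or proportional to the viewpoint speed, corresponds to the length variations described above.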
When the original image is divided in this manner, the acquiring unit 203 acquires a representative value of each of the plurality of divided regions based on the pixel values of the pixels included in the region (step S303).
As representative values, there are the following.
(a) The pixel value of a pixel appropriately selected in each region. Various positions can be adopted for this pixel: the center or the center of gravity of the region, the most upstream or most downstream point in the moving direction of the video, or simply the pixel stored at the lowest address in the RAM103.
(b) An average value of pixel values of pixels included in each region. When the image processing unit 107 includes hardware dedicated to graphics processing such as an image arithmetic processor, such an average value can be obtained at high speed. Details will be described later.
Technique (a) is one of the simplest ways to select a representative value, and can reduce processing time. In some applications of the processed image, even such a simple technique may yield a sufficient sense of speed.
Technique (b) resembles the computation of the motion blur effect: the pixels included in each region are averaged, and a video having the same shape as the region and filled with that average value is used as the afterimage.
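A hedged sketch of technique (b): the pixel values inside each region are averaged, and the average serves as the region's representative value. The helper `representative_values` and the toy region function below are our own illustration, not part of the embodiment.

```python
from collections import defaultdict

def representative_values(pixels, region_of):
    """pixels: dict {(x, y): value}; region_of: (x, y) -> region id.
    Returns the average pixel value per region."""
    total = defaultdict(float)
    count = defaultdict(int)
    for (x, y), v in pixels.items():
        r = region_of(x, y)
        total[r] += v
        count[r] += 1
    return {r: total[r] / count[r] for r in total}

# Two regions split by x-coordinate; their averages are 10.0 and 30.0.
pix = {(0, 0): 10, (1, 0): 10, (2, 0): 30, (3, 0): 30}
reps = representative_values(pix, lambda x, y: x // 2)
```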
In this way, the CPU101 operates as the acquisition unit 203 in cooperation with the RAM103 and the image processing unit 107.
Then, the blending unit 204 acquires a processed image composed of a result of blending the representative value acquired for the region including the pixel and the pixel value of the pixel at a predetermined blending ratio for each pixel included in the original image (step S304).
This processing corresponds to alpha-blending (alpha-compositing), at a predetermined alpha value, the original image with an intermediate image obtained by filling each region of the original image with that region's representative value, thereby obtaining a new image (the processed image).
The intermediate image may be actually generated, or may be calculated on a pixel-by-pixel basis as described above and processed as a virtual image.
As the α value used in α blending, the following ones can be employed.
(a) A constant. This corresponds to a case where the video of any object moves at a constant speed.
(b) A value varied such that, as the moving speed of the viewpoint relative to the outside world increases, the contribution of the original image in the α blend decreases.
(c) Near the infinity point, the contribution degree of the original image is high; the contribution degree of the original image decreases as the distance from the infinity point increases.
(d) The above-mentioned methods (b) and (c) are combined.
With technique (b), the faster the movement speed, the stronger the afterimage effect; with technique (c), the farther from the infinity point, the stronger the effect. Therefore, merely by varying the blending ratio appropriately for each pixel, a realistic afterimage effect can be presented to the viewer, and a sense of speed is easily produced.
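The per-pixel blending of step S304 with option (c) might look like the following sketch (our own illustration; the linear falloff `dist / d_max` is an assumed profile, since the embodiment does not fix a specific curve):

```python
def blend(original, representative, dist, d_max):
    """Alpha-blend one pixel value: alpha weights the afterimage
    (representative) value, growing from 0 at the infinity point
    to 1 at distance d_max, so the afterimage strengthens toward
    the screen edge."""
    alpha = min(dist / d_max, 1.0)
    return (1.0 - alpha) * original + alpha * representative

near = blend(100.0, 200.0, dist=0.0, d_max=400.0)      # 100.0, pure original
half = blend(100.0, 200.0, dist=200.0, d_max=400.0)    # 150.0, even mix
far = blend(100.0, 200.0, dist=400.0, d_max=400.0)     # 200.0, pure afterimage
```

Option (b) would additionally scale `alpha` by the viewpoint's speed, and option (d) combines both factors.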
In this way, the CPU101 operates as the mixing unit 204 in cooperation with the RAM103 and the image processing unit 107.
Finally, the output unit 205 outputs the obtained processed image (step S305). As the output destination of the image, a display device, a television, or the like connected through the image processing unit 107 can be used.
The result of the α blending is output and displayed on a display device or the like. Since the shape of each region is long in the moving direction of the video, the intermediate image is a streaked image elongated in that direction. Therefore, after the α blending, the original image appears blurred along the moving direction, which produces the effect of an afterimage due to the movement.
As described above, according to the present embodiment, by dividing an image into regions whose shape is long in the moving direction and acquiring a representative value for each region, an effect like an afterimage can be obtained easily, and a sense of speed can easily be given to the viewer.
Further, when viewed as video of a moving image, a video far from the infinity point moves across the screen faster than one close to it. Therefore, the blending ratio is varied between the vicinity of the infinity point and locations far from it, so that the farther from the infinity point, the greater the afterimage effect.
Embodiment 2
Embodiment 2 of the present invention views the configuration of the above embodiment anew from another standpoint, that of an image arithmetic processor or the like. Accordingly, unless otherwise specified, the present embodiment can adopt the same configurations and modifications as the above embodiment.
As described above, in recent years, game devices 100 and general-purpose computers have increasingly included dedicated image arithmetic processors. Such an image arithmetic processor often includes a function for pasting a polygon represented by a bitmap into the area of a polygon of a different shape and size. Such a function is sometimes provided as texture-mapped rendering, and also as various perspective transformation and perspective projection functions.
In three-dimensional graphics, texture information is pasted onto a triangular or quadrangular region arranged in the virtual three-dimensional space, with perspective projection performed at that time. In this pasting, the number of pixels may be increased or decreased.
Therefore, when a certain polygon A is pasted onto a polygon B of another shape with the number of pixels reduced, the region of polygon A projected onto a given pixel of polygon B becomes the "divided region" of the above-described embodiment, and the pixel value of that pixel of polygon B becomes its representative value. Conversely, if the same processing is performed from polygon B back to polygon A, an image in which each region is "filled" with its representative value is obtained.
Fig. 6 is an explanatory diagram for explaining an outline of such processing. Hereinafter, description will be made with reference to the drawings.
First, the original image 601 has a trapezoidal shape. This is an original image composed of a part of the original image shown in fig. 5 (c).
An image obtained by pasting the original image 601 to a rectangular area is the 1 st intermediate image 602.
Further, an image obtained by pasting the 1 st intermediate image 602 to an area having the same shape as the original image 601 is the 2 nd intermediate image 603.
Here, consider the vertical and horizontal compression rates, in units of pixels, from the original image 601 to the 1 st intermediate image 602; in this example, the vertical compression rate is made higher than the horizontal one. The compression rate can be taken as the ratio of the lengths, in pixels, in each direction.
In so doing, one pixel 605 of the 1 st intermediate image 602 corresponds to the pixels of the region 606 of the original image 601, a region that is long in the direction extending radially from the infinity point. That is, the pixel value of that one pixel of the 1 st intermediate image 602 is the "representative value" of the pixels of the region 606.
Then, in the 2 nd intermediate image 603, "the region 607 long in the direction radially extending from the infinity point" is filled with the pixel value of 1 pixel of the 1 st intermediate image 602.
That is, for each pixel included in the 1 st intermediate image 602, the shape of the area where the pixel is pasted onto the 2 nd intermediate image 603 is made longer in the direction in which the video of the object constituted by the pixels included in the area in the original image 601 moves. Further, the region in the original image 601 is rendered onto the pixel in the 1 st intermediate image 602.
In other words, when the plurality of pixels 606 of the original image 601 → 1 point 605 of the 1 st intermediate image 602 → the plurality of pixels 607 of the 2 nd intermediate image 603 are converted, "the plurality of pixels 606 of the original image 601" corresponds to "the area", and the shape thereof is configured to be long in the moving direction of the video.
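The pixel flow "plurality of pixels 606 → 1 point 605 → plurality of pixels 607" can be sketched in one dimension (a simplified assumption of ours: plain averaging stands in for the perspective paste onto the smaller polygon, and replication for the paste back):

```python
def shrink(row, factor):
    """Paste onto the smaller polygon: average each run of `factor`
    pixels into one pixel of the 1st intermediate image."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row), factor)]

def stretch(row, factor):
    """Paste back to the original shape: replicate each pixel `factor`
    times, filling each region with its representative value."""
    return [v for v in row for _ in range(factor)]

original = [0, 100, 100, 100]               # a hard edge along the motion axis
intermediate1 = shrink(original, 2)         # [50.0, 100.0]
intermediate2 = stretch(intermediate1, 2)   # [50.0, 50.0, 100.0, 100.0]
```

The hard edge is smeared across its two-pixel region, which is exactly the streaked 2 nd intermediate image that is subsequently alpha-blended with the original.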
By using the image arithmetic processor in this way, the present embodiment performs the division by the dividing unit 202 and the acquisition by the acquiring unit 203 at the same time. Hereinafter, the components of the present embodiment, recognized anew from this other standpoint, will be described in detail.
Fig. 7 is a schematic diagram of the configuration of the image processing apparatus according to the present embodiment. Fig. 8 is a flowchart of the control flow of the image processing method executed by the image processing apparatus of the present embodiment. Hereinafter, the description will be made with reference to these drawings.
In this embodiment, recognizing the configuration of the image processing apparatus 201 again from another point of view, the image processing apparatus 201 includes a 1 st generation unit 701, a 2 nd generation unit 702, a mixing unit 204, and an output unit 205.
First, the image processing apparatus 201 accepts input of an original image to be processed (step S801). This is the same as the above embodiment.
Next, the 1 st generation unit 701 of the image processing apparatus 201 renders the original image 601, composed of a set of pixels, as a texture, and generates the 1 st intermediate image 602, composed of fewer pixels than the original image 601 (step S802).
As described above, when the original image 601 is treated as a polygon, the 1 st generation unit 701 projects it onto a polygon having the same number of vertices. The 1 st intermediate image 602 thus obtained is an image in which the original image 601 has been "flattened" according to a certain rule.
Next, the 2 nd generation unit 702 renders the generated 1 st intermediate image 602 as a texture, and generates a 2 nd intermediate image 603 having the same shape as the original image 601 (step S803).
That is, the 2 nd generation unit 702 performs a process of restoring the 1 st intermediate image 602 that has been "flattened" as described above to the original shape. By these processes, pixel values are determined for the plurality of pixels 606 of the original image 601 → 1 point 605 of the 1 st intermediate image 602 → the plurality of pixels 607 of the 2 nd intermediate image 603.
In the mapping from the plurality of pixels 606 of the original image 601 to the single point 605 of the 1 st intermediate image 602, the average of the pixels 606, or the value of a pixel at a predetermined position, is obtained as the representative value; the plurality of pixels 607 of the 2 nd intermediate image 603 then all take that representative value as their pixel value.
Further, the blending unit 204 blends the original image 601 and the generated 2 nd intermediate image 603 at a predetermined blending ratio to generate a processed image (step S804). Alpha blending is performed in the same manner as in the above embodiment. Various techniques similar to those of the above embodiments can be applied to the α value.
Then, the output unit 205 outputs the generated processed image (step S805), and the present process is ended.
Therefore, particularly when supported by a three-dimensional image arithmetic processor, image processing operations can be performed at high speed in the same manner as in the above-described embodiment, obtaining an afterimage-like image effect and giving the viewer a sense of speed.
In the present application, priority is claimed based on Japanese patent application No. 2004-273846, and all the contents of the basic application are taken as contents of the present application.
Industrial applicability
As described above, it is possible to provide an image processing device, an image processing method, a program for realizing these on a computer, and a computer-readable information recording medium storing the program, which are suitable for giving a sense of speed to an image; besides giving a sense of speed to a still image and realizing a game, an action game, or the like on a game device, they can be applied to various virtual reality technologies for providing a virtual experience.

Claims (14)

1. An image processing apparatus is characterized in that,
it comprises: a dividing unit that divides an original image, which is an image composed of a set of pixels, into a plurality of regions;
an acquisition unit that acquires, for each of the plurality of divided regions, a representative value of the region based on pixel values of pixels included in the region;
a blending unit that obtains, for each pixel included in the original image, a processed image composed of a processing result of blending a representative value obtained for a region including the pixel and a pixel value of the pixel at a predetermined blending ratio;
an output unit that outputs the processed image;
the dividing unit divides the image so that the shape of each of the plurality of regions is longer in a direction in which a video composed of pixels included in the region moves.
2. The image processing apparatus according to claim 1,
the acquiring unit acquires an average of pixel values in each of the plurality of regions as a representative value of each of the plurality of regions.
3. The image processing apparatus according to claim 1,
the acquiring unit acquires, as a representative value of each of the plurality of regions, a pixel value of a pixel at a predetermined position included in the region.
4. The image processing apparatus according to claim 1,
the plurality of regions respectively have boundaries of a 1 st arc centered on a predetermined point, a 2 nd arc centered on the predetermined point, or the predetermined point, a line segment included on a 1 st straight line passing through the predetermined point, and a line segment included on a 2 nd straight line passing through the predetermined point, and the direction in which the video moves is along the 1 st arc.
5. The image processing apparatus according to claim 1,
the plurality of regions each have, as boundaries, at least a line segment included in a 1 st straight line that passes through a predetermined one point and a line segment included in a 2 nd straight line that passes through the predetermined one point, and the direction in which the video moves is a direction converging on the predetermined one point or a direction diverging from the predetermined one point.
6. The image processing apparatus according to claim 5,
performing division by the dividing unit and acquisition by the acquiring unit by perspectively projecting the original image onto an intermediate 1 st image composed of a set of pixels smaller in number than the original image, then perspectively projecting the intermediate 1 st image onto an intermediate 2 nd image having the same shape as the original image, and setting the pixel value of each pixel included in the intermediate 2 nd image as the representative value of the region including the position of that pixel;
the mixing unit obtains the processed image by mixing the original image and the intermediate 2 nd image at the predetermined mixing ratio.
7. An image processing apparatus is characterized in that,
it comprises: a 1 st generation unit that renders an original image composed of a set of pixels as a texture and generates an intermediate 1 st image composed of a set of fewer pixels than the original image;
a 2 nd generation unit which generates an intermediate 2 nd image having the same shape as the original image by rendering the generated intermediate 1 st image as a texture;
a mixing unit that mixes the original image and the generated intermediate 2 nd image at a predetermined mixing ratio to generate a processed image;
an output unit that outputs the generated processed image;
for each pixel included in the intermediate 1 st image, an area of the original image is rendered onto that pixel, and the shape of the area onto which that pixel is drawn in the intermediate 2 nd image is long in the direction in which the video composed of the pixels included in that area moves.
8. The image processing apparatus according to claim 3,
the predetermined mixing ratio is determined for each of the plurality of regions based on a distance between the region and the predetermined point, and when the distance increases, the mixing ratio between the representative value and the pixel value increases.
9. An image processing method is characterized in that,
the method comprises the following steps: a dividing step of dividing an original image, which is an image composed of a set of pixels, into a plurality of regions;
an acquisition step of acquiring, for each of the plurality of divided regions, a representative value of the region based on pixel values of pixels included in the region;
a blending step of obtaining, for each pixel included in the original image, a processed image composed of a processing result of blending a representative value obtained for a region including the pixel and a pixel value of the pixel at a predetermined blending ratio;
an output step of outputting the obtained processed image;
in the dividing step, the image is divided so that the shape of each of the plurality of regions is long in a direction in which the video composed of the pixels included in the region moves.
10. An image processing method is characterized in that,
the method comprises the following steps: a 1 st generation step of rendering an original image composed of a set of pixels as a texture and generating an intermediate 1 st image composed of a set of fewer pixels than the original image;
a 2 nd generation step of generating an intermediate 2 nd image having the same shape as the original image by rendering the generated intermediate 1 st image as a texture;
a mixing step of mixing the original image and the generated intermediate 2 nd image at a predetermined mixing ratio to generate a processed image;
an output step of outputting the generated processed image;
for each pixel included in the intermediate 1 st image, an area of the original image is rendered onto that pixel, and the shape of the area onto which that pixel is drawn in the intermediate 2 nd image is long in the direction in which the video composed of the pixels included in that area moves.
11. A computer-readable information recording medium having a program recorded thereon, the program causing a computer to function as:
a dividing unit that divides an original image, which is an image composed of a set of pixels, into a plurality of regions;
an acquisition unit that acquires, for each of the plurality of divided regions, a representative value of the region based on pixel values of pixels included in the region;
a blending unit that obtains, for each pixel included in the original image, a processed image composed of a processing result of blending a representative value obtained for a region including the pixel and a pixel value of the pixel at a predetermined blending ratio;
an output unit that outputs the processed image;
the dividing unit divides the image so that the shape of each of the plurality of regions is longer in a direction in which a video composed of pixels included in the region moves.
12. A computer-readable information recording medium having a program recorded thereon, the program causing a computer to function as:
a 1 st generation unit that renders an original image composed of a set of pixels as a texture and generates an intermediate 1 st image composed of a set of fewer pixels than the original image;
a 2 nd generation unit which generates an intermediate 2 nd image having the same shape as the original image by rendering the generated intermediate 1 st image as a texture;
a mixing unit that mixes the original image and the generated intermediate 2 nd image at a predetermined mixing ratio to generate a processed image;
an output unit that outputs the generated processed image;
for each pixel included in the intermediate 1 st image, an area of the original image is rendered onto that pixel, and the shape of the area onto which that pixel is drawn in the intermediate 2 nd image is long in the direction in which the video composed of the pixels included in that area moves.
13. A program for causing a computer to function as:
a dividing unit that divides an original image, which is an image composed of a set of pixels, into a plurality of regions;
an acquisition unit that acquires, for each of the plurality of divided regions, a representative value of the region based on pixel values of pixels included in the region;
a blending unit that obtains, for each pixel included in the original image, a processed image composed of a processing result of blending a representative value obtained for a region including the pixel and a pixel value of the pixel at a predetermined blending ratio;
an output unit that outputs the processed image;
the dividing unit divides the image so that the shape of each of the plurality of regions is longer in a direction in which a video composed of pixels included in the region moves.
14. A program for causing a computer to function as:
a 1 st generation unit that renders an original image composed of a set of pixels as a texture and generates an intermediate 1 st image composed of a set of fewer pixels than the original image;
a 2 nd generation unit which generates an intermediate 2 nd image having the same shape as the original image by rendering the generated intermediate 1 st image as a texture;
a mixing unit that mixes the original image and the generated intermediate 2 nd image at a predetermined mixing ratio to generate a processed image;
an output unit that outputs the generated processed image;
for each pixel included in the intermediate 1 st image, an area of the original image is rendered onto that pixel, and the shape of the area onto which that pixel is drawn in the intermediate 2 nd image is long in the direction in which the video composed of the pixels included in that area moves.
HK07114113.6A 2004-09-21 2005-09-12 Image processing device, image processing method, information recording medium, and program HK1105704A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP273846/2004 2004-09-21

Publications (1)

Publication Number Publication Date
HK1105704A true HK1105704A (en) 2008-02-22
