
WO2018186169A1 - Video generation device, video generation method, and video generation program - Google Patents

Video generation device, video generation method, and video generation program Download PDF

Info

Publication number
WO2018186169A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
camera
visual field
interpupillary distance
video generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2018/011071
Other languages
French (fr)
Japanese (ja)
Inventor
加藤 勝也
貴司 吉本
泰弘 浜口
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Publication of WO2018186169A1 publication Critical patent/WO2018186169A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • the present invention relates to a video generation device, a video generation method, and a video generation program.
  • the present application claims priority on Japanese Patent Application No. 2017-075213 filed in Japan on April 5, 2017, the contents of which are incorporated herein by reference.
  • Patent Documents 1 and 2 describe a display device that realizes a glasses-type terminal.
  • AR and MR are realized by displaying different images on the left-eye display and right-eye display included in the glasses-type terminal to obtain the effects of convergence and binocular parallax.
  • An aspect of the present invention has been made in view of such circumstances, and an object thereof is to provide a video generation method and a video generation program capable of realizing AR and MR while suppressing the load on the processor.
  • a video generation method and a video generation program according to an aspect of the present invention are configured as follows.
  • a video generation method according to an aspect of the present invention is a video generation method in which a camera in a 3D (Three Dimensions) pseudo space perspectively projects an object in the 3D pseudo space to generate a video, which is displayed on a video display device.
  • the method includes a visual field setting process for setting the visual field of the camera based on visual field information input from the video display device, an interpupillary distance setting process for setting an interpupillary distance, a video rendering process for rendering a reference video based on the visual field and the positional relationship between the object and the camera, and a video shift process for generating two videos by shifting the reference video in the horizontal direction based on the visual field, the interpupillary distance, and the positional relationship between the object and the camera.
  • in the video generation method, the video rendering process measures the load of the processor; when the load falls below a threshold value, a camera is added at a location separated from the existing camera by the interpupillary distance in the 3D pseudo space, the videos are generated by performing perspective projection with each of the cameras, and the processing of the video shift process is skipped.
  • in the video generation method, when the number of objects in the 3D pseudo space is two or more, the video rendering process adds a camera at a location separated from the existing camera by the interpupillary distance in the 3D pseudo space, generates the videos by performing perspective projection with each of the cameras, and skips the processing of the video shift process.
  • the video generation program causes a computer to execute the above-described video generation method.
  • a video generation device according to an aspect of the present invention is a video generation device that generates a video by perspectively projecting an object in a 3D (Three Dimensions) pseudo space with a camera in the 3D pseudo space, and displays the video on a video display device.
  • the device includes a visual field setting unit that sets the visual field of the camera based on visual field information input from the video display device, an interpupillary distance setting unit that sets an interpupillary distance, a video rendering unit that renders a reference video based on the visual field and the positional relationship between the object and the camera, and a video shift unit that generates two videos by shifting the reference video in the horizontal direction based on the visual field, the interpupillary distance, and the positional relationship between the object and the camera.
  • the video generation method can generate video that can be displayed for AR while reducing the load on the processor.
  • FIG. 1 is a diagram illustrating an example of a video generation apparatus 10 according to the present embodiment.
  • FIG. 1 also shows the video display unit 11.
  • the video display unit 11 is provided in a video display device such as an eyeglass-type terminal, smart glasses, or a head-mounted display.
  • the video generation device 10 may be provided in the video display device, or may be provided in a terminal device, such as a smartphone, that can be connected to the video display device.
  • as shown in FIG. 1, the video generation device 10 in this embodiment includes a visual field setting unit (visual field setting process) 101, an interpupillary distance setting unit (interpupillary distance setting process) 102, a video rendering unit (video rendering process) 103, and a video shift unit (video shift process) 104.
  • the visual field setting unit 101 sets the size of the visual field (FoV: Field of View) used in the video rendering unit 103, based on the visual field information input from the video display unit 11.
  • the field of view is, for example, an angle in the vertical direction when the camera in the 3D pseudo space performs perspective projection.
  • the visual field setting unit 101 outputs the set visual field value to the video rendering unit 103.
  • the visual field information is, for example, the visual field value itself.
  • the visual field value may be a vertical value, a diagonal value, or a horizontal value. Further, the visual field value may be one or more values among a vertical value, a diagonal value, and a horizontal value.
  • the visual field information is, for example, an ID such as a vendor ID or a product ID of the video display device provided with the video display unit 11.
  • the video generation apparatus 10 can store the ID and the field-of-view value in association with each other.
  • the video generation apparatus 10 can have a table (database) that associates IDs and field-of-view values.
  • the visual field setting unit 101 can determine the visual field value based on this ID.
  • the visual field setting unit 101 can determine the value of the visual field by, for example, comparing this ID with its own database.
  • the visual field setting unit 101 can also, for example, query a server on the Internet with this ID and receive a visual field value.
  • in that case, the visual field setting unit 101 uses the table (database) on the server.
  • the interpupillary distance setting unit 102 sets the interpupillary distance (PD) and outputs it to the video rendering unit 103 and the video shift unit 104.
  • the interpupillary distance may be a fixed value or may be manually input by the user.
  • the interpupillary distance setting unit 102 may set the interpupillary distance measured by a sensor provided in the video display unit 11 or a wearable sensor.
  • the video rendering unit 103 renders video by having the camera shoot an object in the 3D pseudo space.
  • the visual field of the camera can be set to the visual field value input from the visual field setting unit 101.
  • the video shift unit 104 receives display information from the video display unit 11. This display information may be obtained from the Internet.
  • the video shift unit 104 shifts the video input from the video rendering unit 103 based on the interpupillary distance input from the interpupillary distance setting unit 102 and the display information, and converts the video into two videos.
  • the video shift unit 104 outputs the two generated videos to the video display unit 11. Note that the video before the shift can be referred to as a reference video.
  • FIG. 2 shows an example of the 3D pseudo space used by the video rendering unit 103.
  • 20 is a view of the 3D pseudo space viewed from the height direction.
  • 21 is a view of the 3D pseudo space as viewed from the side.
  • for example, the north direction of the 3D pseudo space can be the z axis, the height direction can be the y axis, and the east direction can be the x axis.
  • 20 is a diagram of the 3D pseudo space viewed from the plus direction of the y axis
  • 21 is a diagram of the 3D pseudo space viewed from the plus direction of the x axis.
  • the camera 201 can generate an image by perspective projection based on the positional relationship between itself and the object 202 in the 3D pseudo space and the field of view set in the field of view setting unit 101.
  • the object 202 is a rectangular parallelepiped.
  • This perspective projection can be performed based on the field of view 211.
  • the visual field 211 can be the visual field set in the visual field setting unit 101.
  • in this example, the vertical visual field is set; the horizontal visual field can be set from the display size and the aspect ratio of the video display unit 11.
  • Reference numeral 203 denotes the distance between the camera 201 and the object 202.
  • FIG. 3 is a diagram illustrating an example in which the video shift unit 104 shifts the image generated by the video rendering unit 103.
  • Reference numerals 301 and 311 denote displays of the video display unit 11.
  • Reference numeral 302 denotes a horizontal center line of the display 301, and reference numeral 312 denotes a center line of the display 311.
  • 303 represents the user's left eye and 313 represents the user's right eye.
  • 304 represents the center line of the left eye 303, and 314 represents the center line of the right eye 313.
  • Reference numeral 305 denotes the object, corresponding to 202 in FIG. 2, rendered by the video rendering unit 103.
  • in the case of 20 in FIG. 2, a video in which the object 305 is displayed on the center line 302 is generated for the display 301, and the video shift unit 104 applies a rightward shift to it.
  • similarly, the video shift unit 104 applies a leftward shift to the video displayed on the display 311, so that the user obtains the effect of convergence, as if the object existed at a position separated from the user by the distance 203.
  • the shift amount can be determined based on the visual field set by the visual field setting unit 101, the interpupillary distance set by the interpupillary distance setting unit 102, and the distance between the camera and the object in the 3D pseudo space rendered by the video rendering unit 103.
  • FIG. 4 is a diagram in which the setting of the 3D pseudo space in FIG. 2 is changed.
  • the object 402 is obtained by moving the object 202 in the horizontal direction.
  • Reference numeral 403 denotes the distance between the camera 201 and the object 402.
  • Reference numeral 404 denotes a line in the same direction as the horizontal direction of the camera 201 and passes through the object 402.
  • Reference numeral 405 denotes the length of the perpendicular drawn from the camera 201 to the line 404. If the value of the length 405 is x, the shift amount can, for example, be determined using a polynomial in x: the shift amount y can be y = aₙ(x − x₀)ⁿ + … + a₁(x − x₀) + a₀, where x₀ and the coefficients aᵢ are values determined by the video display device 11 and can be known from the output obtained by inputting the information of the video display device 11 into a database.
  • the database may be in the video generation device 10.
  • the database may be on the server.
  • the video rendering unit 103 performs rendering once and, based on copies of that rendering, generates two videos that can obtain the effect of convergence, which can reduce the load on the processor.
  • the video shift performed by the video shift unit 104 may be performed when the processor load exceeds a threshold.
  • the video rendering unit 103 can prepare two cameras, one for the left eye and one for the right eye, in the 3D pseudo space, and render two videos.
  • a video generated by the left-eye camera can be displayed on the display 301 of FIG. 3, and a video generated by the right-eye camera can be displayed on the display 311 of FIG. 3. In this case, the processing of the video shift unit 104 is skipped.
  • the video rendering unit 103 may calculate the processor load.
  • the video generation device 10 performs rendering using one camera in the 3D pseudo space, and generates video with convergence by shifting the rendered video in the horizontal direction. In the present embodiment, a case will be described in which a plurality of target objects exist in the 3D pseudo space.
  • FIG. 5 is an example in which an object 501 further exists in the 3D pseudo space of FIG.
  • in this arrangement, the sense of distance between the camera 201 and the object 501 is not correctly expressed.
  • when the video rendering unit 103 in FIG. 1 detects that the number of objects in the 3D pseudo space is two or more, it sets a left-eye camera and a right-eye camera instead of the camera 201, and can generate a left-eye video and a right-eye video based on them. By doing so, the sense of distance of the object 501 can be correctly expressed.
  • the object 202 and the object 501 can include depth-on information.
  • the video rendering unit 103 in FIG. 1 can determine whether to use the left-eye camera and the right-eye camera based on whether the number of objects for which depth-on information is set is one. For example, in FIG. 5, when depth-on information is set for the object 202 and not for the object 501, the video rendering unit 103 renders a video based on the camera 201, and the video shift unit 104 shifts the video, thereby generating a left-eye video and a right-eye video. By doing so, the load on the processor can be reduced.
  • a program that operates in the video generation device according to one aspect of the present invention is a program that controls a CPU or the like so as to realize the functions of the above-described embodiments (a program that causes a computer to function).
  • information handled by these devices is temporarily stored in a RAM at the time of processing, then stored in various ROMs and HDDs, and read out, corrected, and rewritten by the CPU as necessary.
  • a semiconductor medium (for example, ROM, nonvolatile memory card, etc.)
  • an optical recording medium (for example, DVD, MO, MD, CD, BD, etc.)
  • a magnetic recording medium (for example, magnetic tape, flexible disk, etc.)
  • the program when distributing to the market, can be stored and distributed on a portable recording medium, or transferred to a server computer connected via a network such as the Internet.
  • the storage device of the server computer is also included in one embodiment of the present invention.
  • part or all of the video generation method and the video generation program in the above-described embodiments may be realized as an LSI, which is typically an integrated circuit.
  • each functional block of the video generation device may be individually formed as a chip, or some or all of them may be integrated into a chip. When the functional blocks are integrated, an integrated circuit controller for controlling them is added.
  • the method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor.
  • if a circuit integration technology that replaces LSI emerges as a result of advances in semiconductor technology, an integrated circuit based on that technology can also be used.
  • the present invention is not limited to the above-described embodiment. It goes without saying that the video generation method of the present invention is not limited to application to eyeglass-type terminals, but can be applied to portable devices, wearable devices, and the like.
  • One embodiment of the present invention is suitable for use in a video generation method and a video generation program.
  • One embodiment of the present invention can be used in, for example, a communication system, a communication device (for example, a mobile phone device, a base station device, a wireless LAN device, or a sensor device), an integrated circuit (for example, a communication chip), a program, or the like.
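The bullets above describe two rendering paths: add a second camera at the interpupillary distance and render twice (skipping the video shift process), or render once and shift. The following sketch combines the stated conditions (processor load below a threshold, or a number of depth-on objects other than one). The function name, the dictionary-based object representation, and the exact way the conditions are combined are illustrative assumptions, not taken from the patent.

```python
def use_two_cameras(processor_load, load_threshold, objects):
    """Decide between the two rendering paths described above.

    Returns True when a second camera should be added at the
    interpupillary distance and each camera performs its own perspective
    projection (the video shift process is skipped). Returns False when
    a single reference video is rendered and then shifted horizontally.
    `objects` is a list of dicts with an optional per-object
    'depth_on' flag (an assumed representation).
    """
    if processor_load < load_threshold:
        # The processor can afford two full renders.
        return True
    # Otherwise, the cheap shift path is usable only when exactly one
    # object needs a correct sense of depth.
    depth_on_count = sum(1 for o in objects if o.get("depth_on"))
    return depth_on_count != 1
```

With a single depth-on object and a loaded processor, the function selects the render-once-and-shift path, matching the load-reduction example in the bullets above.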

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A video generation method that generates a video by having a camera in a three-dimensional (3D) pseudo space perform a perspective projection of an object in the 3D pseudo space, and displays the video on a video display device, said video generation method comprising: a field of view setting step in which the field of view of the camera is set on the basis of field of view information that is inputted from the video display device; an interpupillary distance setting step in which an interpupillary distance is set; a video rendering step in which a reference video is rendered on the basis of the field of view and a positional relationship between the object and the camera; and a video shifting step in which two videos are generated by shifting the reference video in a sideways direction on the basis of the field of view, the interpupillary distance, and the positional relationship between the object and the camera.

Description

VIDEO GENERATION DEVICE, VIDEO GENERATION METHOD, AND VIDEO GENERATION PROGRAM

The present invention relates to a video generation device, a video generation method, and a video generation program.
The present application claims priority on Japanese Patent Application No. 2017-075213, filed in Japan on April 5, 2017, the contents of which are incorporated herein by reference.

Recently, in order to realize augmented reality (AR: Augmented Reality) and mixed reality (MR: Mixed Reality), wearable devices such as eyeglass-type terminals and smart glasses have been attracting attention. Patent Documents 1 and 2 describe display devices that realize an eyeglass-type terminal. AR and MR are realized by displaying different videos on the left-eye and right-eye displays included in the eyeglass-type terminal to obtain the effects of convergence and binocular parallax.

JP 2016-149587 A
WO 2016/113951

However, when the two videos for the left eye and the right eye are generated in real time, the load on a processor such as a CPU (Central Processing Unit), GPU (Graphics Processing Unit), or APU (Accelerated Processing Unit) increases, and there is a problem that dropped frames occur.

An aspect of the present invention has been made in view of such circumstances, and an object thereof is to provide a video generation method and a video generation program capable of realizing AR and MR while suppressing the load on the processor.

In order to solve the above-described problems, a video generation method and a video generation program according to an aspect of the present invention are configured as follows.

A video generation method according to an aspect of the present invention is a video generation method in which a camera in a 3D (Three Dimensions) pseudo space perspectively projects an object in the 3D pseudo space to generate a video, and the video is displayed on a video display device, the method including: a visual field setting process for setting the visual field of the camera based on visual field information input from the video display device; an interpupillary distance setting process for setting an interpupillary distance; a video rendering process for rendering a reference video based on the visual field and the positional relationship between the object and the camera; and a video shift process for generating two videos by shifting the reference video in the horizontal direction based on the visual field, the interpupillary distance, and the positional relationship between the object and the camera.

In the video generation method of the present invention, the video rendering process measures the load of the processor; when the load falls below a threshold value, a camera is added at a location separated from the existing camera by the interpupillary distance in the 3D pseudo space, the videos are generated by performing perspective projection with each of the cameras, and the processing of the video shift process is skipped.

In the video generation method of the present invention, when the number of objects in the 3D pseudo space is two or more, the video rendering process adds a camera at a location separated from the existing camera by the interpupillary distance in the 3D pseudo space, generates the videos by performing perspective projection with each of the cameras, and skips the processing of the video shift process.

A video generation program according to an aspect of the present invention causes a computer to execute the above-described video generation method.

A video generation device according to an aspect of the present invention is a video generation device that generates a video by perspectively projecting an object in a 3D (Three Dimensions) pseudo space with a camera in the 3D pseudo space, and displays the video on a video display device, the device including: a visual field setting unit that sets the visual field of the camera based on visual field information input from the video display device; an interpupillary distance setting unit that sets an interpupillary distance; a video rendering unit that renders a reference video based on the visual field and the positional relationship between the object and the camera; and a video shift unit that generates two videos by shifting the reference video in the horizontal direction based on the visual field, the interpupillary distance, and the positional relationship between the object and the camera.

According to one aspect of the present invention, the video generation method can generate video that can be displayed for AR while reducing the load on the processor.

FIG. 1 is a schematic block diagram showing the configuration of the video generation device according to the first embodiment. FIG. 2 is a diagram showing an example of the 3D pseudo space in the first embodiment. FIG. 3 is a diagram showing an example of the relationship between the videos output by the video generation method in the first embodiment and the eyes. FIG. 4 is a diagram showing an example of the 3D pseudo space in the second embodiment. FIG. 5 is a diagram showing an example of the 3D pseudo space in the second embodiment.

(First embodiment)
FIG. 1 is a diagram illustrating an example of a video generation device 10 according to the present embodiment. FIG. 1 also shows the video display unit 11. The video display unit 11 is provided in a video display device such as an eyeglass-type terminal, smart glasses, or a head-mounted display. The video generation device 10 may be provided in the video display device, or may be provided in a terminal device, such as a smartphone, that can be connected to the video display device. As shown in FIG. 1, the video generation device 10 (video generation method) in this embodiment includes a visual field setting unit (visual field setting process) 101, an interpupillary distance setting unit (interpupillary distance setting process) 102, a video rendering unit (video rendering process) 103, and a video shift unit (video shift process) 104.

The visual field setting unit 101 sets the size of the visual field (FoV: Field of View) used in the video rendering unit 103, based on the visual field information input from the video display unit 11. The visual field is, for example, the vertical angle used when the camera in the 3D pseudo space performs perspective projection. The visual field setting unit 101 outputs the set visual field value to the video rendering unit 103. The visual field information is, for example, the visual field value itself. The visual field value may be a vertical value, a diagonal value, or a horizontal value, and may be one or more of these values. Alternatively, the visual field information is an ID such as the vendor ID or product ID of the video display device provided with the video display unit 11. The video generation device 10 can store the ID and the visual field value in association with each other; that is, it can have a table (database) that associates IDs with visual field values. The visual field setting unit 101 can determine the visual field value based on this ID, for example by comparing the ID with its own database. The visual field setting unit 101 can also query a server on the Internet with this ID and receive the visual field value; in that case, the table (database) on the server is used.
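The ID-based lookup described above (compare the ID against a local table first, then fall back to a server query) could be sketched as follows. All names, IDs, and FoV values here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical local table mapping (vendor ID, product ID) to a
# vertical field-of-view value in degrees; the entries are made up.
LOCAL_FOV_TABLE = {
    ("vendor_a", "hmd_1"): 90.0,
    ("vendor_b", "glass_2"): 40.0,
}

def resolve_fov(vendor_id, product_id, query_server=None):
    """Return a vertical FoV value for a display-device ID.

    Checks the local database first; if the ID is unknown and a server
    query function is provided, asks the server-side table instead.
    """
    key = (vendor_id, product_id)
    if key in LOCAL_FOV_TABLE:
        return LOCAL_FOV_TABLE[key]
    if query_server is not None:
        return query_server(vendor_id, product_id)
    raise KeyError(f"unknown display device: {key}")
```

The server query is passed in as a callable so the sketch stays self-contained; a real implementation would issue a network request here.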

The interpupillary distance setting unit 102 sets the interpupillary distance (PD: Pupillary Distance) and outputs it to the video rendering unit 103 and the video shift unit 104. The interpupillary distance may be a fixed value, or the user may input a value manually. Alternatively, the interpupillary distance setting unit 102 may set an interpupillary distance measured by a sensor provided in the video display unit 11 or by a wearable sensor.

The video rendering unit 103 renders video by having the camera shoot an object in the 3D pseudo space. At this time, the visual field of the camera can be set to the visual field value input from the visual field setting unit 101.

The video shift unit 104 receives display information from the video display unit 11; this display information may also be obtained from the Internet. The video shift unit 104 shifts the video input from the video rendering unit 103 based on the interpupillary distance input from the interpupillary distance setting unit 102 and the display information, and converts it into two videos. The video shift unit 104 outputs the two generated videos to the video display unit 11. The video before the shift can be referred to as a reference video.

FIG. 2 shows an example of the 3D pseudo space used by the video rendering unit 103. In FIG. 2, 20 is a view of the 3D pseudo space seen from the height direction, and 21 is a view of the 3D pseudo space seen from the side. For example, the north direction of the 3D pseudo space can be the z axis, the height direction the y axis, and the east direction the x axis. In that case, 20 is the 3D pseudo space viewed from the plus direction of the y axis, and 21 is the 3D pseudo space viewed from the plus direction of the x axis. The camera 201 can generate a video by perspective projection, based on the positional relationship between itself and the object 202 in the 3D pseudo space and on the visual field set in the visual field setting unit 101. In this example, the object 202 is a rectangular parallelepiped. The perspective projection can be performed based on the visual field 211, which can be the visual field set in the visual field setting unit 101. In this example, the vertical visual field is set; the horizontal visual field can be set from the display size and the aspect ratio of the video display unit 11. Reference numeral 203 denotes the distance between the camera 201 and the object 202.
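The patent does not spell out how the horizontal visual field follows from the vertical one and the aspect ratio; one common relation in perspective projection is tan(hFoV/2) = tan(vFoV/2) × (width/height), sketched below as an illustration.

```python
import math

def horizontal_fov(vertical_fov_deg, width, height):
    """Derive a horizontal FoV (degrees) from a vertical FoV and the
    display's pixel dimensions, using the standard perspective-projection
    relation tan(h/2) = tan(v/2) * (width / height)."""
    aspect = width / height
    v = math.radians(vertical_fov_deg)
    return math.degrees(2.0 * math.atan(math.tan(v / 2.0) * aspect))
```

For a square display the horizontal and vertical FoV coincide; for a wide display the horizontal FoV is larger.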

 FIG. 3 shows an example in which the video shift unit 104 shifts the image generated by the video rendering unit 103. Reference numerals 301 and 311 denote displays of the video display unit 11. 302 denotes the horizontal center line of the display 301, and 312 denotes the center line of the display 311. 303 represents the user's left eye, and 313 represents the user's right eye. 304 represents the center line of the left eye 303, and 314 represents the center line of the right eye 313. 305 denotes the object rendered by the video rendering unit 103, corresponding to 202 in FIG. 2. In the case of 20 in FIG. 2, a video in which the object 305 lies on the center line 302 of the display 301 is generated, and the video shift unit 104 applies a rightward shift to it. Similarly, the video shift unit 104 applies a leftward shift to the video displayed on the display 311. As a result, the user obtains a vergence effect such that the object 306 appears to exist at the distance 203 from the user. The shift amount can be determined based on the field of view set by the visual field setting unit 101, the interpupillary distance set by the interpupillary distance setting unit 102, and the distance between the camera and the object in the 3D pseudo space rendered by the video rendering unit 103.
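One plausible way to compute such a per-eye shift is sketched below. The formula is an assumption for illustration only (the publication does not give a closed form here): each eye is offset by half the interpupillary distance from the single rendering camera, the resulting angular offset for an object at the target distance is taken, and that angle is mapped to pixels through the camera's horizontal field of view.

```python
import math

def per_eye_shift_px(ipd_m, object_distance_m, horizontal_fov_deg, display_width_px):
    """Illustrative per-eye horizontal shift, in pixels, that places an
    object at the desired apparent distance.

    Assumptions (not from the publication): a symmetric view frustum,
    an eye offset of ipd/2 from the rendering camera, and a linear
    tangent-plane mapping from angle to pixels."""
    half_angle = math.atan((ipd_m / 2.0) / object_distance_m)
    half_fov = math.radians(horizontal_fov_deg) / 2.0
    half_width_px = display_width_px / 2.0
    return math.tan(half_angle) / math.tan(half_fov) * half_width_px

# Example: 64 mm interpupillary distance, object 2 m away,
# 90-degree horizontal field of view, 1920-pixel-wide display.
shift = per_eye_shift_px(0.064, 2.0, 90.0, 1920)
```

As expected for vergence, the shift grows as the object moves closer to the viewer and shrinks toward zero for distant objects.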

 FIG. 4 shows the 3D pseudo space of FIG. 2 with a changed setting. An object 402 is the object 202 moved in the horizontal direction. 403 denotes the distance between the camera 201 and the object 402. 404 denotes a line running in the same direction as the horizontal direction of the camera 201 and passing through the object 402. 405 denotes the length of the perpendicular dropped from the camera 201 to the line 404. Letting x be the value of the length 405, the shift amount can be determined, for example, using a polynomial in x. Specifically, the shift amount y can be y = a_n(x − x_0)^n + … + a_1(x − x_0) + a_0. Here, x_0 and the coefficients a_i are values determined by the video display device 11, and can be obtained from the output of a database into which information on the video display device 11 is input. The database may reside in the video generation device 10, or it may be on a server.
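A polynomial of this form can be evaluated efficiently with Horner's method. In the following sketch the coefficient values are placeholders for illustration; in the described system they would come from the database keyed by the video display device.

```python
def shift_amount(x, x0, coeffs):
    """Evaluate y = a_n*(x - x0)**n + ... + a_1*(x - x0) + a_0
    with Horner's method.

    coeffs is [a_n, a_{n-1}, ..., a_1, a_0], highest-order
    coefficient first."""
    t = x - x0
    y = 0.0
    for a in coeffs:
        y = y * t + a
    return y

# Placeholder coefficients (a_2 = 0.5, a_1 = 2.0, a_0 = 1.0), x0 = 1.0:
# y = 0.5*(3-1)**2 + 2.0*(3-1) + 1.0 = 7.0
y = shift_amount(3.0, 1.0, [0.5, 2.0, 1.0])
```

Horner's method uses n multiplications and n additions for a degree-n polynomial, which keeps this per-object computation cheap on the device.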

 As described above, according to the present embodiment, the video rendering unit 103 performs rendering only once and generates, from that single render, two videos that produce a vergence effect, so the load on the processor can be reduced.

 Note that the video shift performed by the video shift unit 104 may be carried out only when the processor load exceeds a threshold. When the load does not exceed the threshold, the video rendering unit 103 can prepare two cameras in the 3D pseudo space, one for the left eye and one for the right eye, and render two videos. The video generated by the left-eye camera can be displayed on the display 301 of FIG. 3, and the video generated by the right-eye camera can be displayed on the display 311 of FIG. 3. In this case, the processing of the video shift unit 104 is skipped. The video rendering unit 103 may itself calculate the processor load.
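The load-dependent branch just described can be sketched as follows. The function and path names are illustrative assumptions, not identifiers from the publication.

```python
def choose_render_path(processor_load, threshold):
    """Select the rendering strategy described above: when the measured
    processor load exceeds the threshold, render once from a single camera
    and derive both eye images by horizontal shifting; otherwise render
    separately with a left-eye camera and a right-eye camera and skip
    the shift processing."""
    if processor_load > threshold:
        return "single_camera_plus_shift"  # cheaper: one render + shift
    return "two_cameras"                   # costlier: one render per eye

# Example: at 90% load against a 70% threshold, fall back to the
# single-render path.
path = choose_render_path(0.9, 0.7)
```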
(Second Embodiment)
 In the first embodiment, the video generation device 10 performs rendering using one camera in the 3D pseudo space and generates a video with vergence by shifting the rendered video in the horizontal direction. In the present embodiment, a case in which a plurality of target objects exist in the 3D pseudo space will be described.

 FIG. 5 shows an example in which an object 501 additionally exists in the 3D pseudo space of FIG. 2. In such a case, if a video is generated by a horizontal shift based only on the positional relationship between the camera 201 and the object 202, the sense of distance between the camera 201 and the object 501 becomes incorrect. To avoid this, for example, when the video rendering unit 103 of FIG. 1 detects that the number of objects in the 3D pseudo space is two or more, it can set a left-eye camera and a right-eye camera instead of the camera 201 and generate a left-eye video and a right-eye video based on them. In this way, the sense of distance of the object 501 can be expressed correctly.

 Alternatively, the object 202 and the object 501 can each carry depth-on information. Specifically, the video rendering unit 103 of FIG. 1 can decide whether to use the left-eye camera and the right-eye camera based on whether the number of objects for which depth-on information is set is one. For example, in FIG. 5, when depth-on information is set for the object 202 but not for the object 501, the video rendering unit 103 renders a video based on the camera 201, and the video shift unit 104 shifts that video to generate the left-eye video and the right-eye video. In this way, the load on the processor can be reduced.
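The decision rule in this paragraph — use the single-render-plus-shift path only when exactly one object carries depth-on information — might be sketched as below. The dictionary representation and the attribute name "depth_on" are illustrative assumptions.

```python
def use_shift_path(objects):
    """Return True when the single-render-plus-shift path applies:
    exactly one object in the 3D pseudo space has depth-on
    information set.  Each object is modeled here as a dict whose
    optional 'depth_on' key marks the depth-on information."""
    depth_on_count = sum(1 for obj in objects if obj.get("depth_on", False))
    return depth_on_count == 1

# Example matching FIG. 5: object 202 has depth-on set, object 501
# does not, so the cheaper shift-based path can be used.
objs = [{"id": 202, "depth_on": True}, {"id": 501}]
use_shift = use_shift_path(objs)
```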

 Note that a program operating in the video generation device, the video generation method, and the video generation program according to one aspect of the present invention is a program that controls a CPU or the like (a program that causes a computer to function) so as to realize the functions of the above-described embodiments according to one aspect of the present invention. Information handled by these devices is temporarily stored in a RAM during processing, is then stored in various ROMs or HDDs, and is read out, modified, and written back by the CPU as necessary. The recording medium storing the program may be any of a semiconductor medium (for example, a ROM or a nonvolatile memory card), an optical recording medium (for example, a DVD, MO, MD, CD, or BD), a magnetic recording medium (for example, a magnetic tape or a flexible disk), or the like. Moreover, the functions of the above-described embodiments are realized not only by executing the loaded program; in some cases, the functions of one aspect of the present invention are realized by processing performed in cooperation with an operating system or other application programs based on the instructions of the program.

 When distributed on the market, the program can be stored and distributed on a portable recording medium, or transferred to a server computer connected via a network such as the Internet. In this case, the storage device of the server computer is also included in one aspect of the present invention. Part or all of the video generation method and the video generation program in the above-described embodiments may be realized as an LSI, which is typically an integrated circuit. The functional blocks of the receiving device may be individually formed into chips, or some or all of them may be integrated into a chip. When the functional blocks are integrated into a circuit, an integrated circuit control unit that controls them is added.

 The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. If integrated circuit technology that replaces LSI emerges through advances in semiconductor technology, an integrated circuit based on that technology can also be used.

 Note that the present invention is not limited to the above-described embodiments. It goes without saying that the video generation method of the present invention is not limited to application to eyeglass-type terminals, but can also be applied to portable devices, wearable devices, and the like.

 Although the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and designs and the like within a scope not departing from the gist of the present invention are also included in the scope of the claims.

 One aspect of the present invention is suitable for use in a video generation method and a video generation program. One aspect of the present invention can be used, for example, in a communication system, a communication device (for example, a mobile phone device, a base station device, a wireless LAN device, or a sensor device), an integrated circuit (for example, a communication chip), a program, or the like.

10 video generation unit
11 video display unit
101 visual field setting unit
102 interpupillary distance setting unit
103 video rendering unit
104 video shift unit
20 3D pseudo space (viewed from the height direction)
21 3D pseudo space (viewed from the side)
201 camera
202 object
203 distance
301, 311 display
302, 312 center line of display
303, 313 eye
304, 314 center line of eye
305, 306 object
402 object
403 distance
404 line passing through the object in the same direction as the horizontal direction of the camera
405 distance
501 object

Claims (6)

1. A video generation method for generating a video by perspective projection, by a camera in a 3D (three-dimensional) pseudo space, of an object in the 3D pseudo space, and displaying the video on a video display device, the method comprising:
 a visual field setting step of setting a visual field of the camera based on visual field information input from the video display device;
 an interpupillary distance setting step of setting an interpupillary distance;
 a video rendering step of rendering a reference video based on the visual field and a positional relationship between the object and the camera; and
 a video shift step of generating two videos by shifting the reference video in a horizontal direction based on the visual field, the interpupillary distance, and the positional relationship between the object and the camera.

2. The video generation method according to claim 1, wherein the video rendering step measures a processor load; when the load is below a threshold, adds a camera in the 3D pseudo space at a location separated from the camera by the interpupillary distance; generates the videos by performing perspective projection with each of the cameras; and skips the processing of the video shift step.

3. The video generation method according to claim 1, wherein, when the number of objects in the 3D pseudo space is two or more, the video rendering step adds a camera in the 3D pseudo space at a location separated from the camera by the interpupillary distance; generates the videos by performing perspective projection with each of the cameras; and skips the processing of the video shift step.

4. The video generation method according to claim 1, wherein, when two or more objects in the 3D pseudo space have depth-on information set, the video rendering step adds a camera in the 3D pseudo space at a location separated from the camera by the interpupillary distance; generates the videos by performing perspective projection with each of the cameras; and skips the processing of the video shift step.

5. A video generation program for causing a computer to execute the video generation method according to claim 1.

6. A video generation device for generating a video by perspective projection, by a camera in a 3D (three-dimensional) pseudo space, of an object in the 3D pseudo space, and displaying the video on a video display device, the device comprising:
 a visual field setting unit that sets a visual field of the camera based on visual field information input from the video display device;
 an interpupillary distance setting unit that sets an interpupillary distance;
 a video rendering unit that renders a reference video based on the visual field and a positional relationship between the object and the camera; and
 a video shift unit that generates two videos by shifting the reference video in a horizontal direction based on the visual field, the interpupillary distance, and the positional relationship between the object and the camera.
PCT/JP2018/011071 2017-04-05 2018-03-20 Video generation device, video generation method, and video generation program Ceased WO2018186169A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017075213A JP2020098945A (en) 2017-04-05 2017-04-05 Video generating device, video generating method and video generation program
JP2017-075213 2017-04-05

Publications (1)

Publication Number Publication Date
WO2018186169A1 true WO2018186169A1 (en) 2018-10-11

Family

ID=63713220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/011071 Ceased WO2018186169A1 (en) 2017-04-05 2018-03-20 Video generation device, video generation method, and video generation program

Country Status (2)

Country Link
JP (1) JP2020098945A (en)
WO (1) WO2018186169A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116076071A (en) * 2020-06-03 2023-05-05 杰瑞·尼姆斯 2D image capture system and display of 3D digital images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11195131A (en) * 1997-12-26 1999-07-21 Canon Inc Virtual reality method and apparatus and storage medium
JP2002073003A (en) * 2000-08-28 2002-03-12 Namco Ltd Stereoscopic image generation device and information storage medium
JP2014192550A (en) * 2013-03-26 2014-10-06 Seiko Epson Corp Head-mounted display device, and control method of head-mounted display device
JP2014199617A (en) * 2013-03-29 2014-10-23 株式会社バンダイナムコゲームス Image generation system and program

Also Published As

Publication number Publication date
JP2020098945A (en) 2020-06-25

Similar Documents

Publication Publication Date Title
US20180192022A1 (en) Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices
CN107590771B (en) 2D video with options for projection viewing in modeled 3D space
US9241155B2 (en) 3-D rendering for a rotated viewer
US20120044241A1 (en) Three-dimensional on-screen display imaging system and method
TW202332263A (en) Stereoscopic image playback apparatus and method of generating stereoscopic images thereof
CN101180891A (en) Stereoscopic image display device, stereoscopic image display method, and computer program
CN110892717A (en) Image processor and control method of image processor
CN106797462B (en) Multi-view image display device and control method thereof
US9167225B2 (en) Information processing apparatus, program, and information processing method
JP2022051978A (en) Image processing device, image processing method, and program
CN111656409B (en) Information processing device and information processing method
KR20170065208A (en) Method and apparatus for processing 3-dimension image, and graphic processing unit
US9225968B2 (en) Image producing apparatus, system and method for producing planar and stereoscopic images
CN112752085A (en) Naked eye 3D video playing system and method based on human eye tracking
CN114513646B (en) Method and device for generating panoramic video in three-dimensional virtual scene
WO2018186169A1 (en) Video generation device, video generation method, and video generation program
US11477419B2 (en) Apparatus and method for image display
WO2012021129A1 (en) 3d rendering for a rotated viewer
KR102223339B1 (en) Method for providing augmented reality-video game, device and system
WO2018186168A1 (en) Video generation device, video generation method, and video generation program
EP4328657B1 (en) Method and computer device for 3d scene generation
US10757401B2 (en) Display system and method for display control of a video based on different view positions
CN103313075A (en) Image processing device, image processing method and non-transitory computer readable recording medium for recording image processing program
TWI879032B (en) Stereoscopic display system
JP2020167657A (en) Image processing equipment, head-mounted display, and image display method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18781423

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18781423

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP