CN112004162A - Online 3D content playing system and method - Google Patents
- Publication number
- CN112004162A (application CN202010933418.3A)
- Authority
- CN
- China
- Prior art keywords
- viewpoint
- resource
- file
- type
- module
- Prior art date
- Legal status: Granted (status assumed by the database; not a legal conclusion)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
Abstract
The invention relates to an online 3D content playing system and method. The system comprises a display layer, a service layer, a rendering layer, and a display terminal connected in sequence. The display layer comprises a resource input module, and the service layer comprises a resource identification module and a viewpoint type identification module. The resource input module sends a resource file to the resource identification module, which identifies the type of the resource file (model file, picture file, or video file) and is connected with the viewpoint type identification module. The viewpoint type identification module identifies the viewpoint type of the resource file and generates images according to the different viewpoint types; the rendering layer synthesizes and renders the images and sends them to the display terminal for display. The viewpoint types comprise 2D, left-right viewpoints, top-bottom viewpoints, eight viewpoints, and nine viewpoints. The invention can automatically identify the viewpoint type and alleviate the relative shortage of 3D display content resources.
Description
Technical Field
The invention relates to the technical field of 3D content playing, in particular to an online 3D content playing system and method.
Background
3D imaging, i.e., stereoscopic imaging, uses naked-eye 3D devices, polarized 3D devices, and the like, together with different synthesis schemes, to give the user a sense of depth.
For users who lack knowledge of 3D imaging, having to select the viewpoint type manually adds considerable inconvenience, and traditional 3D imaging provides no viewpoint type identification function. Moreover, traditional 3D imaging can only show pictures or videos, so 3D display content resources remain relatively scarce.
Disclosure of Invention
The invention aims to provide an online 3D content playing system and method that can automatically identify the viewpoint type and alleviate the relative shortage of 3D display content resources.
In order to achieve the purpose, the invention provides the following scheme:
The online 3D content playing system comprises a display layer, a service layer, a rendering layer, and a display terminal connected in sequence. The display layer comprises a resource input module, and the service layer comprises a resource identification module and a viewpoint type identification module. The resource input module sends a resource file to the resource identification module, which identifies the type of the resource file (model file, picture file, or video file) and is connected with the viewpoint type identification module. The viewpoint type identification module identifies the viewpoint type of the resource file and generates images according to the different viewpoint types; the rendering layer synthesizes and renders the images and sends them to the display terminal for display. The viewpoint types comprise 2D, left-right viewpoints, top-bottom viewpoints, eight viewpoints, and nine viewpoints.
Optionally, the rendering layer includes a shader-based image processing module, which performs image synthesis using different shaders for different display terminals and different viewpoint types.
Optionally, the display layer further includes a user login module and a display terminal selection module, and the display terminal selection module is connected to the display terminal.
Optionally, the service layer further includes a permission module and a data loading module, and the permission module is connected to the user login module.
Optionally, the online 3D content playing system further includes a data layer, where the data layer includes a playlist recording module, a resource viewpoint type recording module, and user-related login data.
An online 3D content playing method includes:
acquiring a resource file;
judging whether the resource file is a model file;
if so, identifying the viewpoint type from the model file;
generating images of different viewing angles through a virtual camera according to the viewpoint type;
synthesizing and displaying the images of different viewing angles;
if the resource file is not a model file, judging whether it is a video file;
if so, cloud-decoding the video file to obtain decoded pictures;
identifying the viewpoint type of the decoded pictures;
generating images according to the viewpoint type, and synthesizing and displaying them;
if the resource file is not a video file either, it is a picture, and its viewpoint type is identified directly;
generating images according to the viewpoint type, and synthesizing and displaying them.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the online 3D content playing system can synthesize a 3D stereo image online through a universal browser, can automatically identify the playing viewpoint type by selecting different terminal types and matching the terminal to present a 3D vision online, can avoid the installation of a related video player, synchronously solves the 3D imaging of a model file, and can display various types of resources.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a structural diagram of the online 3D content playing system according to the present invention;
FIG. 2 is a schematic view of the viewpoint type of the present invention;
FIG. 3 is a schematic diagram of the nine viewpoints of the present invention;
FIG. 4 is a schematic view of the virtual camera imaging viewing distance of the present invention;
FIG. 5 is a schematic diagram of rendering layer processing according to the present invention;
fig. 6 is a flowchart of an online 3D content playing method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The invention aims to provide an online 3D content playing system and method that can automatically identify the viewpoint type and alleviate the relative shortage of 3D display content resources.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a structural diagram of the online 3D content playing system according to the present invention. As shown in fig. 1, the online 3D content playing system includes a display layer 1, a service layer 2, a rendering layer 3, and a display terminal 4, which are connected in sequence. The display layer 1 includes a resource input module; the service layer 2 includes a resource identification module and a viewpoint type identification module. The resource input module sends a resource file to the resource identification module, which identifies the type of the resource file (model file, picture file, or video file) and is connected to the viewpoint type identification module. The viewpoint type identification module identifies the viewpoint type of the resource file and generates images according to the different viewpoint types; the rendering layer 3 synthesizes and renders the images and sends them to the display terminal 4 for display. The viewpoint types include 2D, left-right viewpoints, top-bottom viewpoints, eight viewpoints, and nine viewpoints. Fig. 2 is a schematic view of the viewpoint types of the present invention. Fig. 3 is a schematic diagram of the nine viewpoints of the present invention. Fig. 4 is a schematic view of the virtual camera imaging viewing distance according to the present invention.
The rendering layer 3 includes image processing modules of shaders for performing image synthesis using different shaders for different display terminals 4 and different viewpoint types.
The resource input module in the display layer 1 specifically comprises a local resource selection unit and a cloud resource selection unit. The user can select the content to be displayed, which may be a model file, a picture file, or a video file.
The display layer 1 further comprises a user login module and a display terminal 4 selection module, and the display terminal 4 selection module is connected with the display terminal 4.
The service layer 2 further comprises a permission module and a data loading module, and the permission module is connected with the user login module. The permission module distinguishes user permissions and is used only in conjunction with the user login module.
The resource identification module identifies the data format from the resource file extension and distinguishes the resource file types accordingly: model file, picture file, or video file. The recognizable formats are recorded in the background database, and the number of recognizable types depends on the formats supported by the picture, video, and model parsing libraries. After resource identification is completed, processing continues with the normal flow.
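The extension-based classification described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the extension tables and function name are assumptions.

```python
# Hypothetical extension tables for illustration only.
MODEL_EXTS = {".obj", ".fbx", ".gltf", ".glb"}
PICTURE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}
VIDEO_EXTS = {".mp4", ".avi", ".mkv", ".mov"}

def identify_resource_type(filename: str) -> str:
    """Classify a resource file as 'model', 'picture', 'video', or
    'unknown' from its extension, mirroring the module's described role."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in MODEL_EXTS:
        return "model"
    if ext in PICTURE_EXTS:
        return "picture"
    if ext in VIDEO_EXTS:
        return "video"
    return "unknown"
```

In practice the set of recognizable extensions would come from the parsing libraries mentioned above rather than hard-coded tables.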
The viewpoint type identification module operates only on pictures and videos. For a picture, the current image data is used directly; for a video resource, several frames are extracted. Each image then undergoes the following splitting process:
Let the length and width of the original image be width and height. For the left and right images:
widthL = widthR = width * 0.5f; heightL = heightR = height;
Rectangle rect_L = new Rectangle(0, 0, width / 2, height);
Rectangle rect_R = new Rectangle(width / 2, 0, width / 2, height);
The left and right image data are divided in this way.
The image is decomposed into blocks: for left-right viewpoints it is split into left and right images, for top-bottom viewpoints into upper and lower images, and for eight or nine viewpoints into nine images. The viewpoint is then judged as follows:
the method comprises the following steps: perceptual hashing algorithm
1. For convenience of calculation and improvement of recognition accuracy, the acquired block map is subjected to image reduction processing, and the image size is reduced to 32 × 32 or 8 × 8.
2. And converting the reduced image into a Gray-scale image, acquiring the Gray-scale value of each pixel bit, and acquiring an array of Gray-scale values by using a basic formula Gray-scale of R0.299+ G0.587+ B0.114.
3. The mean of all gray values is calculated, summed and summed, divided by the total number, and the mean of the gray values is calculated.
4. And comparing the calculated value with each bit gray value in the array, wherein the bit which is greater than or equal to the average value is marked with 1, and the bit which is less than the average value is marked with 0, so as to obtain a hash array.
Once the hash array of each image has been calculated, comparison can begin: the hash arrays of the two images are compared bit by bit to obtain a similarity ratio. A threshold can be set here, for example 95%; when the similarity exceeds 95%, the two images are judged similar.
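The hashing and comparison steps above can be sketched in a few lines of pure Python. This is an illustrative sketch under the description's own formulas; the function names and the bit-agreement similarity measure are our own choices, not from the patent.

```python
def to_gray(rgb_rows):
    """Step 2: convert rows of (R, G, B) pixels to gray values with
    Gray = R*0.299 + G*0.587 + B*0.114."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_rows]

def average_hash(gray_rows):
    """Steps 3-4: mark each pixel 1 if it is >= the mean gray value,
    else 0, producing the hash array."""
    flat = [v for row in gray_rows for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v >= mean else 0 for v in flat]

def similarity(h1, h2):
    """Fraction of positions where two equal-length hash arrays agree."""
    return sum(1 for a, b in zip(h1, h2) if a == b) / len(h1)
```

Two sub-images would then be judged similar when `similarity(...)` exceeds the chosen threshold (e.g. 0.95).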
If the similarity of the left and right halves exceeds the threshold, the resource is set as left-right viewpoints.
Otherwise, the image is split top and bottom, hash arrays are calculated with the perceptual hashing algorithm and compared; if the similarity of the upper and lower halves exceeds the threshold, the resource is set as top-bottom viewpoints.
Otherwise, the image is split into a 3 × 3 grid, hash arrays are calculated and compared, and the last eight tiles are judged against the first tile one by one. When all eight comparisons exceed the threshold, the resource is set as nine viewpoints; when exactly seven do, it is set as eight viewpoints; otherwise the resource is judged a 2D resource.
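The decision cascade described in the preceding paragraphs can be sketched as follows. It is a simplified illustration assuming image dimensions divisible by 2 and 3; the similarity function is passed in, and `exact_sim` is a toy stand-in for the perceptual-hash comparison described in the text.

```python
def classify_viewpoint(img, sim, threshold=0.95):
    """Viewpoint decision cascade: left/right halves, then top/bottom
    halves, then a 3x3 grid. `img` is a 2-D list of gray values and
    `sim(a, b)` returns a similarity in [0, 1] between two sub-images."""
    h, w = len(img), len(img[0])
    left = [row[: w // 2] for row in img]
    right = [row[w // 2 :] for row in img]
    if sim(left, right) >= threshold:
        return "left-right"
    top, bottom = img[: h // 2], img[h // 2 :]
    if sim(top, bottom) >= threshold:
        return "top-bottom"
    # 3x3 grid: compare tiles 2..9 against tile 1.
    tiles = []
    for r in range(3):
        for c in range(3):
            tiles.append([row[c * w // 3 : (c + 1) * w // 3]
                          for row in img[r * h // 3 : (r + 1) * h // 3]])
    matches = sum(1 for t in tiles[1:] if sim(tiles[0], t) >= threshold)
    if matches == 8:
        return "nine-view"
    if matches == 7:
        return "eight-view"
    return "2D"

def exact_sim(a, b):
    """Toy similarity for demonstration: 1.0 on exact match, else 0.0."""
    return 1.0 if a == b else 0.0
```

A real implementation would plug in the hash-array similarity with the 95% threshold in place of `exact_sim`.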
For a model file, a corresponding number of virtual cameras is generated in real time according to the selected display terminal 4, shooting from different positions and angles to generate images of different viewpoints, i.e., nine viewpoints.
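One simple way to place such virtual cameras is to fan them across a horizontal arc aimed at the model, as sketched below. The radius, fan angle, and camera count here are illustrative assumptions; the patent does not specify the camera geometry.

```python
import math

def camera_positions(n=9, radius=5.0, fan_deg=30.0):
    """Return n (x, z) camera positions fanned across `fan_deg` degrees
    on an arc of the given radius, all aimed at the origin where the
    model sits; the middle camera faces the model head-on."""
    if n == 1:
        return [(0.0, radius)]
    step = math.radians(fan_deg) / (n - 1)
    start = -math.radians(fan_deg) / 2
    return [(radius * math.sin(start + i * step),
             radius * math.cos(start + i * step)) for i in range(n)]
```

Each position would then be handed to a rendering engine's camera, producing one image per viewpoint.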
After the recognition and the image generation are completed, the generated image is output to the rendering layer 3.
The rendering layer 3 is mainly a shader-based image processing module; different shaders must be used for image synthesis depending on the terminal display device and the viewpoint type. FIG. 5 is a schematic diagram of rendering layer processing according to the present invention.
After the image group is acquired, an interlaced image needs to be rendered and synthesized according to the selected device type and parameters, and is output to a device for display.
For example, for the transverse interleaving of polarized 3D, the resolution of the current image is obtained for the two left and right views, and the pixels are distributed by odd and even column number in a one-pixel-to-one-pixel transverse interleaving process at the current resolution:
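The column-parity interleaving just described can be sketched as follows. This is a minimal illustration; the convention that even columns take the left view and odd columns the right view is an assumption for demonstration.

```python
def interleave_columns(left, right):
    """Column-parity interleave for a polarized 3D panel: even columns
    come from the left view, odd columns from the right view. Both
    images are equal-size 2-D lists of pixel values."""
    out = []
    for lrow, rrow in zip(left, right):
        out.append([lrow[c] if c % 2 == 0 else rrow[c]
                    for c in range(len(lrow))])
    return out
```

In the actual system this per-pixel selection would run in a shader rather than on the CPU.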
For example, for a multi-view naked-eye 3D device, RGB rearrangement is performed according to the grating arrangement format and the device's grating characteristic parameters, so that the synthesized image is consistent with the grating arrangement. Let the grating period be p, the grating slant angle be θ, the sub-pixel width of the 2D display screen be Wp, and the number of viewpoints be K; the relation is:
p · cos θ = K · Wp
where K takes an integer value.
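As a worked example: assuming the standard slanted-grating relation p · cos θ = K · Wp (our reading of the elided expression, with K rounded to an integer), the viewpoint count follows directly from the panel parameters. The numeric values below are illustrative, not from the patent.

```python
import math

def viewpoint_count(p_mm, theta_deg, wp_mm):
    """Number of sub-pixels K covered by one grating period along a
    panel row: K = round(p * cos(theta) / Wp)."""
    return round(p_mm * math.cos(math.radians(theta_deg)) / wp_mm)
```

For instance, a 0.9 mm grating period over 0.1 mm sub-pixels with no slant gives a nine-viewpoint arrangement.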
The online 3D content playing system further comprises a data layer, wherein the data layer comprises a playlist recording module, a resource viewpoint type recording module and user-related login data.
The display terminal 4 comprises a naked eye 3D device terminal, a polarized light 3D device terminal and the like.
The present invention further provides an online 3D content playing method, fig. 6 is a flowchart of the online 3D content playing method, as shown in fig. 6, the method includes:
step 101: and acquiring the resource file.
Step 102: and judging whether the type of the resource file is a model file.
Step 103: and if the model file is the model file, identifying the viewpoint type according to the model file.
Step 104: and generating images with different viewing angles through the virtual camera according to the viewpoint types.
Step 105: and synthesizing and displaying the images with different viewing angles.
Step 106: if not, judging whether the type of the resource file is a video file.
Step 107: and if the video file is the video file, cloud decoding is carried out on the video file to obtain a decoded picture.
Step 108: identifying the decoded view type.
Step 109: and generating images according to the viewpoint types, and synthesizing and displaying the images.
Step 110: if the video file is not the video file, the type of the resource file is a picture, and the view point type of the picture is directly identified.
Step 111: and generating images according to the viewpoint types, and synthesizing and displaying the images.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to assist understanding of the system and its core concepts; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (6)
1. An online 3D content playing system, characterized by comprising a display layer, a service layer, a rendering layer, and a display terminal which are connected in sequence, wherein the display layer comprises a resource input module, the service layer comprises a resource identification module and a viewpoint type identification module, the resource input module is used for sending a resource file to the resource identification module, the resource identification module is used for identifying the type of the resource file, the type of the resource file comprising a model file, a picture file and a video file, the resource identification module is connected with the viewpoint type identification module, the viewpoint type identification module is used for identifying the viewpoint type of the resource file and generating images according to different viewpoint types, and the rendering layer is used for synthesizing and rendering the images and sending them to the display terminal for display, the viewpoint types comprising 2D, left-right viewpoints, top-bottom viewpoints, eight viewpoints, and nine viewpoints.
2. The online 3D content playing system according to claim 1, wherein the rendering layer comprises an image processing module of a Shader for performing image composition using different shaders for different display terminals and different viewpoint types.
3. The on-line 3D content playing system according to claim 1, wherein the presentation layer further comprises a user login module and a display terminal selection module, and the display terminal selection module is connected to the display terminal.
4. The online 3D content playing system according to claim 1, wherein the service layer further includes a rights module and a data loading module, and the rights module is connected to the user login module.
5. The online 3D content playing system according to claim 1, wherein the online 3D content playing system further comprises a data layer, the data layer comprising a playlist recording module, a resource viewpoint type recording module, and user-related login data.
6. An online 3D content playing method applied to the system according to any one of claims 1 to 5, the method comprising:
acquiring a resource file;
judging whether the resource file is a model file;
if so, identifying the viewpoint type from the model file;
generating images of different viewing angles through a virtual camera according to the viewpoint type;
synthesizing and displaying the images of different viewing angles;
if the resource file is not a model file, judging whether it is a video file;
if so, cloud-decoding the video file to obtain decoded pictures;
identifying the viewpoint type of the decoded pictures;
generating images according to the viewpoint type, and synthesizing and displaying them;
if the resource file is not a video file either, it is a picture, and its viewpoint type is identified directly;
generating images according to the viewpoint type, and synthesizing and displaying them.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010933418.3A CN112004162B (en) | 2020-09-08 | 2020-09-08 | Online 3D content playing system and method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010933418.3A CN112004162B (en) | 2020-09-08 | 2020-09-08 | Online 3D content playing system and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112004162A true CN112004162A (en) | 2020-11-27 |
| CN112004162B CN112004162B (en) | 2022-06-21 |
Family
ID=73468881
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010933418.3A Active CN112004162B (en) | 2020-09-08 | 2020-09-08 | Online 3D content playing system and method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112004162B (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2009157708A2 (en) * | 2008-06-24 | 2009-12-30 | Samsung Electronics Co., Ltd. | Method and apparatus for processing 3d video image |
| CN101980545A (en) * | 2010-11-29 | 2011-02-23 | 深圳市九洲电器有限公司 | Method for automatically detecting 3DTV video program format |
| CN102395037A (en) * | 2011-06-30 | 2012-03-28 | 深圳超多维光电子有限公司 | Format recognition method and device |
| CN102752620A (en) * | 2012-06-20 | 2012-10-24 | 四川长虹电器股份有限公司 | Television broadcasting method of 3D (three dimensional) videos |
| CN102957930A (en) * | 2012-09-03 | 2013-03-06 | 雷欧尼斯(北京)信息技术有限公司 | Method and system for automatically identifying 3D (Three-Dimensional) format of digital content |
| CN103198294A (en) * | 2013-02-22 | 2013-07-10 | 广州市朗辰电子科技有限公司 | Identification method of viewpoint type of video or image |
| US20130182067A1 (en) * | 2010-06-02 | 2013-07-18 | Satoshi Otsuka | Reception device, display control method, transmission device, and transmission method |
| CN103593837A (en) * | 2012-08-15 | 2014-02-19 | 联咏科技股份有限公司 | Method for automatically detecting image format and related device |
| CN108830198A (en) * | 2018-05-31 | 2018-11-16 | 上海玮舟微电子科技有限公司 | Recognition methods, device, equipment and the storage medium of video format |
| CN110574075A (en) * | 2017-12-14 | 2019-12-13 | 佳能株式会社 | System, method and program for generating virtual viewpoint images |
| CN111061686A (en) * | 2019-10-24 | 2020-04-24 | 京东数字科技控股有限公司 | Resource processing method, device, server and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |