CN111050156A - Projection method and system based on four-fold screen field and four-fold screen field - Google Patents
- Publication number: CN111050156A (application CN201811186078.1A)
- Authority
- CN
- China
- Prior art keywords
- azimuth
- projection
- virtual
- angle
- viewing
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3188—Scale or resolution adjustment
Abstract
The invention discloses a projection method and system based on a four-fold screen field, and the four-fold screen field itself, relating to the technical field of image processing. The method comprises the following steps: acquiring position reference information corresponding to a viewing position, and a virtual scene corresponding to the four-fold screen field at its current projection size; calculating the azimuth viewing angles of the viewing position in four azimuths from the position reference information; generating, from the virtual scene, a virtual picture for each azimuth viewing angle; and fusing the virtual pictures of the four azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, then projecting that scene picture onto the four-fold screen field. The four-fold screen field can change the size of its projection area according to actual requirements and is therefore highly adaptable; and because the projection changes with the viewing position, the viewer genuinely sees a stereoscopic scene image.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a projection method and system based on a four-fold screen field, and to the four-fold screen field itself.
Background
At present, multi-screen fusion is mostly limited to planar fusion, i.e. multiple screens joined together and fused into one flat image. Stereoscopic multi-screen fusion imaging is used far less; its fusion process relies on manual splicing in three-dimensional software, and it appears mainly in commercial settings such as furniture showrooms and model-home displays.
During such displays, the real three-dimensional space is a six-sided structure like a room, with a fixed size; the space cannot be resized to let different users experience different room sizes while viewing. Moreover, projection often does not need all six walls, which wastes space.
In addition, multi-screen fusion imaging in three-dimensional space typically fixes the viewing angle, which then cannot be updated in real time as the viewer moves. With a fixed viewing angle, a viewer standing anywhere else sees a distorted stereoscopic scene image, which degrades the viewing experience and prevents the viewer from truly seeing a stereoscopic scene.
Disclosure of Invention
The invention aims to provide a projection method and system based on a four-fold screen field, and the four-fold screen field itself, so that a viewer can view a stereoscopic scene image in real time, improving the user experience.
The technical scheme provided by the invention is as follows:
a projection method based on a four-fold screen field, wherein the four-fold screen field has a variable projection size and consists of four projection surfaces. The method comprises the following steps: acquiring position reference information corresponding to a viewing position, and a virtual scene corresponding to the four-fold screen field at its current projection size; calculating the azimuth viewing angles of the viewing position in four azimuths from the position reference information; generating, from the virtual scene, a virtual picture for each azimuth viewing angle; and fusing the virtual pictures of the four azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, then projecting the scene picture onto the four-fold screen field.
In the technical scheme, the size (of the projection area) of the four-fold screen field is adjustable, and different requirements can be met. And the projected scene picture can change along with the change of the watching position, thereby improving the effect of watching the three-dimensional scene image by the watcher.
Further, the four-fold screen field comprises a projection surface located in front and three other projection surfaces, none of which is above or below the front projection surface. Calculating the azimuth viewing angles of the viewing position in the four azimuths from the position reference information comprises: calculating the azimuth viewing angle of one azimuth from the position reference information; and deriving the azimuth viewing angles of the remaining three azimuths from the angular relations between the calculated azimuth and the other three azimuths.
In this scheme, once one azimuth viewing angle has been calculated, the other azimuth viewing angles are derived directly from the angular relations between the azimuths, which is convenient and fast.
Further, calculating the azimuth viewing angles of the viewing position in the four azimuths from the position reference information comprises: calculating the azimuth viewing angle of each of the four azimuths by combining the position reference information with a viewing-angle calculation formula specific to that azimuth.
In this scheme, each azimuth has its own viewing-angle calculation formula, making the calculation results more accurate and reliable.
Further, generating from the virtual scene a virtual picture for each azimuth viewing angle comprises: when the length of the viewing-angle region corresponding to an azimuth viewing angle exceeds the length of the projection surface for that azimuth, calculating a cropping region for that azimuth viewing angle and cutting the corresponding virtual picture out of the virtual scene according to the cropping region and its azimuth viewing angle; and when the length of the viewing-angle region does not exceed the length of the projection surface, cutting the virtual picture for that azimuth viewing angle directly out of the virtual scene.
In this scheme, the virtual picture is obtained either by cropping or by direct cutting depending on the situation, so the resulting scene picture has a stereoscopic quality and the user's viewing experience is improved.
Further, calculating the cropping region corresponding to an azimuth viewing angle specifically comprises: calculating viewing-angle picture parameters for each azimuth from the position reference information and that azimuth's viewing angle; and calculating the cropping region for each azimuth from its viewing-angle picture parameters and the viewing-space parameters.
In the technical scheme, the calculation method of the cutting area is provided, and calculation is convenient.
Further, generating from the virtual scene a virtual picture for each azimuth viewing angle comprises: when the X coordinate in the position reference information lies on the X-axis centre line but the Y coordinate does not lie on the Y-axis centre line, cutting the corresponding virtual pictures directly out of the virtual scene according to the azimuth viewing angles along the X axis; and, for the azimuth viewing angles associated with the remaining axis, calculating the corresponding cropping regions and cutting the corresponding virtual pictures out of the virtual scene according to each cropping region and its azimuth viewing angle.
Further, generating from the virtual scene a virtual picture for each azimuth viewing angle comprises: when the X coordinate in the position reference information does not lie on the X-axis centre line but the Y coordinate lies on the Y-axis centre line, cutting the corresponding virtual pictures directly out of the virtual scene according to the azimuth viewing angles along the Y axis; and, for the azimuth viewing angles associated with the remaining axis, calculating the corresponding cropping regions and cutting the corresponding virtual pictures out of the virtual scene according to each cropping region and its azimuth viewing angle.
Further, generating from the virtual scene a virtual picture for each azimuth viewing angle comprises: when the X coordinate does not lie on the X-axis centre line and the Y coordinate does not lie on the Y-axis centre line, calculating the cropping region for every azimuth viewing angle; and cutting the corresponding virtual picture out of the virtual scene according to each azimuth viewing angle and its cropping region.
In this scheme, the virtual pictures are obtained in different ways depending on the coordinates in the position reference information, guaranteeing the viewing effect at the viewing position.
Further, calculating the cropping region corresponding to an azimuth viewing angle specifically comprises: analysing the positional deviation of the position reference information relative to preset position information, and calculating the corresponding cropping region from that positional deviation.
In this scheme, another way of calculating the cropping region is provided, broadening the method's applicability.
The invention also provides a four-fold screen field comprising four projection surfaces, each connected to at least two of the other three; the projection size of at least one projection surface is variable.
In the technical scheme, the size (of the projection area) of the four-fold screen field is adjustable, and different requirements can be met.
Furthermore, the projection surface with variable projection size is a movable wall whose projection size is changed by moving it; or it is a foldable, retractable movable wall whose projection size is changed by folding and retracting; or it consists of several pull-down curtains whose projection size is changed by raising and lowering them.
In the technical scheme, various different modes for changing the projection size of the projection surface provide various choices, and the application is wide.
The invention also provides a projection system based on a four-fold screen field, comprising a smart device, projection equipment and the four-fold screen field. The four-fold screen field comprises four projection surfaces, each connected to at least two of the other three, the projection size of at least one projection surface being variable. The system further comprises a position acquisition module for acquiring position reference information corresponding to the viewing position. The smart device comprises: a scene acquisition module for acquiring the virtual scene corresponding to the four-fold screen field at its current projection size; a viewing-angle calculation module for calculating the azimuth viewing angles of the viewing position in four azimuths from the position reference information; an image generation module for generating, from the virtual scene, a virtual picture for each azimuth viewing angle; and an image fusion module for fusing the virtual pictures of the four azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position. The projection equipment projects the scene picture onto the four-fold screen field, the inner side of which forms the viewing space containing the viewing position.
In the technical scheme, the size (of the projection area) of the four-fold screen field is adjustable, and different requirements can be met. And the projected scene picture can change along with the change of the watching position, thereby improving the effect of watching the three-dimensional scene image by the watcher.
Compared with the prior art, the projection method and system based on a four-fold screen field, and the four-fold screen field itself, have the following advantages:
the four-fold screen field can change the size of a projection area according to actual requirements, and has high adaptability; and the projection effect changes along with the change of the watching position, so that a viewer really watches the stereoscopic scene image.
Drawings
The above features, technical features, advantages and implementations of a four-fold screen field will be further described in the following description of preferred embodiments in a clearly understandable manner, in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of the projection method based on a four-fold screen field according to the present invention;
FIG. 2 is a flow chart of another embodiment of the projection method based on a four-fold screen field according to the present invention;
FIG. 3 is a flow chart of a further embodiment of the projection method based on a four-fold screen field according to the present invention;
FIG. 4 is a flow chart of yet another embodiment of the projection method based on a four-fold screen field according to the present invention;
FIG. 5 is a schematic diagram of a projection system of an embodiment of the present invention based on a four-fold screen field;
FIG. 6 is a schematic view of the arrangement of the projection plane according to the present invention;
FIG. 7 is a schematic view of the viewing angle at various orientations of a viewpoint/viewing position in accordance with the present invention;
FIG. 8 is a schematic view of the viewing angle at various orientations of another viewpoint/viewing position in accordance with the present invention;
FIG. 9 is a schematic view of a perspective of another viewpoint/viewing position in various orientations of the present invention;
FIG. 10 is a schematic view of cropping in a direction in front of a viewpoint/viewing position in accordance with the present invention;
FIG. 11 is a schematic view of cropping at a view point/viewing position rear orientation in accordance with the present invention;
FIG. 12 is a schematic diagram of cropping in the left direction from a viewpoint/viewing position in accordance with the present invention;
fig. 13 is a schematic diagram of cropping in the right direction from a viewpoint/viewing position in the present invention.
The reference numbers illustrate:
10. smart device; 11. scene acquisition module; 12. viewing-angle calculation module; 13. image generation module; 14. image fusion module; 20. projection device; 30. position acquisition module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
According to an embodiment of the present invention, as shown in fig. 1, a projection method based on a four-fold screen field includes: the four-fold screen field with the variable projection size is composed of four projection surfaces.
Specifically, the four projection surfaces can be positioned in many ways, for example: a four-fold screen field consisting of the front, rear, left and right projection surfaces; one consisting of the front, left, top and bottom surfaces; one consisting of the front, right, bottom and top surfaces; and so on. Any arrangement works as long as the four projection surfaces are connected to one another so as to enclose a three-dimensional projection space.
Each projection surface is served by one or more projection devices, providing the basis for projecting the subsequent scene pictures. Assigning several projection devices to a single surface keeps the picture sharp even on a long surface (for example 6 m or 8 m). For example, an 8 m projection surface can be split between 2 projection devices, each responsible for half: one projects the scene picture onto the front 4 m of the surface and the other onto the rear 4 m, preserving the projection quality.
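The split of a long surface between several projectors can be sketched as a small helper; the function name and the equal-span assumption are illustrative, not taken from the patent.

```python
def projector_spans(plane_length_m: float, n_projectors: int):
    """Divide one projection surface into equal spans, one per projector.

    Assumes the projectors share the surface evenly, as in the example of
    an 8 m surface split between 2 projectors (4 m each).
    """
    span = plane_length_m / n_projectors
    return [(i * span, (i + 1) * span) for i in range(n_projectors)]
```

For the example above, `projector_spans(8.0, 2)` yields the two 4 m halves of the surface.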
The method comprises the following steps:
s101, position reference information corresponding to the watching position and a virtual scene corresponding to a four-fold screen field of the current projection size are obtained.
Specifically, after a viewer enters the viewing space (i.e. the inner side of the four-fold screen field), the viewing position is obtained from a mobile terminal the viewer carries; the mobile terminal performs indoor positioning. It can be a mobile phone, tablet or smart wristband, with the indoor-positioning function integrated into a device the viewer already uses every day; alternatively, a dedicated handheld terminal with integrated indoor positioning can be produced.
Because the projection size of the four-fold screen field is variable, the corresponding virtual scene must be selected according to the sizes of the field's four projection surfaces, ensuring a realistic effect when it is later projected onto the field.
For example: the four-fold screen field consists of the front, left, right and bottom projection surfaces, with corresponding sizes of 3 × 2.8 m, 4 × 2.8 m and 4 × 3 m respectively; the virtual scene whose projection effect matches each of these sizes is then obtained.
There are various ways to vary the size of the four-fold screen field, for example:
1. One or more projection surfaces are movable surfaces that slide along guide rails, changing the projectable range of the field.
2. One or more projection surfaces are movable surfaces that fold and retract like a folding fan; folding or retracting them as required adjusts the projectable range of the field.
3. Multiple pull-down curtains are mounted in a grid above the side projection surfaces (front, left, right, rear), each curtain having a preset width (e.g. 50 cm). When a projection surface is needed in a given direction, the curtains at the corresponding positions are lowered; the number lowered depends on the required dimensions in that direction. Taking the front projection surface as an example, for a surface 3 m long and curtains 50 cm wide, 6 curtains are lowered to form the projection surface.
Of course, other ways of varying the projection size of the four-fold screen field are possible and are not limited here.
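The curtain arithmetic above (a 3 m surface covered by 50 cm curtains needs 6 of them) can be sketched as follows; the helper name is hypothetical, and it simply rounds the required count up.

```python
import math

def curtains_needed(plane_length_m: float, curtain_width_m: float = 0.5) -> int:
    """Number of pull-down curtains required to span a projection surface.

    Assumes curtains hang side by side without overlap; any partial
    remainder still needs one more curtain.
    """
    return math.ceil(plane_length_m / curtain_width_m)
```

For the 3 m front surface in the example, `curtains_needed(3.0)` returns 6.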
It should be noted that the direction of any one projection surface may be set to be the front direction, and the directions of the other projection surfaces are determined according to the position of the projection surface located in the front direction.
S102, the azimuth viewing angles of the viewing position in four azimuths are calculated by combining the position reference information.
Specifically, a person's perspective viewing angle in each azimuth varies with position: at different positions, the picture seen when looking at the same object in the same direction differs, and the pictures differ precisely because the perspective viewing angle changes.
The position information of the watching position comprises X-axis coordinate information, Y-axis coordinate information and Z-axis coordinate information, and a plurality of azimuth viewing angles can be calculated through the position information of the watching position; for example: a forward azimuth viewing angle, a rearward azimuth viewing angle, a left azimuth viewing angle, a right azimuth viewing angle, an upper azimuth viewing angle, and a lower azimuth viewing angle.
One projection surface is set as the front projection surface according to actual requirements, and the orientations of the other three projection surfaces are confirmed according to the positions of the projection surfaces in front. For example: the projection plane located on the left side of the projection plane located in front is oriented to the left.
S103, the virtual scene generates a virtual picture corresponding to each azimuth angle according to each azimuth angle.
Specifically, the virtual scene is a complete picture, such as a decorated-house scene, a model-home display scene, or a merchandise display scene. The virtual scene is cut in three-dimensional space: once the azimuth viewing angles of the viewing position have been calculated, combining the forward azimuth viewing angle with the scene cuts out the forward virtual picture; the rear, left, right, upper and lower virtual pictures can be obtained in the same way.
Because only the four-fold screen field is applied in the embodiment, only the virtual pictures corresponding to four directions are cut.
S104, the virtual pictures of the four azimuth viewing angles are fused into a scene picture of a virtual scene watched at the watching position, and the scene picture is projected to a four-fold screen field.
Specifically, after the virtual pictures in the four directions are obtained, the virtual pictures in the four directions are seamlessly spliced and fused into a complete scene picture viewed at the viewing position, and the complete scene picture is projected to a four-fold screen field for a user to stand at the viewing position to view.
In this embodiment, when the position reference information corresponding to the viewing position is obtained, the position reference information may be two types of position information:
in the first type, the position reference information is virtual position information:
converting viewing position information in a viewing space (namely in a four-fold screen field) into virtual position information in a virtual scene according to a corresponding relation between space coordinates of the viewing space and virtual coordinates of the virtual scene; and uses the virtual location information as location reference information.
Specifically, under the condition of real-time rendering, viewing position information is converted into virtual position information, and the calculation of the azimuth angle of view and the generation of a virtual picture are completed through the virtual position information. The essence of real-time rendering is the real-time computation and output of graphics data.
In the second type, the position reference information is position pixel information:
converting viewing position information in the viewing space into position pixel information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the picture pixels of the virtual scene; and the positional pixel information is used as positional reference information.
Specifically, under the condition of offline rendering, viewing position information is converted into position pixel information, and the calculation of the azimuth viewing angle and the generation of a virtual picture are completed through the position pixel information.
The scene model of the virtual scene and the space model of the viewing space (i.e. the model of the four-fold screen field) are kept in a fixed proportional relationship; the viewing space is a four-fold screen field composed of any 4 projection surfaces as shown in fig. 6. The specific ratio is 1:1.
When the size of a projection surface in the real four-fold screen field (i.e. the projection size) changes, the scene model of the virtual scene adapts its size accordingly, so that the 1:1 ratio is maintained.
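Under the stated 1:1 ratio, mapping a viewing-space coordinate into the virtual scene is essentially the identity; the sketch below keeps a scale factor and origin as parameters in case a scene model is built at another ratio. The function and parameter names are illustrative, not from the patent.

```python
def to_virtual(pos, scale: float = 1.0, origin=(0.0, 0.0, 0.0)):
    """Map a viewing-space coordinate (x, y, z) into virtual-scene coordinates.

    With the 1:1 scene-to-space ratio this is the identity, up to the
    choice of origin.
    """
    return tuple(scale * (p - o) for p, o in zip(pos, origin))
```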
Four azimuth viewing angles are calculated for each viewing position; the same azimuth yields different viewing angles at different viewing positions, and different azimuth viewing angles yield different virtual pictures for the same azimuth. For a given viewing position, the four virtual pictures are seamlessly spliced into a complete scene picture and projected onto the four-fold screen field. Because the viewing angles change as the viewer moves, the viewer's perspective is updated in real time and the displayed stereoscopic scene picture is refreshed promptly, so the presented stereoscopic scene image is not distorted by changes of viewing position.
According to an embodiment of the present invention, as shown in fig. 2, a projection method based on a four-fold screen field includes: the four-fold screen field with the changeable projection size is composed of four projection surfaces, wherein the four-fold screen field comprises a projection surface positioned in front, and the other three projection surfaces are not positioned above and below the projection surface positioned in front.
The method comprises the following steps:
s201, acquiring position reference information corresponding to a viewing position and a virtual scene corresponding to a quadruple screen field with a current projection size;
s202, calculating the azimuth viewing angles of the viewing position in the four azimuths by combining the position reference information:
s212, calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
s222 calculates azimuth viewing angles of the remaining three azimuths according to the calculated angular relationships between the azimuths corresponding to the azimuth viewing angles and the other three azimuths.
Specifically, when the four azimuth viewing angles of the four-fold screen field need to be calculated, for example the front, rear, left and right viewing angles, the forward azimuth viewing angle can be computed from its viewing-angle formula. As shown in fig. 10, the forward azimuth viewing angle is FOV, where FOV = 2∠θ and tan θ = (L1/2 + s)/y; here L1 is the lateral length of the viewing space, i.e. the length of the front projection surface of the four-fold screen field, s is the lateral offset of the viewpoint from the centre of the viewing space, and y is the distance from the viewpoint to the front of the viewing space.
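The forward-viewing-angle formula, as reconstructed above (FOV = 2θ with tan θ = (L1/2 + s)/y), can be sketched directly; the function name and the degree-valued output are my additions.

```python
import math

def forward_fov_deg(l1: float, s: float, y: float) -> float:
    """Forward azimuth viewing angle in degrees.

    Follows the formula FOV = 2*theta, tan(theta) = (L1/2 + s)/y, where
    l1 is the front surface's lateral length, s the viewpoint's lateral
    offset from the centre, and y its distance to the front surface.
    """
    theta = math.atan((l1 / 2.0 + s) / y)
    return math.degrees(2.0 * theta)
```

For a centred viewpoint 2 m from a 4 m front surface, `forward_fov_deg(4.0, 0.0, 2.0)` gives a 90° forward viewing angle.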
Whatever the viewing position, the azimuth angle between the forward azimuth viewing angle and the left (or right) azimuth viewing angle is a fixed 180°; once the forward azimuth viewing angle has been calculated, subtracting it from 180° gives the left or right azimuth viewing angle.
As shown in fig. 10, the azimuth angle between the forward and right azimuth viewing angles is a fixed 180°, so the right azimuth viewing angle equals 180° minus the forward one. Since the forward and rear azimuth viewing angles are equal and the full angle around viewpoint o is 360°, the left azimuth viewing angle can be calculated once the right azimuth viewing angle and ∠aob are known.
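The fixed angular relations described above — front plus left (or right) equals 180°, front equal to rear, and the four angles closing the 360° circle around the viewpoint — can be sketched as a helper; the dict interface is my choice, not the patent's.

```python
def derive_azimuth_views(front_deg: float) -> dict:
    """Derive the rear, left and right azimuth viewing angles from the
    forward one, per the fixed relations stated in the text."""
    left = 180.0 - front_deg    # front + left is a fixed 180 degrees
    right = 180.0 - front_deg   # front + right is a fixed 180 degrees
    rear = 360.0 - front_deg - left - right  # closes the circle; equals front
    return {"front": front_deg, "rear": rear, "left": left, "right": right}
```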
Optionally, the projection method based on the four-fold screen field further includes: generating a plurality of orthogonal cameras and binding the orthogonal cameras with each other; each orthogonal camera is perpendicular to a plane corresponding to the position of the orthogonal camera; and intercepting a virtual picture corresponding to each position in the virtual scene by using the orthogonal camera.
Specifically, binding the orthogonal cameras with each other means that the coordinates of the orthogonal cameras are the same, i.e., they are located at the same point. Each orthogonal camera is perpendicular to its corresponding plane (for example, the plane corresponding to the front azimuth); the size and position of the azimuth viewing angle correspond to a unique view cone, part of the virtual scene picture is intercepted through the view cone, and the plurality of pictures are seamlessly spliced to obtain an integral stereoscopic space picture.
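The binding described above can be sketched minimally as follows; the names and data structures here are illustrative assumptions, not from the patent. All cameras share one coordinate object (the viewpoint), and each faces perpendicular to its projection plane:

```python
from dataclasses import dataclass

# hypothetical azimuth-to-normal mapping for the four side planes
DIRECTIONS = {"front": (0, 1, 0), "rear": (0, -1, 0),
              "left": (-1, 0, 0), "right": (1, 0, 0)}

@dataclass
class OrthogonalCamera:
    azimuth: str
    position: list  # shared, mutable viewpoint coordinates

    @property
    def facing(self):
        # each camera is perpendicular to the plane of its azimuth
        return DIRECTIONS[self.azimuth]

def bind_cameras(viewpoint):
    shared = list(viewpoint)  # one shared object: moving it moves every camera
    return [OrthogonalCamera(a, shared) for a in DIRECTIONS]
```

Because the position list is shared, updating the viewpoint once repositions all four bound cameras together, which is the point of the binding.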
S203 generating a virtual image corresponding to each azimuth angle by the virtual scene according to each azimuth angle includes:
s213, when the length of a view angle area corresponding to the azimuth view angle of one azimuth is larger than the length of a projection plane corresponding to the azimuth, calculating a cutting area corresponding to the azimuth view angle, and cutting out a corresponding virtual picture in the virtual scene according to the cutting area and the azimuth view angle corresponding to the cutting area;
s223, when the length of the view angle area corresponding to the azimuth view angle of one azimuth is not more than the length of the projection plane corresponding to the azimuth, cutting out a virtual picture corresponding to the azimuth view angle in the virtual scene.
Specifically, after the four azimuth viewing angles are calculated, the length of the viewing angle region corresponding to each azimuth viewing angle is analyzed; if the length of a viewing angle region exceeds the projection length of the corresponding projection surface, a cutting area needs to be calculated and the normal picture cut.
As shown in fig. 10 and 11, since the length of the viewing angle region corresponding to the front azimuth viewing angle is 2s more than the length of the front projection surface and the length of the viewing angle region corresponding to the rear azimuth viewing angle is 2s more than the length of the rear projection surface, it is necessary to calculate the cut region corresponding to both, and cut out the virtual screen to be projected on the corresponding projection surface in the virtual scene according to the orientation viewing angles corresponding to the cut region and the cut region.
Similarly, referring to fig. 10 and 11, since the lengths of the viewing angle regions corresponding to the left and right azimuth viewing angles are not greater than the lengths of their respective corresponding projection surfaces, it is only necessary to directly cut out the virtual frames corresponding to the viewing angle regions in the virtual scene without calculating the clipping regions.
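The decision in the two steps above can be sketched as a small helper; this is my illustration under the text's definitions, not the patent's code. The viewing angle region length is where the view cone meets the plane of the projection surface:

```python
import math

def view_region_length(fov_deg, distance):
    # length of the viewing angle region on a plane at the given viewing distance
    return 2 * distance * math.tan(math.radians(fov_deg / 2))

def needs_cutting_area(fov_deg, distance, plane_length):
    # a cutting area is needed only when the region is longer than the projection surface
    return view_region_length(fov_deg, distance) > plane_length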
As shown in figs. 8 and 9 and figs. 12 and 13, according to the actual display situation, a virtual picture corresponding to the front azimuth viewing angle may be cut out from the virtual scene, a virtual picture corresponding to the rear azimuth viewing angle may be cut out from the virtual scene, or virtual pictures corresponding to both the front and rear azimuth viewing angles may be cut out from the virtual scene; these virtual pictures are the normal pictures of the front and rear azimuths, cut out without a cutting area.
As shown in figs. 12 and 13, since the lengths of the viewing angle regions corresponding to the left azimuth viewing angle and the right azimuth viewing angle are greater than the lengths of the projection surfaces corresponding to those azimuths, the normal pictures need to be cut when the projection surfaces of these two azimuths are used.
If the four-fold screen field adopts four projection surfaces of the front, the left, the right and the rear, and the viewing position is located at the viewpoint as shown in fig. 10, the virtual pictures corresponding to the projection surfaces of the front and the rear need to be obtained by calculating the clipping area and clipping in the virtual scene, and the virtual pictures corresponding to the projection surfaces of the left and the right are obtained by directly clipping the virtual pictures in the virtual scene.
Specifically, when the viewing position is the central position, as shown in fig. 7, the azimuth viewing angles of every pair of opposite azimuths are equal, and the virtual picture cut from the virtual scene in each azimuth (i.e., any four azimuths of front, left, right, rear, top and bottom) at the central position is a normal picture; no cutting area is required.
S204, the virtual pictures of the four azimuth viewing angles are fused into a scene picture of a virtual scene watched at the watching position, and the scene picture is projected to a four-fold screen field.
In this embodiment, when the virtual picture corresponding to each azimuth is cut out in the virtual scene according to the four azimuths corresponding to the four-fold screen field, each orthogonal camera intercepts the virtual picture corresponding to its azimuth in the virtual scene by combining its corresponding azimuth viewing angle and the position reference information.
When the corresponding virtual picture is cut out in the virtual scene according to the cutting area and the azimuth angle corresponding to the cutting area, the azimuth angle, the cutting area and the position reference information corresponding to each orthogonal camera are combined, and each orthogonal camera cuts out the virtual picture corresponding to each azimuth in the virtual scene.
According to an embodiment provided by the present invention, as shown in fig. 3, a projection method based on a four-fold screen field includes: the four-fold screen field with the variable projection size is composed of four projection surfaces.
The method comprises the following steps:
s301, position reference information corresponding to the watching position and a virtual scene corresponding to the quadruple screen field with the current projection size are obtained.
S302 calculating the azimuth viewing angles of the viewing position at four azimuths by combining the position reference information includes: s312, the position reference information and the four-azimuth view angle calculation formula are combined to calculate the azimuth view angles of the viewing position in the four azimuths, respectively.
Specifically, when four azimuth viewing angles of the four-fold screen field need to be calculated, for example, the azimuth viewing angles of the front, rear, right and upper four azimuths; the front azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the front azimuth viewing angle; the rear azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the rear azimuth viewing angle; the right side direction visual angle can be calculated by utilizing the visual angle calculation formula of the right side direction visual angle, and the upper side direction visual angle can be calculated by utilizing the visual angle calculation formula of the upper side direction visual angle.
As shown in fig. 10, the front azimuth viewing angle is the FOV, FOV = 2∠θ, and tanθ = (L1/2 + s)/y; where L1 is the lateral length of the viewing space, i.e., the lateral length of the front projection plane in the four-fold screen field, s is the lateral offset from the center of the viewing space, and y is the viewing distance directly in front within the viewing space.
As shown in fig. 11, the rear azimuth viewing angle is the FOV, FOV = 2∠θ, and tanθ = (L1/2 + s)/(L2 − y); where L2 is the longitudinal length of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance directly in front within the viewing space.
When the position information of the viewpoint o is known, the azimuth viewing angle of each azimuth can be calculated, and the azimuth viewing angles corresponding to the left and right sides of the viewpoint can also be calculated by a formula, which is not described herein again.
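The per-azimuth formulas above can be combined into one sketch. The front, rear and left formulas follow the text; the right-azimuth formula and the sign conventions for the offsets s and p are symmetric assumptions of mine, not from the patent:

```python
import math

def fov_deg(half_length, distance):
    # FOV = 2*theta with tan(theta) = half_length / distance
    return 2 * math.degrees(math.atan(half_length / distance))

def azimuth_fovs(L1, L2, s, p):
    """L1/L2: lateral/longitudinal lengths of the viewing space;
    s/p: lateral/longitudinal offsets from its center (assumed sign conventions)."""
    y = L2 / 2 - p          # viewing distance to the front plane (assumption)
    x = L1 / 2 - s          # viewing distance to the left plane (assumption)
    return {
        "front": fov_deg(L1 / 2 + s, y),        # tan(theta) = (L1/2 + s) / y
        "rear":  fov_deg(L1 / 2 + s, L2 - y),   # tan(theta) = (L1/2 + s) / (L2 - y)
        "left":  fov_deg(L2 / 2 + p, x),        # tan(alpha) = (L2/2 + p) / x
        "right": fov_deg(L2 / 2 + p, L1 - x),   # symmetric assumption
    }
```

At the center of a square viewing space the four angles are each 90°, so the four view cones together cover the full 360° around the viewpoint.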
S303, generating a virtual picture corresponding to each azimuth angle by the virtual scene according to each azimuth angle;
s304, the virtual pictures of the four azimuth viewing angles are fused into a scene picture of a virtual scene watched at the watching position, and the scene picture is projected to a four-fold screen field.
In the embodiment, the azimuth viewing angle corresponding to each azimuth is respectively calculated by using the viewing angle calculation formula corresponding to each azimuth; the accuracy of each azimuth viewing angle can be improved; the accuracy of other azimuth viewing angles cannot be influenced due to the fact that one azimuth viewing angle is calculated wrongly.
According to an embodiment provided by the present invention, as shown in fig. 4, a projection method based on a four-fold screen field includes: the four-fold screen field with the variable projection size is composed of four projection surfaces.
The method comprises the following steps:
s401, position reference information corresponding to the watching position and a virtual scene corresponding to a quadruple screen field with the current projection size are obtained.
S402 calculates the azimuth viewing angles of the viewing position at four azimuths by combining the position reference information.
S403, generating a plurality of orthogonal cameras and binding the orthogonal cameras with one another; each orthogonal camera is perpendicular to a plane corresponding to the position of the orthogonal camera; and intercepting a virtual picture corresponding to each position in the virtual scene by using the orthogonal camera.
Specifically, binding the orthogonal cameras with each other means that the coordinates of the orthogonal cameras are the same and are located at the same point.
S404, the virtual scene generates a virtual picture corresponding to each azimuth angle according to each azimuth angle, which includes the following four cases:
the first method comprises the following steps:
s414, when the X coordinate information in the position reference information is on the X-axis center line and the Y coordinate information in the position reference information is not on the Y-axis center line, cutting out, in the virtual scene, the corresponding virtual pictures according to the azimuth viewing angles corresponding to the X axis;
s424, when the X coordinate information in the position reference information is on the X-axis center line and the Y coordinate information in the position reference information is not on the Y-axis center line, respectively calculating the cutting areas corresponding to the azimuth viewing angles that correspond to the coordinate information on the remaining axes in the position reference information, and cutting out the corresponding virtual pictures in the virtual scene according to the cutting areas and the azimuth viewing angles corresponding to the cutting areas.
Specifically, the X-axis center line is the line at 1/2 of the transverse length of the viewing space, parallel to the Y-axis; if the viewing space is 4 m long and 2 m wide, the X-axis center line is the straight line at a width of 1 m, parallel to the Y-axis.
The X-axis centerline is a line of 1/2 transverse length of the viewing space and parallel to the Y-axis; when the viewing space is expressed in pixels, the specification is 800dp in length and 400dp in width, and the X-axis center line is a straight line 200dp in width and parallel to the Y-axis.
When the X coordinate information in the position reference information is 1 m or 200 dp, if the X axis corresponds to the front and rear azimuths, then according to the actual display situation (i.e., the positions of the projection surfaces of the four-fold screen field), a virtual picture corresponding to the front azimuth viewing angle can be cut out from the virtual scene, a virtual picture corresponding to the rear azimuth viewing angle can be cut out, or virtual pictures corresponding to both the front and rear azimuth viewing angles can be cut out; these virtual pictures are the normal pictures of the front and rear azimuths, cut out of the virtual scene without a cutting area.
When the position reference information contains Y coordinate information and Z coordinate information, if the Y axis corresponds to a left direction and a right direction, the Z axis corresponds to an upper direction and a lower direction.
The pictures corresponding to the left direction view angle, the right direction view angle, the upper direction view angle and the lower direction view angle are not normal pictures any more, and the normal pictures need to be cut.
And according to the requirement of the actual display condition, cutting out the virtual picture corresponding to each direction after selecting a plurality of directions from a left direction view angle, a right direction view angle, an upper direction view angle and a lower direction view angle.
And the second method comprises the following steps:
s434, when the X coordinate information in the position reference information is not on the X-axis center line and the Y coordinate information in the position reference information is on the Y-axis center line, cutting out, in the virtual scene, the corresponding virtual pictures according to the azimuth viewing angles corresponding to the Y axis;
s444, when the X coordinate information in the position reference information is not on the X-axis center line and the Y coordinate information in the position reference information is on the Y-axis center line, respectively calculating the cutting areas corresponding to the azimuth viewing angles that correspond to the coordinate information on the remaining axes in the position reference information, and cutting out the corresponding virtual pictures in the virtual scene according to the cutting areas and the azimuth viewing angles corresponding to the cutting areas.
Specifically, when the Y coordinate information in the position reference information is 2 m or 400 dp, if the left and right azimuths corresponding to the Y axis are required according to the actual display situation, the virtual picture corresponding to the left azimuth viewing angle can be cut out in the virtual scene, the virtual picture corresponding to the right azimuth viewing angle can be cut out, or the virtual pictures corresponding to both the left and right azimuth viewing angles can be cut out; these virtual pictures are the normal pictures of the left and right azimuths, cut out of the virtual scene without a cutting area.
When the position reference information contains X coordinate information and Z coordinate information, if the X axis corresponds to the front and rear directions, the Z axis corresponds to the upper and lower directions.
The pictures corresponding to the front azimuth viewing angle, the rear azimuth viewing angle, the upper azimuth viewing angle and the lower azimuth viewing angle are no longer normal pictures, and the normal pictures need to be cut.
According to the requirement of an actual display condition, after selecting a plurality of orientations from a front orientation view angle, a rear orientation view angle, an upper orientation view angle and a lower orientation view angle, cutting out virtual pictures corresponding to all the orientations.
And the third is that:
s454, when the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, respectively calculating the cutting areas corresponding to all the azimuth viewing angles; s464 cutting out a corresponding virtual picture in the virtual scene according to each azimuth angle and cutting area.
Specifically, when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information in the position reference information is not on the Y-axis central line, the frames corresponding to the front azimuth viewing angle, the rear azimuth viewing angle, the left azimuth viewing angle, the right azimuth viewing angle, the upper azimuth viewing angle, and the lower azimuth viewing angle are no longer normal frames, and the normal frames need to be cut.
According to the requirement of an actual display condition, after selecting a plurality of azimuths from a front visual angle, a rear visual angle, a left azimuth visual angle, a right azimuth visual angle, an upper azimuth visual angle and a lower azimuth visual angle, cutting out virtual pictures corresponding to all azimuths.
And fourthly:
when the X coordinate information is on the X-axis central line and the Y coordinate information is also on the Y-axis central line, the virtual picture formed by cutting the virtual scene in each direction (namely any five directions of front, left, right, back, upper and lower) is a normal picture, and cutting is not needed.
S405, the virtual pictures of the four azimuth viewing angles are fused into a scene picture of a virtual scene watched at the watching position, and the scene picture is projected to a four-fold screen field.
In this embodiment, when the virtual picture corresponding to each position is cut out from the virtual scene, the position view angle and the position reference information corresponding to each orthogonal camera are combined, and each orthogonal camera intercepts the virtual picture corresponding to each position in the virtual scene.
When the corresponding virtual picture is cut out in the virtual scene according to the cutting area and the azimuth angle corresponding to the cutting area, the azimuth angle, the cutting area and the position reference information corresponding to each orthogonal camera are combined, and each orthogonal camera cuts out the virtual picture corresponding to each azimuth in the virtual scene.
In the above embodiments, when the cropping area corresponding to each azimuth viewing angle is calculated, there are two calculation schemes:
the first calculation scheme is as follows:
calculating a view angle picture parameter corresponding to each direction according to the position reference information and the direction view angle corresponding to each direction; and calculating the cutting area corresponding to each direction according to the visual angle picture parameter and the viewing space parameter corresponding to each direction.
Specifically, under the condition that the azimuth viewing angle is known, the position reference information contains the viewing distance; the length of the viewing angle region at each orientation at the viewing position can be calculated as a viewing angle picture parameter.
The viewing space parameter corresponding to each orientation refers to the transverse length of the projection surface corresponding to the orientation, and the viewing space parameter is known because the projectable length is fixed. And subtracting the length of the projection plane from the length of the view angle area to obtain a cutting area corresponding to each direction.
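The first calculation scheme reduces to one subtraction. A sketch under the definitions above (the view angle picture parameter is the region length at the viewing distance; the viewing space parameter is the transverse length of the projection surface):

```python
import math

def cutting_area_length(fov_deg, distance, plane_length):
    # view angle picture parameter: length of the view region at the viewing distance
    region = 2 * distance * math.tan(math.radians(fov_deg / 2))
    # subtract the projection surface length; zero when the region fits the surface
    return max(region - plane_length, 0.0)
```

With the fig. 10 numbers (L1 = 4, s = 0.5, y = 2), the front FOV is 2·atan(1.25) and the cutting area length is 1.0, i.e., exactly 2s.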
The second calculation scheme is as follows:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
Specifically, as shown in fig. 10, the virtual picture corresponding to the front azimuth viewing angle has a length of 2s to be cut; the front azimuth viewing angle is the FOV, FOV = 2∠θ, and tanθ = (L1/2 + s)/y; where L1 is the lateral length of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance directly in front within the viewing space.
As shown in fig. 11, the virtual picture corresponding to the rear azimuth viewing angle has a length of 2s to be cut; the rear azimuth viewing angle is the FOV, FOV = 2∠θ, and tanθ = (L1/2 + s)/(L2 − y); where L2 is the longitudinal length of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance directly in front within the viewing space.
As shown in fig. 12, the virtual picture corresponding to the left azimuth viewing angle has a length of 2p to be cut; the left azimuth viewing angle is the FOV, FOV = 2∠α, and tanα = (L2/2 + p)/x; where L2 is the longitudinal length of the viewing space, p is the longitudinal offset from the center position of the viewing space, and x is the viewing distance to the left within the viewing space.
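The second scheme needs no trigonometry at all: per figs. 10 through 12, a lateral offset s from the preset (center) position implies a cut length of 2s for the front and rear pictures, and a longitudinal offset p implies 2p for the left and right pictures. A one-line sketch of that rule:

```python
def cutting_length_from_offset(offset):
    # position deviation from the preset center -> length to cut from the picture
    return 2 * abs(offset)
```

This is why the scheme only has to analyze the position deviation information rather than recompute the full view geometry.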
According to an embodiment provided by the present invention, a four-fold screen yard comprises: each projection surface is connected with at least two projection surfaces in the other three projection surfaces; and the projection size of at least one projection surface is variable.
Specifically, there are various possible positions for the four projection surfaces, for example: a four-fold screen field consisting of the front, lower, left and right projection surfaces; a four-fold screen field consisting of the front, right, upper and lower projection surfaces; or a four-fold screen field consisting of the front, right, rear and upper projection surfaces. Any arrangement works as long as the four projection surfaces are connected with each other to form a whole enclosing a stereoscopic space that serves as the projection region.
Each projection surface is correspondingly provided with one or more projection devices, providing a foundation for the projection of subsequent scene pictures. When a plurality of projection devices are responsible for one projection surface, a high-definition projection effect can be obtained even on a long projection surface (for example, 6 or 8 meters). For example: a projection surface 8 meters long is served by 2 projection devices, each responsible for half of it; one device projects the scene picture onto the front 4 meters of the surface and the other onto the rear 4 meters, so that the projection effect is ensured.
There are various ways to vary the projection size of the four-fold screen field, for example:
1. One or more projection surfaces are movable projection surfaces that can move along a guide rail; the projection size of the projection surface is changed by moving it, so that the projectable range of the four-fold screen field changes.
2. One or more projection surfaces are movable projection surfaces that can be folded and contracted like a folding fan; according to actual requirements, the corresponding movable projection surface is folded or contracted to change its projection size, thereby adjusting the projectable range of the four-fold screen field.
3. A plurality of screens are arranged in a grid above the side projection surfaces (such as front, left, right and rear), with the width of each screen set as required (for example 50 cm). When a projection surface in the front, left, right or rear direction is needed, the screens at the corresponding positions are put down; the specific number of screens to put down is determined by the required size in that direction. Taking the front projection surface as an example, if the surface is 3 meters long and a single screen is 50 cm wide, 6 screens are put down to form the projection surface.
Of course, other ways of varying the projection size of the four-fold screen field are also included, and are not limited herein.
The projection surface with the variable projection size is a movable wall surface, and the projection size of the projection surface is changed in a moving mode; or the projection surface with the variable projection size is a foldable and contractible movable wall surface, and the projection size of the projection surface is changed in a folding and contraction mode; or the projection surface with the variable projection size is a plurality of pull-down curtains, and the projection size of the projection surface is changed in a folding and unfolding mode.
In the four-fold screen field, one projection surface may be movable while the other three are fixed; two projection surfaces may be movable and two fixed; three may be movable and one fixed; or all four projection surfaces may be movable, as selected according to actual demand.
The four-fold screen field of the embodiment can change the size of the projection picture according to the requirements of users, can be flexibly applied to users with different requirements, and improves the use experience of the users.
According to an embodiment of the present invention, as shown in fig. 5, a projection system based on a four-fold screen field includes: the intelligent device 10, the projection device 20 and the four-fold screen field; the intelligent device 10 is in communication connection with the projection device 20;
the four-fold screen field includes: each projection surface is connected with at least two projection surfaces in the other three projection surfaces; and the projection size of at least one projection surface is variable.
The projection system based on four-fold screen field further comprises: and a position obtaining module 30, configured to obtain position reference information corresponding to the viewing position.
Specifically, the position acquisition module can be arranged in a mobile terminal, or in the intelligent device, to acquire the viewing position of the user. If the position acquisition module is arranged in the intelligent device, the intelligent device can move along with the user, so that the projected scene picture can be conveniently changed according to the position of the user.
The smart device 10 includes:
and the scene obtaining module 11 is configured to obtain a virtual scene corresponding to the quadruple screen field of the current projection size.
And the viewing angle calculating module 12 is electrically connected with the scene acquiring module 11 and is used for calculating the azimuth viewing angles of the watching positions in four azimuths by combining the position reference information.
And the picture generation module 13 is electrically connected with the view angle calculation module 12 and is used for generating a virtual picture corresponding to each azimuth view angle according to each azimuth view angle in the virtual scene.
Specifically, a plurality of orthogonal cameras are generated and bound with each other (for example, the coordinates of each orthogonal camera are the same); each orthogonal camera is perpendicular to a plane corresponding to the position of the orthogonal camera; and intercepting a virtual picture corresponding to each position in the virtual scene by using the orthogonal camera.
And the picture fusion module 14 is configured to fuse the virtual pictures at the four azimuth viewing angles into a scene picture of a virtual scene viewed at the viewing position.
The projection device 20 projects the scene picture onto the four-fold screen field, and the inner side of the four-fold screen field forms a viewing position.
In addition to the above, the present embodiment further includes the following contents:
In one way, when the four-fold screen field includes a projection surface located in front and the other three projection surfaces are not located above or below the front projection surface, the viewing angle calculation module 12, for calculating the azimuth viewing angles of the viewing position in combination with the position reference information, includes:
the viewing angle calculation module 12 is configured to calculate an azimuth viewing angle of one azimuth of the viewing position in combination with the position reference information; and calculating the azimuth viewing angles of the rest three azimuths according to the calculated angular relationship between the azimuth corresponding to the azimuth viewing angle and the other three azimuths.
In another mode, the angle-of-view calculating module 12, configured to calculate the azimuth angles of the viewing position at four orientations by combining the position reference information, includes: the viewing angle calculation module 12 calculates the viewing angles of the viewing position in the four directions respectively by combining the position reference information and the four-direction viewing angle calculation formulas.
In one mode, the picture generating module 13, configured to generate, by the virtual scene according to each azimuth angle, a virtual picture corresponding to each azimuth angle, includes:
The picture generation module 13 is configured to calculate a cutting area corresponding to an azimuth viewing angle when the viewing angle region length corresponding to the azimuth viewing angle of an azimuth is greater than the projection surface length corresponding to the azimuth, and cut out a corresponding virtual picture in the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area; and to cut out a virtual picture corresponding to the azimuth viewing angle in the virtual scene when the viewing angle region length corresponding to the azimuth viewing angle of an azimuth is not greater than the projection surface length corresponding to the azimuth.
In another mode, the picture generating module 13, configured to generate, by the virtual scene according to each azimuth angle, a virtual picture corresponding to each azimuth angle, includes:
the picture generation module 13 is configured to cut the position reference information into corresponding virtual pictures according to the azimuth viewing angle corresponding to the X axis in the virtual scene when the X coordinate information is on the X axis central line and the Y coordinate information is not on the Y axis central line; and respectively calculating cutting areas corresponding to the orientation visual angles corresponding to the coordinate information on the residual axes in the position reference information, and cutting out corresponding virtual pictures in the virtual scene according to the cutting areas and the orientation visual angles corresponding to the cutting areas.
The picture generation module 13 is configured to cut out, in the virtual scene, the corresponding virtual pictures according to the azimuth viewing angles corresponding to the Y axis when the X coordinate information in the position reference information is not on the X-axis center line and the Y coordinate information in the position reference information is on the Y-axis center line; and to respectively calculate the cutting areas corresponding to the azimuth viewing angles that correspond to the coordinate information on the remaining axes in the position reference information, and cut out the corresponding virtual pictures in the virtual scene according to the cutting areas and the azimuth viewing angles corresponding to the cutting areas.
The picture generation module 13 is used for respectively calculating the cutting areas corresponding to all the azimuth viewing angles when the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line; and cutting out a corresponding virtual picture in the virtual scene according to each azimuth angle and cutting area.
In addition, when the X coordinate information is on the X-axis center line and the Y coordinate information is also on the Y-axis center line, the virtual picture cropped from the virtual scene in each azimuth (namely, any four of the directions front, left, right, back, up and down) is a normal picture, and no cropping-area calculation is needed.
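The center-line cases above amount to a simple branch on the viewer's coordinates. A minimal Python sketch of that decision logic (the function name and the case labels are illustrative assumptions, not part of the patent):

```python
def crop_plan(x, y, x_center, y_center):
    """Decide which azimuth pictures can be cropped directly and which
    need a computed cropping area, per the embodiment's four cases."""
    on_x = (x == x_center)  # viewer on the X-axis center line?
    on_y = (y == y_center)  # viewer on the Y-axis center line?
    if on_x and on_y:
        # Centered viewer: every azimuth yields a normal picture.
        return {"direct": ["all"], "computed": []}
    if on_x:
        # The X-axis azimuth is cropped directly; the rest need areas.
        return {"direct": ["x-axis"], "computed": ["remaining axes"]}
    if on_y:
        # The Y-axis azimuth is cropped directly; the rest need areas.
        return {"direct": ["y-axis"], "computed": ["remaining axes"]}
    # Off both center lines: every azimuth needs a cropping area.
    return {"direct": [], "computed": ["all azimuths"]}
```

For example, `crop_plan(0, 1, 0, 0)` reports that only the X-axis azimuth can be cropped directly.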
In one mode, calculating the cropping area corresponding to an azimuth viewing angle specifically includes: calculating a viewing-angle picture parameter for each azimuth according to the position reference information and the azimuth viewing angle of that azimuth; and calculating the cropping area of each azimuth according to its viewing-angle picture parameter and the viewing-space parameter.
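As a rough illustration of this mode, the length of scene visible at a projection surface can be derived from the viewer-to-surface distance and the azimuth viewing angle; whatever exceeds the surface length is the portion a cropping area would remove. The formula and parameter names below are assumptions for illustration only — the patent does not disclose the actual parameter definitions:

```python
import math

def visible_and_excess(distance_to_surface, azimuth_fov_deg, surface_length):
    """Return the visible length at the projection surface (a stand-in for
    the viewing-angle picture parameter) and the excess length to crop."""
    visible = 2.0 * distance_to_surface * math.tan(math.radians(azimuth_fov_deg) / 2.0)
    excess = max(0.0, visible - surface_length)
    return visible, excess
```

With a 90° azimuth viewing angle at 1 m from a 1.5 m surface, the visible length is 2 m, leaving 0.5 m to crop.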
In another mode, calculating the cropping area corresponding to an azimuth viewing angle specifically includes: analyzing the position deviation of the position reference information relative to preset position information, and calculating the corresponding cropping area from that position deviation.
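A minimal sketch of this second mode, assuming the cropping rectangle simply shifts with the viewer's deviation from the preset position (the rectangle format and the linear shift are illustrative assumptions, not disclosed in the patent):

```python
def shift_cropping_area(preset_pos, actual_pos, base_area):
    """Offset a base cropping rectangle (x0, y0, x1, y1) by the viewer's
    position deviation relative to the preset (x, y) position."""
    dx = actual_pos[0] - preset_pos[0]
    dy = actual_pos[1] - preset_pos[1]
    x0, y0, x1, y1 = base_area
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```

A viewer standing 1 m right and 2 m forward of the preset position would shift the cropped window by the same offsets.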
The position acquisition module 30, configured to acquire the position reference information corresponding to the viewing position, specifically: converts the viewing position information in the viewing space into virtual position information in the virtual scene according to the correspondence between the space coordinates of the viewing space and the virtual coordinates of the virtual scene, and uses the virtual position information as the position reference information; or converts the viewing position information in the viewing space into position pixel information in the virtual scene according to the correspondence between the space coordinates of the viewing space and the picture pixels of the virtual scene, and uses the position pixel information as the position reference information.
The scene model of the virtual scene has a specific proportional relationship with the spatial model of the viewing space (i.e., the spatial model of the four-fold screen field); in this embodiment the proportional relationship is 1:1.
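Both acquisition modes reduce to a coordinate mapping. A sketch under the stated 1:1 proportional relationship (the metres-per-pixel resolution in the second mode is an assumed parameter, not given in the patent):

```python
def to_virtual_position(view_pos, scale=1.0):
    """Viewing-space coordinates -> virtual-scene coordinates (1:1 here)."""
    return tuple(c * scale for c in view_pos)

def to_pixel_position(view_pos, metres_per_pixel):
    """Viewing-space coordinates -> picture-pixel coordinates, assuming a
    known scene-picture resolution in metres per pixel."""
    return tuple(round(c / metres_per_pixel) for c in view_pos)
```

Either result can serve as the position reference information consumed by the viewing-angle calculation and picture generation modules.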
Specifically, the implementation process of this system embodiment is the same as that of the method embodiment above and is not repeated here. The smart device 20 may be a computer.
In this embodiment, the size of the four-fold screen field is variable and the projection range can be adjusted to actual requirements, making the field flexible and widely applicable; moreover, the scene picture projected onto the four-fold screen field changes with the viewing position, so that the viewer truly sees a stereoscopic scene image, greatly improving the user experience.
It should be noted that the above embodiments can be freely combined as required. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and refinements can be made without departing from the principle of the present invention, and such modifications and refinements shall also fall within the protection scope of the present invention.
Claims (11)
1. A projection method based on a four-fold screen field, characterized in that: the four-fold screen field, whose projection size is variable, consists of four projection surfaces;
the method comprises the following steps:
acquiring position reference information corresponding to a viewing position, and a virtual scene corresponding to the four-fold screen field at its current projection size;
calculating azimuth viewing angles of the viewing position in four azimuths by combining the position reference information;
generating, from the virtual scene, a virtual picture corresponding to each azimuth viewing angle;
and fusing the virtual pictures of the four azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the four-fold screen field.
2. The four-fold screen field based projection method of claim 1, wherein: the four-fold screen field comprises a front projection surface and three other projection surfaces that are not located above or below the front projection surface;
the calculating the azimuth viewing angles of the viewing position at four azimuths by combining the position reference information comprises:
calculating the azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
and calculating the azimuth viewing angles of the remaining three azimuths according to the angle relation between the azimuth corresponding to the calculated azimuth viewing angle and the other three azimuths.
3. The four-fold screen field based projection method of claim 1, wherein the calculating azimuth viewing angles of the viewing position in four azimuths by combining the position reference information comprises:
calculating the azimuth viewing angles of the viewing position in the four azimuths respectively, by combining the position reference information with four azimuth viewing angle calculation formulas.
4. The four-fold screen field based projection method of claim 1, wherein the generating, from the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises:
when the length of the viewing-angle area corresponding to the azimuth viewing angle of an azimuth is greater than the length of the projection surface corresponding to that azimuth, calculating the cropping area corresponding to the azimuth viewing angle, and cropping the corresponding virtual picture from the virtual scene according to the cropping area and its azimuth viewing angle;
and when the length of the viewing-angle area corresponding to the azimuth viewing angle of an azimuth is not greater than the length of the projection surface corresponding to that azimuth, cropping the virtual picture corresponding to the azimuth viewing angle directly from the virtual scene.
5. The four-fold screen field based projection method of claim 4, wherein the calculating the cropping area corresponding to the azimuth viewing angle specifically comprises:
calculating a viewing-angle picture parameter for each azimuth according to the position reference information and the azimuth viewing angle of that azimuth;
and calculating the cropping area of each azimuth according to its viewing-angle picture parameter and the viewing-space parameter.
6. The four-fold screen field based projection method of claim 1, wherein the generating, from the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises:
when the X coordinate information in the position reference information is on the X-axis center line and the Y coordinate information in the position reference information is not on the Y-axis center line, cropping the corresponding virtual picture from the virtual scene according to the position reference information and the azimuth viewing angle corresponding to the X axis;
and calculating the cropping areas for the azimuth viewing angles corresponding to the coordinate information on the remaining axes, and cropping the corresponding virtual pictures from the virtual scene according to each cropping area and its azimuth viewing angle.
7. The four-fold screen field based projection method of claim 1, wherein the generating, from the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises:
when the X coordinate information in the position reference information is not on the X-axis center line and the Y coordinate information in the position reference information is on the Y-axis center line, cropping the corresponding virtual picture from the virtual scene according to the position reference information and the azimuth viewing angle corresponding to the Y axis;
and calculating the cropping areas for the azimuth viewing angles corresponding to the coordinate information on the remaining axes, and cropping the corresponding virtual pictures from the virtual scene according to each cropping area and its azimuth viewing angle.
8. The four-fold screen field based projection method of claim 1, wherein the generating, from the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises:
when the X coordinate information is not on the X-axis center line and the Y coordinate information is not on the Y-axis center line, respectively calculating the cropping areas corresponding to all the azimuth viewing angles;
and cropping the corresponding virtual pictures from the virtual scene according to each azimuth viewing angle and its cropping area.
9. A four-fold screen field, characterized by comprising: four projection surfaces, each of which is connected with at least two of the other three projection surfaces; and the projection size of at least one projection surface is variable.
10. The four-fold screen field of claim 9, wherein:
the projection surface with the variable projection size is a movable wall surface, and its projection size is changed by moving it;
or the projection surface with the variable projection size is a foldable and retractable movable wall surface, and its projection size is changed by folding and retracting it;
or the projection surface with the variable projection size consists of a plurality of pull-down screens, and its projection size is changed by unfolding and retracting them.
11. A projection system based on a four-fold screen field, characterized by comprising: a smart device, a projection device, and the four-fold screen field;
the four-fold screen field comprises four projection surfaces, each of which is connected with at least two of the other three projection surfaces; the projection size of at least one projection surface is variable;
the projection system based on the four-fold screen field further comprises:
a position acquisition module, configured to acquire position reference information corresponding to the viewing position;
the smart device includes:
a scene acquisition module, configured to acquire a virtual scene corresponding to the four-fold screen field at its current projection size;
a viewing-angle calculation module, configured to calculate the azimuth viewing angles of the viewing position in four azimuths by combining the position reference information;
a picture generation module, configured to generate, from the virtual scene, a virtual picture corresponding to each azimuth viewing angle;
a picture fusion module, configured to fuse the virtual pictures of the four azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position;
the projection equipment projects the scene picture to the four-fold screen field, and the inner side of the four-fold screen field forms the watching position.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811186078.1A CN111050156A (en) | 2018-10-11 | 2018-10-11 | Projection method and system based on four-fold screen field and four-fold screen field |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811186078.1A CN111050156A (en) | 2018-10-11 | 2018-10-11 | Projection method and system based on four-fold screen field and four-fold screen field |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111050156A true CN111050156A (en) | 2020-04-21 |
Family
ID=70229231
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811186078.1A Pending CN111050156A (en) | 2018-10-11 | 2018-10-11 | Projection method and system based on four-fold screen field and four-fold screen field |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111050156A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115022615A (en) * | 2022-05-10 | 2022-09-06 | 南京青臣创意数字科技有限公司 | Projection-based virtual perception system and method |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN2898873Y (en) * | 2005-12-16 | 2007-05-09 | 伍炳康 | Amplitude-variable projection screen |
| CN107168534A (en) * | 2017-05-12 | 2017-09-15 | 杭州隅千象科技有限公司 | It is a kind of that optimization method and projecting method are rendered based on CAVE systems |
| CN107193372A (en) * | 2017-05-15 | 2017-09-22 | 杭州隅千象科技有限公司 | From multiple optional position rectangle planes to the projecting method of variable projection centre |
| CN107239143A (en) * | 2017-06-06 | 2017-10-10 | 北京德火新媒体技术有限公司 | A kind of CAVE using small spacing LED screen shows system and method |
| CN107678722A (en) * | 2017-10-11 | 2018-02-09 | 广州凡拓数字创意科技股份有限公司 | Multi-screen splices method, apparatus and multi-projection system giant-screen |
- 2018-10-11: CN CN201811186078.1A patent/CN111050156A/en active Pending
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN2898873Y (en) * | 2005-12-16 | 2007-05-09 | 伍炳康 | Amplitude-variable projection screen |
| CN107168534A (en) * | 2017-05-12 | 2017-09-15 | 杭州隅千象科技有限公司 | It is a kind of that optimization method and projecting method are rendered based on CAVE systems |
| CN107193372A (en) * | 2017-05-15 | 2017-09-22 | 杭州隅千象科技有限公司 | From multiple optional position rectangle planes to the projecting method of variable projection centre |
| CN107239143A (en) * | 2017-06-06 | 2017-10-10 | 北京德火新媒体技术有限公司 | A kind of CAVE using small spacing LED screen shows system and method |
| CN107678722A (en) * | 2017-10-11 | 2018-02-09 | 广州凡拓数字创意科技股份有限公司 | Multi-screen splices method, apparatus and multi-projection system giant-screen |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115022615A (en) * | 2022-05-10 | 2022-09-06 | 南京青臣创意数字科技有限公司 | Projection-based virtual perception system and method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN101843107B (en) | OSMU (One Source Multiple Use) Stereo Camera and Method for Making Stereo Video Content | |
| US9983546B2 (en) | Display apparatus and visual displaying method for simulating a holographic 3D scene | |
| US7983477B2 (en) | Method and apparatus for generating a stereoscopic image | |
| CN102572486B (en) | Acquisition system and method for stereoscopic video | |
| CN107705241A (en) | A kind of sand table construction method based on tile terrain modeling and projection correction | |
| AU2018249563B2 (en) | System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display | |
| EP2408191A1 (en) | A staging system and a method for providing television viewers with a moving perspective effect | |
| CN111045286A (en) | Projection method and system based on double-folding screen field and double-folding screen field | |
| JP2007318754A (en) | Virtual environment experience display device | |
| US20190121217A1 (en) | Information processing device, information processing method, and program | |
| CN111050148A (en) | Three-folding-screen-site-based projection method and system and three-folding-screen site | |
| CN111179407A (en) | Virtual scene creating method, virtual scene projecting system and intelligent equipment | |
| CN111050156A (en) | Projection method and system based on four-fold screen field and four-fold screen field | |
| CN111050144A (en) | Projection method and system based on six-fold screen field and six-fold screen field | |
| CN111050147A (en) | Projection method and system based on five-fold screen field and five-fold screen field | |
| CN110060349B (en) | Method for expanding field angle of augmented reality head-mounted display equipment | |
| Andersen et al. | A hand-held, self-contained simulated transparent display | |
| CN111050145B (en) | Multi-screen fusion imaging method, intelligent device and system | |
| CN113345113A (en) | Content presentation method based on CAVE system | |
| JP2007323093A (en) | Display device for virtual environment experience | |
| WO2012035927A1 (en) | Remote video monitoring system | |
| CN108734791B (en) | Panoramic video processing method and device | |
| CN111131726B (en) | Video playing method, intelligent device and system based on multi-screen fusion imaging | |
| CN111050146B (en) | Single-screen imaging method, intelligent equipment and system | |
| Ikeda et al. | Panoramic movie generation using an omnidirectional multi-camera system for telepresence |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200421 |