HK1177274A - Viewer-centric user interface for stereoscopic cinema - Google Patents
Description
Background
Stereoscopic movies, also known as 3D movies, have once again become popular, providing viewers with a viewing experience that traditional movies cannot provide. The viewing experience that viewers obtain from stereoscopic movies results from a combination of factors, including camera parameters, viewing position, projector/screen configuration, and other (e.g., psychological) factors. Depending on these factors, the viewing experience of a given viewer may be pleasant, distracting, or even uncomfortable (e.g., due to eye strain created by certain stereoscopic effects).
Stereoscopic film producers and photographers have learned various heuristics for avoiding, or deliberately enhancing, known stereoscopic effects such as the "cardboarding", "pinching", "gigantism", and "miniaturization" effects. However, until a scene is captured, the director is effectively unable to visualize how the scene will be presented to the viewer. As such, obtaining desired results and/or effects when capturing a 3D scene typically requires a significant amount of planning, shooting, re-planning, re-shooting, and so forth.
Disclosure of Invention
This summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein relate to a technique by which a user interface displays a representation of a stereoscopic scene and includes an interactive mechanism for changing parameter values that determine the perceived appearance of the scene. In one implementation, the interactive mechanism includes points/handles that are interactively movable relative to the scene, and whose positions correspond to the values of the parameters.
In one aspect, the scene is modeled as if viewed from above, including a representation of the eyes of the viewer, a representation of the screen, and an indication simulating what the viewer perceives on the screen. The interactive mechanism may be positioned within this "overhead view" of the scene.
The user may use the user interface to plan shots, for example, by starting from a test shot and determining the effect of changes by manipulating the parameter values. Once decided upon, the parameter values may be applied to a stereo camera to capture the actual scene.
The user may edit an existing scene using the user interface. The parameter values can be interactively changed and the resulting video scene with the modified parameters can be previewed to determine the effect of the parameter change.
Other advantages may become apparent from the following detailed description of the invention, taken in conjunction with the accompanying drawings.
Drawings
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
FIG. 1 is a block diagram representing example components for implementing a viewer-centric user interface for editing and/or planning a stereoscopic cinema scene.
Fig. 2 is a flow chart representing example steps for planning a stereoscopic cinema scene through a user interface, including interaction with parameters.
Fig. 3 is a flow chart representing example steps for editing a stereoscopic cinema scene through a user interface, including interaction with parameters.
FIG. 4 is a representation of one implementation of a user interface, including a perspective view from above (top view), showing how a scene is perceived from a viewer-centric viewpoint.
FIG. 5 is a representation showing how parameters that are coupled together are visually indicated on a user interface.
FIG. 6 is a representation of one implementation of a user interface including a panel for controlling how an object is perceived when transitioning between shots.
FIG. 7 shows an illustrative example of a computing environment into which various aspects of the present invention may be integrated.
Detailed Description
Various aspects of the technology described herein relate generally to a user interface for editing and/or planning a stereoscopic (3D) movie. In one aspect, the user interface allows an editor to adjust a set of parameters that affect the movie viewing experience. Further, the user interface is viewer-centric in that it models the perceived 3D experience from the perspective of a movie viewer. More specifically, the user interface shows the world as perceived from above (a top-down perspective), which enables a movie editor/planner to see the perceived 3D depth of objects in the movie relative to the movie viewer and the screen.
It should be understood that any of the examples herein are non-limiting. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, examples, structures, or functionalities described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing and video technology in general.
Fig. 1 shows a block diagram that includes components for capturing and interacting with a stereoscopic video 102 (e.g., a set of one or more video scenes). Left and right stereo cameras 104 and 105 photograph a scene. A user interface 106 running on a computer system 108 is used to plan and/or edit parameters associated with the stereoscopic video 102, via a planner 110 and/or editor 112, respectively. As such, the user interface facilitates both shot planning and "after-the-fact" digital manipulation of the perceived scene. Note that the shot-planning and after-the-fact manipulation techniques are not limited to video, but are also applicable to stereoscopic film recording; for example, contemporary film cameras have "video assist" feeds that can be used to feed the present system, and the output parameters can be applied to the film camera (and thus to the video assist output from the camera, allowing the feedback loop with the software to continue).
Parameters include vergence or horizontal image translation (corresponding to how much the cameras are rotated relative to each other), interocular distance (the distance between the cameras), dolly (how far the camera is from the subject), the camera field of view, and the proscenium arch (associated with compensating for situations in which one of the viewer's eyes can see something that is not visible to the other eye because it is cut off at the edge of the screen). Manipulation of each of these parameters via the user interface 106 is described below.
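For illustration only, the edited parameter set could be grouped as in the following sketch; the class, field names, units, and default values are assumptions for this example, not the patent's data model.

```python
from dataclasses import dataclass

@dataclass
class StereoShotParameters:
    """Illustrative container for the parameters edited via the user interface.

    All names, units, and defaults are assumptions for this sketch.
    """
    vergence_px: float = 0.0         # horizontal image shift, in pixels
    interocular: float = 0.065       # camera baseline B_c, in meters
    dolly: float = 0.0               # camera-to-subject distance change Z_S
    field_of_view_deg: float = 40.0  # camera field of view theta_c
    proscenium_px: float = 0.0       # width of the masked strip at the frame edge
```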
The result of the planning and/or editing is a modified video 114 with modified parameters 116. When editing, the modified video 114 is retained if the results are as desired. When planning, the modified parameters 116 are used to re-shoot the corresponding scene (e.g., with reconfigured camera calibration parameters determined during planning).
Note that the planning requires knowledge of the initial camera parameters, as represented by block 118 in fig. 1. However, as can be readily appreciated from the mathematical framework described below, editing does not require the absolute values of these parameters, as relative adjustments can be made to the parameters.
Another type of data that may be entered corresponds to theater parameters 120, such as changing the size/position of the screen relative to the viewer. If the theater parameters 120 are changed, other parameters may be automatically adjusted to provide new viewing parameters.
With respect to planning, shooting 3D movies is difficult because it is a challenge to imagine how the audience's experience will differ from the director's vision. The user interface 106 addresses this problem by providing a way to plan a shot given a rough take and/or still images of the scene. By adjusting the parameters through the point cloud in the top-down view of the interface (described below with reference to fig. 4), the desired camera parameters may be output, for example, as ratios of the parameters used to capture the rough take.
Fig. 2 shows the steps of a typical planning operation. Step 202 represents shooting a test scene while recording the camera calibration parameters, which are a subset of the parameters described above, including the camera interocular distance, camera field of view/focal length, and vergence angle. The planner determines the stereo depth (step 204) and manipulates the parameter values through the user interface (step 206) as described below. (Note that the stereo depth is pre-computed using a separate process, e.g., using the techniques described in "High-quality video view interpolation using a layered representation" by C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, SIGGRAPH 2004.) The parameter values are maintained in an appropriate manner (step 208) and used to calibrate the cameras to re-shoot the test scene (step 214), until the parameter values provide the desired scene and it is no longer necessary to re-shoot that scene (or another test scene) (step 212). Depending on what is being shot, the last test scene may be used as the actual scene, or the actual scene may be shot (e.g., with the actual actors) using the now-known desired parameter values.
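As a rough illustration, the planning loop could be sketched as follows; every callable here is a hypothetical stand-in for the corresponding step, not the patent's actual API.

```python
def plan_shot(shoot_test, compute_depth, edit_parameters, acceptable):
    """Illustrative planning loop mirroring steps 202-214 of FIG. 2.

    shoot_test, compute_depth, edit_parameters, and acceptable are hypothetical
    callables supplied by the caller of this sketch.
    """
    params = None
    while True:
        frames, calibration = shoot_test(params)               # steps 202 / 214
        depth = compute_depth(frames)                           # step 204 (precomputed stereo depth)
        params = edit_parameters(frames, depth, calibration)    # steps 206-208
        if acceptable(frames, params):                           # step 212
            return params  # shoot the actual scene with these values
```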
Fig. 3 shows the steps of an editing operation. Steps 302, 304 and 306 are similar to steps 202, 204 and 206 of fig. 2 and are not described again, except to note that an actual scene is being edited rather than a test scene and that, as described below, no initial camera parameters are required.
Step 308 allows the editor to re-render the scene with the new parameter values whenever needed, either after a single parameter value change or after multiple parameter value changes. Step 310 repeats the manipulation/re-rendering until the desired result is obtained by the editor.
To facilitate the parameter manipulation described below, the user interface is built on a mathematical framework that converts user interactions into values of the stereoscopic parameters. More specifically, the framework abstracts the camera, projector/screen, and viewer geometry into ratios, allowing simple manipulation by the user.
The framework assumes that certain parameters associated with the viewer's experience are known or configured by the user, including the screen width S_w, the viewer-to-screen distance S_z, and the distance B_e between the viewer's eyes. In one implementation, these parameters share the same units, with the origin of the world coordinate space centered between the eyes of the viewer. Thus, the positions of the left and right eyes are {-B_e/2, 0, 0} and {B_e/2, 0, 0}.
The left and right image widths (in pixels) are represented by W; the ratio S_r = S_w / W maps a pixel location to a physical screen location.
Suppose the pair of corresponding points across the left and right images is p_L = (c_L, r_L) and p_R = (c_R, r_R), respectively. Since both images are rectified, r_L = r_R. After the two images are projected onto the screen, the corresponding screen positions are p_LS = (c_LS, r_LS) and p_RS = (c_RS, r_RS). Note that p_LS and p_RS are specified in units of pixels.
When the images are placed on the screen, two approaches may be taken: a vergence configuration or a parallel configuration. Small screens typically use the vergence configuration, in which the center of each image is placed at the center of the screen. Larger screens typically use the parallel configuration, in which the image centers are offset by the assumed inter-eye distance. The following equations are the same for both, except where indicated. The image disparity is given by d = c_R - c_L. The screen disparity is d_S = c_RS - c_LS, which equals d for the vergence configuration, or d_S = d + B_e/S_r for the parallel configuration. In both cases, the perceived depth Z_e is:

Z_e = B_e S_z / (B_e - S_r d_S)    (1)
the perceived X coordinate from the perspective of the viewer, X, is calculated as followse:
The perceived Y coordinate is calculated in a similar manner. Note that for simplicity below, the formula for the Y coordinate is not presented, as it is similar to the X coordinate.
The above formulas extend directly to forward-backward movement of the viewer, since such movement simply implies a new value of S_z. Horizontal (lateral) movement of the viewer does not change the perceived depth Z_e, because the motion is parallel to the screen. However, it does change the X coordinate X_e, resulting in a shear-like distortion of the scene shape. If K_x is the horizontal movement of the viewer, the correction term K_x(1 - Z_e/S_z) is added to X_e in equation (2).
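The following sketch gathers these relations into code; the function name, argument names, and the exact algebraic forms are reconstructions consistent with the definitions above rather than text taken from the patent.

```python
def perceived_point(c_ls, d_s, s_w, s_z, b_e, img_width_px, k_x=0.0):
    """Compute the perceived horizontal position X_e and depth Z_e of a point.

    c_ls: left-image screen column (pixels); d_s: screen disparity (pixels);
    s_w: screen width; s_z: viewer-to-screen distance; b_e: eye separation
    (all three in the same physical units); img_width_px: image width W in
    pixels; k_x: lateral viewer offset. Follows equations (1) and (2) and the
    lateral-movement correction described above (reconstructed forms).
    """
    s_r = s_w / img_width_px                 # pixel-to-physical scale S_r
    p = s_r * d_s                            # physical screen parallax
    z_e = b_e * s_z / (b_e - p)              # perceived depth, eq. (1)
    x_ls = s_r * c_ls - s_w / 2.0            # physical x of the left screen point
    x_e = (z_e / s_z) * (x_ls + b_e / 2.0) - b_e / 2.0   # perceived x, eq. (2)
    x_e += k_x * (1.0 - z_e / s_z)           # lateral viewer-movement correction
    return x_e, z_e
```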
The user interface allows the user to change parameter values, including the camera field of view θ_c, the camera interocular distance B_c, the camera vergence V_c, and the camera dolly Z_S, in order to change the viewer's perception of the scene. Assuming that the scene is far enough away from the cameras, changing the vergence amounts to globally shifting the images along the X-axis (which changes the image disparity); this approximation is only exact if the cameras are in a parallel configuration. For simplicity, the vergence V_c can be described as a horizontal pixel shift: for example, given V_c, the left image may be shifted to the left by V_c/2 pixels and the right image shifted to the right by V_c/2 pixels.
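A minimal sketch of that shift, assuming the left and right frames are NumPy arrays (the function and its conventions are illustrative, not the patent's implementation):

```python
import numpy as np

def apply_vergence(left, right, v_c):
    """Shift the left image left by v_c/2 pixels and the right image right by v_c/2.

    left, right: H x W (x C) image arrays; v_c: vergence change in pixels
    (an even integer is assumed for this sketch). Equivalent to adding v_c to
    the disparity of every pixel; newly exposed columns are filled with black.
    """
    half = int(v_c) // 2
    left_shifted = np.roll(left, -half, axis=1)
    right_shifted = np.roll(right, half, axis=1)
    if half > 0:
        left_shifted[:, -half:] = 0    # zero the wrapped-around columns
        right_shifted[:, :half] = 0
    elif half < 0:
        left_shifted[:, :-half] = 0
        right_shifted[:, half:] = 0
    return left_shifted, right_shifted
```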
Changes to the field of view and vergence values correspond to zooming and shifting the images, respectively. However, manipulating the interocular distance or the dolly requires the scene to be re-rendered, because changing the interocular distance or dollying the camera shifts the camera positions, which must account for scene parallax.
To calculate new pixel positions from the user-specified/edited parameter values, i.e., the field of view θ_c, the vergence V_c, the interocular distance B_c, and the dolly Z_S, the changes to these values may be applied in the order in which a photographer would perform the same changes at capture time: dolly Z_S, interocular distance B_c, field of view θ_c, and then vergence V_c.
While V_c is manipulated as-is, the three other parameters are manipulated as ratios of the original camera parameters θ_c0, B_c0 and Z_s0:

θ_c = α_θ θ_c0,   B_c = α_B B_c0,   and   Z_s = α_Z Z_s0.    (3)
By definition, V_c0 = 0. In equation (3), α_θ zooms the image about its center, α_B is the relative change in the camera baseline, and α_Z is the dolly move "normalized" by the unit distance Z_s0. Z_s0 is calculated as a function of the viewer-to-screen depth as re-projected into camera space.
providing these quantities as ratios is useful in situations where camera parameters are difficult to quantify or unknown. In fact, if only post-production effects are desired, no camera parameters are needed. However, to plan a shot, the original camera parameters need to be known. By directly manipulating the stereoscopic effect, the user indirectly changes the camera parameter values that result in such an effect. For example, the frame is at an interocular distance ratio α to the cameraBThe scene is scaled in an inversely proportional manner. This achieves a dramatic and miniaturisation effect by changing the scene shape (equivalent to changing the camera baseline).
The framework uses equations (1) and (2), with the original screen column position c_LS and the screen disparity d_S of pixel p_LS, to calculate the original X_e and Z_e coordinates before any manipulation. Applying the changes in camera interocular distance and dolly then yields a new set of perceived 3D coordinates.
Next, the transformed points are projected back onto the movie screen to find a new set of screen coordinates and a new screen disparity.
The new row coordinate can be calculated similarly, after which the framework applies the field of view and vergence changes to solve for the final screen coordinates (c'_LS, r'_LS) and warped screen disparity d'_S.
Equation (7) assumes a vergence configuration. If a parallel configuration is used, the images are additionally shifted along the X-axis before and after the scaling (a shift of B_e/(2 S_r)).
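To make the re-projection step concrete, the following sketch projects a perceived 3D point back onto the screen. It inverts the perceived_point sketch above under the same assumed conventions; it is not a transcription of the patent's equations (5)-(7).

```python
def project_to_screen(x_e, z_e, s_w, s_z, b_e, img_width_px):
    """Project a perceived point (x_e, z_e) back to left/right screen columns.

    Returns (c_ls, c_rs, d_s): the left and right screen columns and the
    screen disparity, all in pixels, under the same assumed geometry as the
    perceived_point sketch.
    """
    s_r = s_w / img_width_px
    # Intersect the eye-to-point rays with the screen plane z = s_z.
    x_ls = -b_e / 2.0 + (s_z / z_e) * (x_e + b_e / 2.0)
    x_rs = b_e / 2.0 + (s_z / z_e) * (x_e - b_e / 2.0)
    c_ls = (x_ls + s_w / 2.0) / s_r
    c_rs = (x_rs + s_w / 2.0) / s_r
    return c_ls, c_rs, c_rs - c_ls
```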
Turning to the user interface aspects, as represented in FIG. 4, the user can directly manipulate the shape of the world as perceived by the viewer. This is enabled by the top-down "overhead view" of the point cloud of the perceived scene associated with scan line 461, shown in the area (panel) labeled 440. Note that the irregular lines and curves in this panel correspond to the fronts of objects located in the scene. Given the edited parameters, the image disparities are automatically regenerated using known algorithms and a new set of stereo images is rendered.
In one implementation, a box widget (or simply box 442) is provided as part of the interface to allow a user to easily manipulate the shape of the perceived world. More specifically, the box 442 (or other suitable two-dimensional shape) is overlaid on the perceived scene points. The user manipulates portions of the box 442 to effect particular changes. To this end, points/handles corresponding to the parameters are provided (symbols help the user know which point controls which parameter; in one actual implementation, the points have different colors to help the user identify which parameter is being manipulated).
With respect to point manipulation, FIG. 4 shows a mouse pointer manipulating the point/handle 444 to change the vergence parameter value. The user can change the perceived scene shape (and subsequently re-render a new stereoscopic image) by manipulating the box in various ways using the points/handles. Note that, in general, re-rendering is deferred until a later time.
The shape of the box is meaningful because it summarizes the stereoscopic effects present in the rendered images. For example, when the box happens to be a square, it represents zero distortion for the viewer. As another example, cardboarding or pinching corresponds to flattening or elongation of the box, respectively. Note that fig. 6 shows a non-rectangular box.
One way to manipulate the box (change the parameter values) is to add or enhance the cardboarding and pinching effects by changing the field of view parameter. The user may change the field of view by moving (e.g., dragging) a point/handle 446 on the side of the box; this also changes the original camera focal length. The distortion of the box mirrors, for example, the pinching effect that occurs with a wider field of view.
Dragging the vergence point/handle 444 translates the images left or right. As described above, the portion of the scene with zero disparity appears to be located at the depth of the screen, so changing the vergence changes which parts of the scene appear to lie at the screen. The user changes the vergence by moving the point 444 at the top of the box 442 up or down. This causes the left and right stereo frames to shift along the X-axis. Note that this action distorts the 3D scene shape non-uniformly.
Dragging the dolly point/handle 448 translates the scene forward or backward. The user dollies the camera (i.e., changes the perceived camera-scene distance) by dragging the point 448 at the center of the box. As the scene gets closer to the viewer, the virtual camera moves closer to the scene. Dollying does not cause distortion because it accounts for parallax effects (which are depth dependent). The degree to which the user can dolly the camera depends on the quality of the stereo data; although only small movements may be possible, they can lead to a large change in the stereoscopic experience.
By dragging the point/handle 450 at a corner of the box, the user changes the interocular distance parameter value, which makes the scene appear larger or smaller. This effect changes the camera baseline and provides the known miniaturization and gigantism effects.
The user interface shown in fig. 4 also includes two other areas or panels. One panel is a video player panel 460 that plays the scene as currently parameterized; the point cloud (top view in 440) is the depth distribution associated with the pixels along line 461. Another panel 462 shows the current values of some or all of the parameters. In one implementation, the user may directly enter at least some of these values, for example to provide precise values that the user cannot easily obtain by dragging a point/handle.
In addition to adjusting parameters individually, two or more parameters may be coupled such that changing the value of one parameter changes the values of the other coupled parameters. A checkbox or the like may be provided to allow the user to "lock" different camera parameters together to create new stereoscopic effects. One example of using coupling is the stereoscopic equivalent of the well-known "Hitchcock zoom" (dolly-zoom) effect, named after the well-known movie director, in which adjusting the camera focal length keeps a foreground object the same size while dollying the camera closer to or farther from the object, changing the size of the background. This effect is achieved in the user interface by coupling the dolly, field of view, and vergence parameters together. Note that if a parallel configuration is used, the effect can be achieved by coupling only the dolly and field of view parameters. The coupling may be visually indicated to the user, as represented by the dashed lines in FIG. 5, which show which parameters are coupled together via their handles; in this example, the three handles 444, 446 and 448 are coupled together by line segments forming a triangle 550 (if only two are coupled, a single line segment is shown). Further note that although dashed lines are shown, in one implementation colored/solid lines are used.
In general, one of the key points of the Hitchcock effect is to keep an object stable while the background moves behind it. In one implementation this effect is extended to keep the depth of the object constant while the depth of the background varies; as such, the manipulation has an associated object depth. Since depth is depicted in the user interface as a point cloud, one way in which the user can invoke the Hitchcock effect is to click on a particular depth (the object depth) in the point cloud and drag up or down. The object depth is kept constant while the depth of the rest of the scene is changed.
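Under the assumption that the coupled dolly and field of view follow the classic dolly-zoom relation (the frame width at the object's distance, 2·z·tan(θ/2), stays constant), the field of view can be recomputed from the dolly move as in the sketch below; this is a generic illustration, not the patent's exact coupling rule.

```python
import math

def dolly_zoom_fov(theta_deg, z_obj_old, z_obj_new):
    """Field of view that keeps an object the same apparent size after a dolly.

    theta_deg: original field of view (degrees); z_obj_old, z_obj_new:
    camera-to-object distances before and after the dolly move. Holds
    2 * z * tan(theta / 2) constant, the usual dolly-zoom ("Vertigo") relation.
    """
    half = math.radians(theta_deg) / 2.0
    return 2.0 * math.degrees(math.atan(math.tan(half) * z_obj_old / z_obj_new))
```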
Turning to another aspect, the proscenium arch may be moved using the user interface. More specifically, in many stereoscopic shots with objects appearing in front of the screen, there is often a region at the edge of the screen that can be seen by only one eye. For example, taking the perspective corresponding to the viewer's eyes in fig. 4, anything between the two oblique lines labeled 458 and 459, such as the object labeled 472, can be seen by the viewer's left eye but not the right eye.
Such regions do not coincide with the edges of the scene and may cause eye fatigue for some viewers. The proscenium arch parameter is used to mask a portion of the stereoscopic frame, effectively moving the perceived edges of the screen closer to the viewer. The proscenium arch is adjusted by moving its depth markers (the black dots 470 and 471) along the lines of sight. When the proscenium arch is properly positioned, it is easier for the viewer's eyes/brain to fuse objects close to the image edges.
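A minimal sketch of the masking that a proscenium arch adjustment implies, assuming NumPy frame arrays and a symmetric mask width; which edge is masked for which eye follows the single-eye-visibility argument above, and the patent does not prescribe this implementation.

```python
import numpy as np

def apply_proscenium_arch(left, right, mask_px):
    """Black out the strips near the frame edges that only one eye can see.

    left, right: H x W (x C) frame arrays; mask_px: width of the masked strip
    in pixels. Masking the left edge of the left frame and the right edge of
    the right frame moves the perceived screen edges toward the viewer, making
    objects near the image border easier to fuse.
    """
    left = left.copy()
    right = right.copy()
    if mask_px > 0:
        left[:, :mask_px] = 0
        right[:, -mask_px:] = 0
    return left, right
```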
Turning to the creation of post-production effects, a movie editor needs to cut between shots in order to tell a story, for example by switching between contrasting scenes. Recent trends include the use of very fast, multiple cuts; however, for stereoscopic content, such fast cutting can cause severe visual discomfort because of the lag in the viewer's ability to fuse scenes at different depths.
One way to mitigate this visual discomfort is to blend the vergence before and after a cut so that the objects of attention have the same depth at the cut. For example, consider an object of attention in a scene that appears behind the screen. If the scene suddenly cuts to one in which the object of attention is now in front of the screen, the viewer perceives an unadjusted depth "jump". However, subtly shifting the vergence before and after the cut can prevent the depth jump and result in a more visually pleasing transition; such subtle vergence changes can generally be performed without being noticed by the viewer.
FIG. 6 shows an interface panel 660 that an editor may use to control the vergence transition between two shots (shot A and shot B). The dashed line indicates that, without a vergence transition, the depth of the object of attention in shot A jumps to the depth of the object of attention in shot B as an abrupt change the viewer must absorb. Using the user interface, the editor/planner can ramp the depth from one object in the previous clip to another object in the next clip; the transition may follow any of a variety of blending functions (e.g., linear, cubic spline, etc.). A mechanism such as slider 662 or another timing control may be used to control when the transition begins and ends. As such, the editor/planner may blend the vergence before and after the cut so that the two objects have the same disparity at the cut. This results in a more visually pleasing transition.
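As an illustrative sketch of such a vergence transition (the function, its arguments, and the choice of meeting the two disparities halfway at the cut are assumptions; an editor could equally ramp fully to shot B's depth):

```python
def blended_vergence(frame, cut_frame, blend_frames, d_a, d_b):
    """Vergence shift (pixels) to apply near a cut so depths match at the cut.

    d_a, d_b: screen disparities of the attended object in shot A (before the
    cut) and shot B (after the cut). The shift ramps linearly over blend_frames
    frames on each side of the cut; other blending functions (e.g., cubic
    splines) could be substituted.
    """
    total = d_b - d_a
    u = (frame - cut_frame) / float(blend_frames)
    u = min(max(u, -1.0), 1.0)          # clamp to the blending window
    if frame < cut_frame:
        return 0.5 * total * (1.0 + u)  # shot A: ramps from 0 to +total/2 at the cut
    return 0.5 * total * (u - 1.0)      # shot B: starts at -total/2, relaxes to 0
```

The returned offset could then be applied frame by frame with a horizontal image shift such as the apply_vergence sketch given earlier.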
Vergence may also be used to draw attention to an object. Note that vergence changes typically are not noticed by the viewer, and complete image fusion occurs only after a short time lag (which differs from person to person). If the scene is cut back and forth faster than this time lag, objects whose disparity is similar to that of the currently fused region are fused first. Thus, the viewer's attention may be drawn by using vergence changes to adjust which areas have similar disparity.
Exemplary Operating Environment
FIG. 7 illustrates an example of a suitable computing and networking environment 700 on which examples and implementations of any of FIGS. 1-6 may be implemented. The computing system environment 700 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 700.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to FIG. 7, an exemplary system for implementing various aspects of the invention may include a general purpose computing device in the form of a computer 710. The components of computer 710 may include, but are not limited to: a processing unit 720, a system memory 730, and a system bus 721 that couples various system components including the system memory to the processing unit 720. The system bus 721 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
Computer 710 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 710 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 710. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
The system memory 730 includes computer storage media in the form of volatile and/or nonvolatile memory such as Read Only Memory (ROM) 731 and Random Access Memory (RAM) 732. A basic input/output system 733 (BIOS), containing the basic routines that help to transfer information between elements within computer 710, such as during start-up, is typically stored in ROM 731. RAM 732 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 720. By way of example, and not limitation, FIG. 7 illustrates operating system 734, application programs 735, other program modules 736, and program data 737.
The computer 710 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 7 illustrates a hard disk drive 741 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 751 that reads from or writes to a removable, nonvolatile magnetic disk 752, and an optical disk drive 755 that reads from or writes to a removable, nonvolatile optical disk 756 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 741 is typically connected to the system bus 721 through a non-removable memory interface such as interface 740, and magnetic disk drive 751 and optical disk drive 755 are typically connected to the system bus 721 by a removable memory interface, such as interface 750.
The drives and their associated computer storage media discussed above and illustrated in FIG. 7, provide storage of computer readable instructions, data structures, program modules and other data for the computer 710. In FIG. 7, for example, hard disk drive 741 is illustrated as storing operating system 744, application programs 745, other program modules 746, and program data 747. Note that these components can either be the same as or different from operating system 734, application programs 735, other program modules 736, and program data 737. Operating system 744, application programs 745, other program modules 746, and program data 747 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 710 through input devices such as a tablet or electronic digitizer 764, a microphone 763, a keyboard 762 and pointing device 761, commonly referred to as a mouse, trackball or touch pad. Other input devices not shown in FIG. 7 may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 720 through a user input interface 760 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a Universal Serial Bus (USB). A monitor 791 or other type of display device is also connected to the system bus 721 via an interface, such as a video interface 790. The monitor 791 may also be integrated with a touch-screen panel or the like. In addition, a monitor 791 may also be provided to allow stereoscopic images to be displayed through the use of polarized glasses or other mechanisms. Note that the monitor and/or touch screen panel can be physically coupled to a housing in which the computing device 710 is incorporated, such as in a tablet-type personal computer. In addition, computers such as the computing device 710 may also include other peripheral output devices such as speakers 795 and printer 796, which may be connected through an output peripheral interface 794 or the like.
The computer 710 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 780. The remote computer 780 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 710, although only a memory storage device 781 has been illustrated in FIG. 7. The logical connections depicted in fig. 7 include one or more Local Area Networks (LAN)771 and one or more Wide Area Networks (WAN)773, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 710 is connected to the LAN 771 through a network interface or adapter 770. When used in a WAN networking environment, the computer 710 typically includes a modem 772 or other means for establishing communications over the WAN 773, such as the Internet. The modem 772, which may be internal or external, may be connected to the system bus 721 via the user input interface 760, or other appropriate mechanism. A wireless networking component 774 such as comprising an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN. In a networked environment, program modules depicted relative to the computer 710, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 7 illustrates remote application programs 785 as residing on memory device 781. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
An auxiliary subsystem 799 (e.g., for auxiliary display of content) may be connected via the user interface 760, allowing data such as program content, system status, and event notifications to be provided to a user even though the main portion of the computer system is in a low power state. The auxiliary subsystem 799 may be connected to the modem 772 and/or network interface 770 to allow communication between these systems while the main processing unit 720 is in a low power state.
Conclusion
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
Claims (15)
1. In a computing environment, a system comprising: a user interface (106) configured to manipulate one or more parameter values associated with a stereoscopic video scene, the user interface including an overhead view representation (440) of at least a portion of the scene showing a viewpoint of a viewer relative to a screen location, the representation being interactive (444, 446, 448, 450, 470, 471) with respect to obtaining input for manipulating the one or more parameter values.
2. The system of claim 1, wherein one of the parameter values corresponds to a vergence parameter, one of the parameter values corresponds to an interocular distance parameter, one of the parameter values corresponds to a dolly parameter, one of the parameter values corresponds to a field of view parameter, or one of the parameter values corresponds to a proscenium arch parameter.
3. The system of claim 1, wherein at least two of the parameters are coupled together such that a change to one parameter value changes the parameter value of each other parameter coupled to the parameter corresponding to the changed value.
4. The system of claim 3, wherein the dolly parameter and the field of view parameter are coupled, or the vergence parameter, dolly parameter, and field of view parameter are coupled.
5. The system of claim 1, wherein the representation is interactive with respect to receiving input via a movable handle, wherein movement of the handle changes a corresponding parameter value.
6. The system of claim 5, wherein the representation includes a box whose shape changes as the movable handle is moved to change the corresponding parameter value.
7. The system of claim 1, wherein the user interface comprises an interactive panel showing a value of at least one parameter, or wherein the user interface comprises a video player panel for viewing the stereoscopic scene, or wherein the user interface comprises an interactive panel showing a value of at least one parameter, and wherein the user interface comprises a video player panel for viewing the stereoscopic scene.
8. The system of claim 1, wherein the viewpoint of the viewer comprises two points representing eyes of the viewer, and the representation comprises information representing a perspective from each point.
9. The system of claim 1, wherein the user interface includes means for blending vergence values before and after a cut, or means for changing vergence to draw attention to an object in the scene, or both means for blending vergence values before and after a cut and means for changing vergence to draw attention to an object in the scene.
10. In a computing environment, a method comprising:
providing a user interface (106) that displays a representation (440) of a stereoscopic scene and includes interactive mechanisms for changing parameter values, including a mechanism (444) for changing vergence parameter values, a mechanism (448) for changing dolly parameter values, a mechanism (446) for changing field of view parameter values, and a mechanism (450) for changing interocular distance parameter values;
receiving data corresponding to an interaction with one of the mechanisms; and
changing the parameter value corresponding to the mechanism.
11. The method of claim 10, further comprising, based on the parameter value change, outputting new stereoscopic video, outputting new parameter values for re-shooting a video scene, or changing at least some of the parameter values to adjust to a changed theater parameter value.
12. The method of claim 10, wherein the user interface includes a mechanism for changing a value of a proscenium arch parameter, and further comprising masking a portion of the stereoscopic scene based on the value of the proscenium arch parameter.
13. In a computing environment, a user interface (106) comprising means for interacting (444, 446, 448, 450) to change parameter values corresponding to parameters of a stereoscopic scene, including a vergence parameter, a dolly parameter, a field of view parameter, and an interocular distance parameter, the user interface (106) providing an overhead view of the scene, including a representation of the viewer's eyes, a representation of the screen, and an indication simulating what each of the viewer's eyes perceives on the screen.
14. The user interface of claim 13, further comprising a proscenium arch parameter having a value that is changeable through interaction with the user interface.
15. The user interface of claim 13, wherein the user interface is configured to interact through handles displayed within the top view of the scene, each handle being associated with a parameter and being movable relative to the scene, and wherein movement of a handle changes its position relative to the scene and correspondingly changes the parameter value of the associated parameter of that handle.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/485,179 | 2009-06-16 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1177274A (en) | 2013-08-16 |