US20160232704A9 - Apparatus and method for displaying an image of an object on a visual display unit
- Publication number
- US20160232704A9 (application US13/467,644)
- Authority
- US
- United States
- Prior art keywords
- display unit
- visual display
- image
- orientation
- light source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T15/00—3D [Three Dimensional] image rendering
        - G06T15/10—Geometric effects
          - G06T15/20—Perspective computation
- G—PHYSICS
  - G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    - G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
      - G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
        - G09G3/006—Electronic inspection or testing of displays and display drivers, e.g. of LED or LCD displays
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2200/00—Indexing scheme for image data processing or generation, in general
        - G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
        - G06T2219/20—Indexing scheme for editing of 3D models
          - G06T2219/2004—Aligning objects, relative positioning of parts
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
        - G06T2219/20—Indexing scheme for editing of 3D models
          - G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
- G—PHYSICS
  - G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    - G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
      - G09G2320/00—Control of display operating conditions
        - G09G2320/06—Adjustment of display parameters
          - G09G2320/0693—Calibration of display systems
Abstract
Method and apparatus for displaying an image of an object on a visual display unit, wherein the image that is shown on the visual display unit depends on a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
Description
- The invention relates to an apparatus and method for displaying an image of an object on a visual display unit.
- Such an apparatus and method is commonly known from the prior art and employed in the form of television sets, computer screens and similar devices. A problem with these known apparatuses and methods is that the image or images of the object shown on the visual display unit rarely convey the real-life sensation that looking at the true object provides. This applies particularly when showing images of non-Lambertian surface materials of an object, whose appearance depends both on the angle at which light strikes the surface and on the angle from which one views it.
- It is therefore an object of the invention to provide a method and apparatus in which the image of the object that is shown on the visual display unit provides an accurate match with looking directly at the real object.
- To promote the object of the invention a method and apparatus are proposed in accordance with one or more of the appended claims.
- In a first aspect of the invention the image of an object that is shown on the visual display unit depends on a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof. Surprisingly it has proven to be possible to provide convincing images already by taking account of the 3-D orientation of the visual display unit. Improved results are attainable when also account is taken of a position of a viewer's head or eyes in relation to the visual display unit, and best results are achievable when still further account is taken of a position of a light source or light sources at a location where the visual display unit is located.
- Whenever in this description mention is made of a light source or light sources this includes image based lighting, in which the entire environment is deemed to constitute a light source. Also reflections from the environment form a part thereof.
- There are several viable ways in which the method of the invention can be implemented. One preferred embodiment has the feature that the image of the object that is shown on the visual display unit is calculated from a representation of the object, the calculation taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
- In yet another embodiment the image of the object that is shown on the visual display unit is selected from a database comprising a series of images of the object, wherein the selected image provides a best fit with seeing the object in real life, taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
- If in this embodiment one desires to limit the number of stored images, it is preferable that the image of the object that is shown on the visual display unit is calculated as an interpolation of the images of the object that come closest to seeing the object in real life, taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
- As mentioned above the invention is embodied in a method and in an apparatus that operates in accordance with said method. Such an apparatus for displaying an image of an object is known to comprise a handheld computer with an integrated visual display unit. It is also known from the prior art that such a computer may be provided with (first) means to detect its 3-D orientation.
- In accordance with the invention such an apparatus is embodied in a way that the computer is loaded with software that cooperates with said (first) means for detecting the 3-D orientation of the visual display unit to arrange that the image of the object that is shown on the visual display unit depends on the 3-D orientation of the visual display unit.
- Preferably the apparatus is provided with second means to establish a position of a viewer's head or eyes in relation to the visual display unit, and that the software cooperates with said second means to arrange that the image of the object that is shown on the visual display unit depends on the established position of a viewer's head or eyes in relation to the visual display unit.
- Still further preferably the apparatus is provided with third means to estimate a position of a light source or light sources at a location where the visual display unit is located, and that the software cooperates with said third means to arrange that the image of the object that is shown on the visual display unit depends on the estimated position of the light source or light sources.
- It has proven possible to provide smooth images of an object with a true-to-life experience already when the software operates in a continuous loop at a frequency of approximately 30 Hz. Preferably the operating frequency is 60 Hz.
- The invention will hereinafter be further elucidated with reference to the drawing of an exemplary embodiment of the invention, which does not limit the appended claims.
- In the drawing:
  - FIG. 1 shows a viewer looking at a tablet computer embodied with software in accordance with the invention;
  - FIG. 2 shows a flow diagram embodying the method of the invention that may be implemented in the software for the handheld computer;
  - FIG. 3 shows graphs representing some mathematical considerations pertaining to a possible implementation of a method to draw an image based on a light source configuration and a viewer's position, which method forms part of the method according to the flow diagram of FIG. 2;
  - FIG. 4a shows a scheme for the collection of photographs of materials to be displayed on the handheld computer;
  - FIG. 4b shows the perspective transformation of a photograph of a sample material attached to a flat board to a fronto-parallel view of the board of FIG. 4a;
  - FIG. 4c shows computing the normal of the board that holds the material sample; and
  - FIG. 4d shows a Delaunay triangulation of the space of board normals, used for interpolating between photographs in a possible implementation to provide a best match taking account of the device orientation, light configuration and viewer position.
- Whenever in the figures the same reference numerals are applied, these numerals refer to the same parts.
- With reference first to FIG. 1, the apparatus of the invention for displaying an image of an object is shown and indicated with reference 1. This apparatus is preferably embodied as a handheld computer 1 with an integrated visual display unit at which a viewer 3 may be looking in a manner that is known per se. The computer 1 is preferably provided with first means to detect its 3-D orientation, which means are symbolized with the part that is carrying reference 2. The handheld computer 1 is further loaded with software that cooperates with said first means 2 for detecting the 3-D orientation of the visual display unit that forms part of the computer 1, in order to arrange that the image of the object that is shown on the visual display unit will depend on the 3-D orientation of the visual display unit of the computer 1.
- Preferably the handheld computer 1 is provided with second means 4 to establish a position of the viewer's 3 head or eyes in relation to the visual display unit of the computer 1, and the software cooperates with said second means 4 to arrange that the image of the object that is shown on the visual display unit depends on the established position of the viewer's 3 head or eyes in relation to the visual display unit of the computer 1.
- Still further preferably the computer 1 is provided with third means 5 to estimate a position of a light source 6 or light sources at a location where the computer's visual display unit is located, and the software cooperates with said third means 5 to arrange that the image of the object that is shown on the visual display unit depends on the estimated position of the light source 6 or light sources. This provides the possibility to improve the lighting and shading effects in the image shown.
- Making reference now to FIG. 2, the method of the invention according to which the software preferably operates will now be elucidated.
- In this method the image that is shown on the visual display unit depends on a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
- As a first step, square 7 relates to the determination of the 3-D orientation of the computer 1 and its visual display unit, making use of the first means 2 as elucidated with reference to FIG. 1. Optionally, in diamond 8 it is then established whether it is also possible to keep track of the viewer's 3 head or eyes making use of the second detecting means 4 shown in FIG. 1. In the affirmative case the position of the viewer's head or eyes can be taken into account in square 9 when determining the relative position of the visual display unit in relation to the viewer 3. In the negative case a fixed head position is assumed.
- It is possible to track the head or eyes of the user 3 if a camera 4 facing the user 3 is integrated in the display device 1. This camera 4 embodies the second means to establish the position of the viewer's 3 head or eyes in relation to the visual display unit of the computer 1. As an example of the manner in which the camera 4 can be used to detect the user's face and keep track thereof, reference is made to Rapid Object Detection using a Boosted Cascade of Simple Features [Lit. 13]. This technique is implemented in the OpenCV library http://sourceforge.net/projects/opencvlibrary/ [Lit. 14]. A different technique, which also tracks the eye position and gaze direction, is described in e.g. Visual Gaze Estimation by Joint Head and Eye Information [Lit. 15].
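- By way of illustration, the detection step cited above ([Lit. 13], as shipped with the OpenCV library [Lit. 14]) can be sketched in a few lines of Python. The camera index and the reduction of a detected face rectangle to a head position are assumptions of this sketch, and the gaze estimation of [Lit. 15] is not covered.

```python
import cv2

# Viola-Jones face detector [Lit. 13] using the frontal-face cascade
# bundled with OpenCV [Lit. 14].
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumed index of the user-facing camera 4
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        # The face centre approximates the viewer's head position in the
        # image; the face width w gives a rough cue for viewing distance.
        head_x, head_y = x + w / 2.0, y + h / 2.0
cap.release()
```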
- As a further option, diamond 10 concerns the question whether the third means 5 shown in FIG. 1 are enabled for establishing or estimating the position of a light source 6. If the third means 5 are not enabled, the software operates as if a predetermined fixed position of a virtual light source applies, as indicated in square 11. If however the third means 5 are enabled, square 12 indicates that account is taken of the position of this light source 6 in the displaying of the image of the object on the visual display unit of the computer 1.
- For environmental lighting it is possible to use a type of camera which is commonly integrated in known handheld computers. Such a camera is used to observe the intensity of the illumination of the environment, and this illumination intensity observation may be used to light the virtual material on the display of the handheld computer 1. Preferably a fish-eye camera with a near 180 degree field of view is used. If such a camera is not available on the handheld computer 1, individual images can be stitched into a panorama of the environment following e.g. Image Alignment and Stitching [Lit. 16]. Preferably, high dynamic range imaging techniques are used, such as those described in High Dynamic Range Imaging [Lit. 17].
- If available, cameras integrated into the front and into the back of the handheld computer 1 should be used to get a full 360 degree representation of the environment. Light coming from the back side of the handheld computer 1 may even be used to realize transparency effects as well as sub-surface scattering effects.
- The panorama of the surroundings of the handheld computer 1 can be used to light the virtual material that is displayed on the visual display unit of the computer 1. The simplest environmental lighting effect is reflection of the environment in the virtual material on the display, but more advanced effects are possible, such as those described in High Dynamic Range Imaging [Lit. 17].
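- By way of illustration, OpenCV's high-level stitching pipeline (built on techniques in the spirit of [Lit. 16]) can assemble such an environment panorama from individual frames; the frame file names below are placeholders.

```python
import cv2

# Hypothetical frames captured while sweeping the handheld device around.
frames = [cv2.imread(f"env_{i:02d}.jpg") for i in range(8)]

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    # The panorama can now serve as an environment map for lighting the
    # virtual material (reflection mapping or image-based lighting).
    cv2.imwrite("environment_panorama.jpg", panorama)
```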
- Diamond 13 deals with the selection of the operational method in which the image to be displayed on the visual display unit of the handheld computer 1 is determined.
- Square 14 relates to the embodiment in which the image that is shown on the visual display unit is calculated from a representation of the object, the calculation taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
- Square 14 of the flow diagram in FIG. 2 may be realized in different ways. In the following, an example of a possible algorithm is described in detail. The described method concerns the display of a plastic-like material with a relief (for example, the shape of a user interface button) embossed into it. It is possible however to implement Square 14 with many other known algorithms, see e.g. [Lit. 6, 7].
- To simulate the interaction of light with the plastic material, the lighting computations may be based on the Blinn-Phong shading model [Lit. 1]. This model is known as the default shading model used in the computer graphics software libraries OpenGL [Lit. 2] and Direct3D [Lit. 3].
- It is preferred to use such models that combine ambient, diffuse and specular shading terms. These terms are weighted by scalar factors wa, wd and ws, respectively. Other parameters are the red, green and blue (RGB) color vector C of the material and the scalar shininess s of the material. The RGB color vector W represents the color of the illumination, which for reasons of simplicity we assume to be white.
- The local geometry involved in the shading calculations for each pixel is illustrated in FIG. 3a. For simplicity, a single light source 6 is assumed that is positioned above the material 17 to be visualized. The light is characterized by a unit direction vector L, pointing from the material 17 towards the light 6. The position of the viewer 3 is characterized by a unit direction vector V, pointing from the material 17 towards the viewer 3.
- The embossing in the plastic is represented by a normal map [Lit. 4]. This map defines the surface normal of the material to be displayed at each pixel on the visual display unit of the computer 1. FIG. 3b is an illustration of a cross section of a normal map representing a rounded button.
- The orientation of the handheld computer 1 is represented by a 3×3 rotation matrix M. FIG. 3c shows the axes of the coordinate frame with reference to the handheld computer 1. The z-axis is orthogonal to the display of the computer 1; the x- and y-axes are aligned with the edges of the computer 1.
- A new image to be displayed on the handheld computer 1 is calculated continuously. Each calculation comprises the following steps (a code sketch of these steps is given after the list):
- 1) Measure the device orientation M using the first means 2 (see FIG. 1) for detecting the 3-D orientation, integrated into the handheld computer 1. Said first means 2 may be a gyroscope or an accelerometer.
- 2) For each pixel i on the visual display unit of the computer 1, compute the intensity Ii as follows:
  - i) Retrieve the normal R corresponding to the pixel i from the normal map.
  - ii) Transform the normal to world coordinates: N = M R.
  - iii) Compute the diffuse intensity: Id = max(N · L, 0).
  - iv) Compute the unit halfway vector: H = (V + L) / |V + L|.
  - v) Compute the specular intensity: Is = pow(max(N · H, 0), s).
  - vi) Compute the pixel intensity by weighting and summing the terms: Ii = wa C + wd Id C + ws Is W.
  - vii) Store the pixel intensity in the image.
- 3) Present the computed image on the visual display unit of the handheld computer 1.
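- For illustration, the per-pixel loop above vectorizes naturally. The following is a minimal NumPy sketch of steps i)-vii) under the assumptions stated above (a single white light source and a per-pixel normal map); it is one possible rendition, not a normative implementation.

```python
import numpy as np

def blinn_phong_frame(M, normal_map, L, V, C, W, wa, wd, ws, s):
    """Shade one frame. M: 3x3 device rotation; normal_map: (h, w, 3)
    per-pixel normals R; L, V: unit vectors towards light and viewer;
    C, W: RGB material and light colours; wa, wd, ws: ambient, diffuse
    and specular weights; s: shininess."""
    N = normal_map @ M.T                 # step ii: N = M R, for all pixels
    Id = np.maximum(N @ L, 0.0)          # step iii: Id = max(N·L, 0)
    H = (V + L) / np.linalg.norm(V + L)  # step iv: unit halfway vector
    Is = np.maximum(N @ H, 0.0) ** s     # step v: Is = pow(max(N·H, 0), s)
    # steps vi-vii: Ii = wa C + wd Id C + ws Is W, stored per pixel
    return wa * C + wd * Id[..., None] * C + ws * Is[..., None] * W
```

- In a continuous loop, this function would simply be called with the freshly measured orientation M each frame.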
- Square 15 relates to the embodiment in which the image that is shown on the visual display unit of the computer 1 is selected from a database comprising a series of images of the object to be displayed, wherein the selected image provides a best fit with seeing the object in real life, again taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit of the computer 1, a position of a viewer's head or eyes in relation to said visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
- Preferably, in this embodiment the image of the object that is shown on the visual display unit is calculated as an interpolation of the stored images of the object that come closest to seeing the object in real life.
- It is remarked that the calculation loop is closed with line 16, which reflects that the software embodying the method of the invention preferably operates in a continuous loop at a frequency of approximately 30 Hz, and more preferably 60 Hz.
- One possible way of implementing Square 15 (image-based rendering) of the flow diagram in FIG. 2 is presented below. This example consists of three steps: obtaining a representative set of photographs of the sample material to be displayed (such as a piece of fabric), pre-processing the photographs, and displaying the processed photographs on the display of the handheld computer 1.
- For image-based rendering, a set of representative photographs of the sample material to be displayed is required. These photographs capture the material under a large number of viewing angles and lighting conditions. It is remarked that such a set of photos is just one possible way to obtain the bidirectional texture function [Lit. 8] of the material to be displayed. In this example, only the viewing angle is varied. For reasons of simplicity, the lighting setup is kept static.
- To obtain the photographs, FIG. 4a shows a high-resolution photo-camera 18. The photo-camera 18 must be calibrated (see e.g. [Lit. 9]), as the focal length, sensor size, sensor center and distortion of the camera 18 must be known in order to correctly process the photographs. A 3×3 camera calibration matrix K [Lit. 10] represents the focal length and sensor center of the camera 18.
- For obtaining the photographs, the sample material 19 is attached to a flat board 20. The board 20 is fixed to a motorized device 21 that can rotate the material 19 into the desired orientations. Such a motorized device 21 can be a generic robot arm or a purpose-built device. The camera 18 is placed on a tripod 22 facing the board 20. Studio lighting 23 is used to light the sample material 19 as desired. A computer 24 controls the camera 18, the motorized device 21 and the illumination of the scene.
- Software running on the computer 24 instructs the device 21 to rotate the board 20 to each of the desired orientations, after which the camera 18 takes a photograph. The photographs are stored for subsequent processing as described below.
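- In outline, the control software on the computer 24 reduces to the loop sketched below; `rig` and `camera` are hypothetical driver objects standing in for whatever interface the motorized device 21 and the photo-camera 18 actually expose.

```python
def capture_dataset(rig, camera, orientations):
    """Photograph the sample 19 at each desired board orientation.
    rig.rotate_to and camera.take_photo are hypothetical driver calls."""
    photos = []
    for orientation in orientations:
        rig.rotate_to(orientation)          # motorized device 21
        photos.append(camera.take_photo())  # high-resolution camera 18
    return photos
```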
- The processing step described in this section extracts an area from each of the photographs and aligns the extracted areas, as follows.
- First, possible pin-cushion or barrel distortion (known from the camera calibration) is corrected for in each photograph. In this example, the board 20 has high-contrast square edges 25 and high-contrast markers, as these simplify subsequent processing.
- In the photograph, the square is imaged as a quadrilateral. The four corners ci of this quadrilateral (see FIG. 4b) are detected. The corners ci are represented by 2D homogeneous points (3-vectors). The quadrilateral is then mapped to a square, fronto-parallel view of the sample 19 area surrounded by the high-contrast markers. For this purpose, a homography (a 3×3 matrix) H is computed that satisfies H ci ∝ ti, where ti represents the homogeneous coordinates of the corners of a square target image (the symbol ∝ denotes "proportional to"). The computed homography is used to transform each pixel in the sample area to the fronto-parallel view. The resulting image is stored in a texture image T.
- Even though the corners of the square edge can be detected with high precision, the alignment of different photographs may not be perfect. Therefore, the high-contrast markers within the square edge 25 are used to further align the fronto-parallel views obtained from the photographs. To optimize all photographs at once, a bundle adjustment optimization procedure [Lit. 10] can be used.
- The four corners ci are also used to compute the normal of the board in the photo-camera frame [Lit. 10]; see FIG. 4c for an illustration. Here, the four corners define two orthogonal directions in the plane of the board. The two vanishing points v1 and v2 of these directions are computed. The horizon of the board plane is then computed as h = v1 × v2, from which the normal ni of the board is computed as ni = transpose(K) h, where K is the 3×3 camera calibration matrix.
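- A possible rendition of this pre-processing in Python with OpenCV and NumPy is sketched below; it assumes the four corners ci have already been detected (e.g. via the high-contrast markers) and picks an arbitrary output resolution for the texture T.

```python
import cv2
import numpy as np

def preprocess_photo(photo, K, dist, corners, size=512):
    """corners: the four corners ci of the imaged board square, ordered
    top-left, top-right, bottom-right, bottom-left, as detected in the
    undistorted photograph. K, dist: camera calibration [Lit. 9]."""
    img = cv2.undistort(photo, K, dist)  # remove pin-cushion/barrel distortion

    # Homography H with H ci ∝ ti: warp the imaged quadrilateral to the
    # square, fronto-parallel texture image T.
    src = np.float32(corners)
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    H = cv2.getPerspectiveTransform(src, dst)
    T = cv2.warpPerspective(img, H, (size, size))

    # Board normal from vanishing points: opposite edges meet in v1 and
    # v2, the horizon is h = v1 × v2, and the normal is ni = transpose(K) h.
    c = [np.array([x, y, 1.0]) for x, y in corners]
    v1 = np.cross(np.cross(c[0], c[1]), np.cross(c[3], c[2]))
    v2 = np.cross(np.cross(c[1], c[2]), np.cross(c[0], c[3]))
    h = np.cross(v1, v2)
    n = K.T @ h
    return T, n / np.linalg.norm(n)
```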
- The processed images Ti, including the normal ni of each image, are stored. The following step describes how the material is displayed using these stored images.
- The view of the material displayed on the visual display unit of the handheld computer 1 is updated frequently, for instance 60 times per second. In this example the computer rotation is restricted to the x- and y-axes of the device (see FIG. 3c for an illustration of the computer coordinate frame). This implies that the device normal (the z-axis) can be used to encode the orientation of the handheld computer 1.
FIG. 3c ) of thecomputer 1 in the world coordinate frame. The normal is used to retrieve the three neighbouring images Ti, Tj and Tk from the database of images. This can be done using a Delaunay triangulation [Lit. 11] of the space of board normals, seeFIG. 4 d. The Delaunay triangulation connects the board normals corresponding to each photograph to form a mesh of triangles. It is assumed the handheld computer normal d is contained by one of these triangles. This triangle is bounded by the normal ni, nj and nk corresponding to images Ti, Tj and Tk. - A weight w in the range [0, 1] is assigned to each image T based on the distance of each normal n to the device normal d, using the barycentric coordinates [Lit. 12]. The weights are such that wi+wj+wk=1.
- The images Ti, Tj and Tk are retrieved from memory and interpolated using the weights (i.e., wi Ti+wj Tj+wk Tk). The interpolated image is subsequently presented on the display of the
handheld computer 1. -
- [Lit. 1] James F. Blinn (1977). Models of light reflection for computer synthesized pictures. Proc. 4th annual conference on computer graphics and interactive techniques.
- [Lit. 2] Dave Shreiner, The Khronos OpenGL ARB Working Group: OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 3.0 and 3.1, 7th Edition, Addison-Wesley, Jul. 21, 2009
- [Lit. 3] Rob Glidden, Graphics Programming with Direct3D, Addison Wesley Longman, 1997
- [Lit. 4] Krishnamurthy and Levoy, Fitting Smooth Surfaces to Dense Polygon Meshes, SIGGRAPH 1996
- [Lit. 6] James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes. Computer Graphics: Principles and Practice in C. Addison-Wesley Professional; 2nd edition, 1995.
- [Lit. 7] Tomas Akenine-Moller, Eric Haines, Naty Hoffman. Real-Time Rendering, 3rd edition. A K Peters, 2008.
- [Lit. 8] Julie Dorsey, Holly Rushmeier, François Sillion. Digital Modeling of Material Appearance. The Morgan Kaufmann Series in Computer Graphics, Dec. 20, 2007.
- [Lit. 9] Z. Zhang. A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.
- [Lit. 10] Richard Hartley and Andrew Zisserman. Multiple View Geometry. 2003, second edition, Cambridge University Press.
- [Lit. 11] Mark de Berg, Otfried Cheong, Marc van Kreveld, Mark Overmars. Computational Geometry: Algorithms and Applications. 3rd edition. Springer 2008.
- [Lit. 12] Christer Ericson. Real-Time Collision Detection. The Morgan Kaufmann Series in Interactive 3-D Technology, Jan. 5, 2005.
- [Lit. 13] Rapid Object Detection using a Boosted Cascade of Simple Features. Paul Viola and Michael Jones. IEEE Conference on Computer Vision and Pattern Recognition, 2001.
- [Lit. 14] OpenCV software library. http://sourceforge.net/projects/opencvlibrary/
- [Lit. 15] Visual Gaze Estimation by Joint Head and Eye Information. Roberto Valenti and Theo Gevers. IEEE Conference on Computer Vision and Pattern Recognition, 2008.
- [Lit. 16] Image Alignment and Stitching: a Tutorial. Richard Szeliski (Microsoft Research), 2006.
- [Lit. 17] High Dynamic Range Imaging, Second Edition: Acquisition, Display, and Image-Based Lighting. Erik Reinhard, Wolfgang Heidrich, Paul Debevec, Sumanta Pattanaik, Greg Ward, Karol Myszkowski. Morgan Kaufmann; 2nd edition (Jun. 8, 2010).
Claims (12)
1. Method for displaying an image of an object on a visual display unit, characterized in that the image of the object that is shown on the visual display unit depends on a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
2. Method according to claim 1, characterized in that the image of the object that is shown on the visual display unit is calculated from a representation of the object, the calculation taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
3. Method according to claim 1, characterized in that the image of the object that is shown on the visual display unit is selected from a database comprising a series of images of the object, wherein the selected image of the object provides a best fit with seeing the object in real life, taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
4. Method according to claim 3, characterized in that the image of the object that is shown on the visual display unit is calculated as an interpolation of images of the object that come closest to seeing the object in real life, taking into account a parameter or parameters that is/are selected from the group comprising the 3-D orientation of the visual display unit, a position of a viewer's head or eyes in relation to the visual display unit, and a position of a light source or light sources at a location where the visual display unit is located, or a combination thereof.
5. Apparatus for displaying an image of an object, comprising a handheld computer with an integrated visual display unit, wherein said computer is provided with first means to detect its 3-D orientation, characterized in that the computer is loaded with software that cooperates with said first means for detecting the 3-D orientation of the visual display unit to arrange that the image of the object that is shown on the visual display unit depends on the 3-D orientation of the visual display unit.
6. Apparatus according to claim 5, characterized in that it is provided with second means to establish a position of a viewer's head or eyes in relation to the visual display unit, and that the software cooperates with said second means to arrange that the image of the object that is shown on the visual display unit depends on the established position of a viewer's head or eyes in relation to the visual display unit.
7. Apparatus according to claim 5, characterized in that it is provided with third means to estimate a position of a light source or light sources at a location where the visual display unit is located, and that the software cooperates with said third means to arrange that the image of the object that is shown on the visual display unit depends on the estimated position of the light source or light sources.
8. Apparatus according to claim 5, characterized in that the software operates in a continuous loop at a frequency of approximately 30 Hz.
9. Apparatus according to claim 6, characterized in that it is provided with third means to estimate a position of a light source or light sources at a location where the visual display unit is located, and that the software cooperates with said third means to arrange that the image of the object that is shown on the visual display unit depends on the estimated position of the light source or light sources.
10. Apparatus according to claim 6, characterized in that the software operates in a continuous loop at a frequency of approximately 30 Hz.
11. Apparatus according to claim 7, characterized in that the software operates in a continuous loop at a frequency of approximately 30 Hz.
12. Apparatus according to claim 9, characterized in that the software operates in a continuous loop at a frequency of approximately 30 Hz.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2007516 | 2011-05-11 | ||
NL2006762A NL2006762C2 (en) | 2011-05-11 | 2011-05-11 | Apparatus and method for displaying an image of an object on a visual display unit. |
NLNL2006762 | 2011-05-11 | ||
NLNL2007516 | 2011-09-30 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130009982A1 (en) | 2013-01-10 |
US20160232704A9 (en) | 2016-08-11 |
Family
ID=47438396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/467,644 (US20160232704A9 (en), abandoned) | Apparatus and method for displaying an image of an object on a visual display unit | 2011-05-11 | 2012-05-09 |
Country Status (1)
Country | Link |
---|---|
US (1) | US20160232704A9 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017015507A1 (en) | 2015-07-21 | 2017-01-26 | Dolby Laboratories Licensing Corporation | Surround ambient light sensing, processing and adjustment |
US10999528B2 (en) * | 2017-12-28 | 2021-05-04 | Gopro, Inc. | Image capture device with interchangeable integrated sensor-optical component assemblies |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020113865A1 (en) * | 1997-09-02 | 2002-08-22 | Kotaro Yano | Image processing method and apparatus |
US6243076B1 (en) * | 1998-09-01 | 2001-06-05 | Synthetic Environments, Inc. | System and method for controlling host system interface with point-of-interest data |
US6445762B1 (en) * | 2001-11-26 | 2002-09-03 | Ge Medical Systems Global Technology Company, Llc | Methods and apparatus for defining regions of interest |
US7792389B2 (en) * | 2005-08-10 | 2010-09-07 | Seiko Epson Corporation | Image processing content determining apparatus, computer readable medium storing thereon image processing content determining program and image processing content determining method |
US8086330B1 (en) * | 2007-04-25 | 2011-12-27 | Apple Inc. | Accessing accelerometer data |
JP5342761B2 (en) * | 2007-09-11 | 2013-11-13 | プロメテック・ソフトウェア株式会社 | Surface construction method of fluid simulation based on particle method, program thereof, and storage medium storing the program |
WO2009121775A2 (en) * | 2008-04-01 | 2009-10-08 | Bauhaus-Universität Weimar | Method and illumination device for optical contrast enhancement |
JP4435867B2 (en) * | 2008-06-02 | 2010-03-24 | パナソニック株式会社 | Image processing apparatus, method, computer program, and viewpoint conversion image generation apparatus for generating normal line information |
US9092053B2 (en) * | 2008-06-17 | 2015-07-28 | Apple Inc. | Systems and methods for adjusting a display based on the user's position |
- 2012-05-09: US application US13/467,644 filed; published as US20160232704A9 (en); status: abandoned.
Also Published As
Publication number | Publication date |
---|---|
US20130009982A1 (en) | 2013-01-10 |
Similar Documents
Publication | Title |
---|---|
JP6560480B2 (en) | Image processing system, image processing method, and program |
US9489775B1 (en) | Building a three-dimensional composite scene |
JP5430565B2 (en) | Electronic mirror device |
Gruber et al. | Real-time photometric registration from arbitrary geometry |
EP3057066B1 (en) | Generation of three-dimensional imagery from a two-dimensional image using a depth map |
JP5093053B2 (en) | Electronic camera |
EP2546806B1 (en) | Image based rendering for AR - enabling user generation of 3D content |
TWI496108B (en) | AR image processing apparatus and method |
CN106133796A (en) | Method and system for representing virtual objects in a view of a real environment |
JP7006810B2 (en) | 3D measuring device, mobile robot, push wheel type moving device and 3D measurement processing method |
JP7473558B2 (en) | Generating Textured Models Using a Moving Scanner |
JP5361758B2 (en) | Image generation method, image generation apparatus, and program |
AU2019201822A1 (en) | BRDF scanning using an imaging capture system |
US20190066366A1 (en) | Methods and Apparatus for Decorating User Interface Elements with Environmental Lighting |
CN111340959B (en) | Three-dimensional model seamless texture mapping method based on histogram matching |
US20160232704A9 (en) | Apparatus and method for displaying an image of an object on a visual display unit |
JP2004030408A (en) | Three-dimensional image display device and display method |
US11562537B2 (en) | System for rapid digitization of an article |
JPH09138865A (en) | Three-dimensional shape data processor |
JP7463697B2 (en) | Gloss acquisition state calculation device, gloss acquisition state calculation method, gloss acquisition state calculation program, terminal, and gloss acquisition state display program |
Debevec et al. | Digitizing the parthenon: Estimating surface reflectance under measured natural illumination |
JP2004170277A (en) | 3-dimensional measurement method, 3-dimensional measurement system, image processing apparatus, and computer program |
Wen et al. | A low-cost, user-friendly, and real-time operating 3D camera |
Jiddi | Photometric registration of indoor real scenes using an RGB-D camera with application to mixed reality |
WO2022266719A1 (en) | A method for generating a shimmer view of a physical object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: EUCLID VISION TECHNOLOGIES B.V., NETHERLANDS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: FONTIJNE-DIJKMAN, DANIEL; REEL/FRAME: 033681/0274. Effective date: 2014-07-22 |
| AS | Assignment | Owner name: QUALCOMM TECHNOLOGIES, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: EUCLID VISION TECHNOLOGIES B.V.; REEL/FRAME: 036230/0775. Effective date: 2015-07-30 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |