
CN112634339B - Commodity object information display method and device and electronic equipment - Google Patents


Info

Publication number
CN112634339B
CN112634339B (application CN201910906733.4A)
Authority
CN
China
Prior art keywords
original
view angle
change value
information
angle
Prior art date
Legal status: Active (assumption; not a legal conclusion)
Application number
CN201910906733.4A
Other languages
Chinese (zh)
Other versions
CN112634339A (en)
Inventor
高博
王立波
李晓波
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910906733.4A priority Critical patent/CN112634339B/en
Publication of CN112634339A publication Critical patent/CN112634339A/en
Application granted granted Critical
Publication of CN112634339B publication Critical patent/CN112634339B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping
    • G06Q 30/0643 Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping graphically representing goods, e.g. 3D product representation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the application disclose a commodity object information display method and apparatus, and an electronic device. The method includes: obtaining motion data of an associated terminal device while displaying original image information of a commodity object, the original image information being captured from the physical commodity object at an original viewing angle and including depth information; determining, from the motion data, a viewing-angle change value of the user's current viewing angle relative to the original viewing angle; and generating and displaying a new-viewing-angle image of the commodity object according to the viewing-angle change value and the depth information. Through these embodiments, interaction with the user can be achieved at relatively low cost.

Description

Commodity object information display method and device and electronic equipment
Technical Field
The present application relates to the field of information display technologies, and in particular, to a method and an apparatus for displaying information of a commodity object, and an electronic device.
Background
In a commodity object information service system, an information page is usually provided for each commodity object. The display forms within such a page mainly include pictures, videos, and three-dimensional models, with pictures being the most common. In a specific implementation, the displayed commodity object information can be provided by a merchant, or the merchant can hand the physical commodity object to professional photographers within the system for shooting, after which the information is published to pages such as the detail page for display.
Pictures and videos are easy to shoot, requiring only ordinary camera equipment, but once shot, the content they present is fixed and the user cannot interact with the commodity object. A display form based on a three-dimensional model, by contrast, does allow interaction: the user can change the viewing angle by rotating the terminal device, for example, and inspect the details of the commodity object from all angles. However, this mode generally requires professional shooting equipment as well as three-dimensional materials, so the production cost is high and the approach is hard to popularize.
Therefore, how to implement interaction with a user at lower cost becomes a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The application provides a commodity object information display method, a commodity object information display device and electronic equipment, which can realize interaction with a user at lower cost.
The application provides the following scheme:
a merchandise object information display method, comprising:
acquiring motion data of associated terminal equipment in the process of displaying original image information of commodity objects; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
And generating a new view angle image of the commodity object according to the view angle change value and the depth information, and displaying the new view angle image.
A merchandise object information display method, comprising:
Obtaining original image information of a commodity object, wherein the original image information is obtained by collecting a physical object of the commodity object under an original view angle, and comprises depth information;
selecting a plurality of different view angles by shifting the original view angles;
generating images at the different viewing angles according to the viewing angle offset of the different viewing angles relative to the original viewing angle and the depth information;
In the information page of the commodity object, the original view angle image and the images under the plurality of different view angles are provided.
A scene information presentation method, comprising:
obtaining original image information of a target scene, wherein the original image information is obtained by acquiring the target scene under an original view angle and comprises depth information;
In the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new view angle image of the target scene according to the view angle change value and the depth information, and displaying the new view angle image.
A merchandise object information display device, comprising:
A motion data obtaining unit for obtaining motion data of the associated terminal device in the process of displaying the original image information of the commodity object; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
a viewing angle change value determining unit, configured to determine a viewing angle change value of a current viewing angle of a user with respect to an original viewing angle according to the motion data;
And the new view angle image generation unit is used for generating and displaying the new view angle image of the commodity object according to the view angle change value and the depth information.
A merchandise object information display device, comprising:
An original image obtaining unit, configured to obtain original image information of a commodity object, where the original image information is obtained by collecting a physical object of the commodity object under an original viewing angle, and the original image information includes depth information;
The visual angle selecting unit is used for selecting a plurality of different visual angles in a mode of shifting the original visual angle;
An image generation unit configured to generate images at the different perspectives according to the perspective offsets of the different perspectives with respect to the original perspective, respectively, and the depth information;
And the image display unit is used for providing the original visual angle image and the images under the plurality of different visual angles in the information page of the commodity object.
A scene information presentation apparatus comprising:
An initial image obtaining unit, configured to obtain original image information of a target scene, where the original image information is obtained by acquiring the target scene under an original viewing angle, and includes depth information;
The motion data obtaining unit is used for obtaining motion data of the associated terminal equipment in the process of displaying the target scene information;
a viewing angle change value determining unit, configured to determine a viewing angle change value of a current viewing angle of a user with respect to an original viewing angle according to the motion data;
And the new view angle image generation unit is used for generating and displaying a new view angle image of the target scene according to the view angle change value and the depth information.
An electronic device, comprising:
One or more processors; and
A memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
acquiring motion data of associated terminal equipment in the process of displaying original image information of commodity objects; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
And generating a new view angle image of the commodity object according to the view angle change value and the depth information, and displaying the new view angle image.
An electronic device, comprising:
One or more processors; and
A memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
Obtaining original image information of a commodity object, wherein the original image information is obtained by collecting a physical object of the commodity object under an original view angle, and comprises depth information;
selecting a plurality of different view angles by shifting the original view angles;
generating images at the different viewing angles according to the viewing angle offset of the different viewing angles relative to the original viewing angle and the depth information;
In the information page of the commodity object, the original view angle image and the images under the plurality of different view angles are provided.
An electronic device, comprising:
One or more processors; and
A memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
obtaining original image information of a target scene, wherein the original image information is obtained by acquiring the target scene under an original view angle and comprises depth information;
In the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new view angle image of the target scene according to the view angle change value and the depth information, and displaying the new view angle image.
According to the specific embodiment provided by the application, the application discloses the following technical effects:
In the embodiments of the application, a commodity object image shot at one specific angle, together with the depth information in that image, is used to recover the three-dimensional structure of the object from a simple two-dimensional image, thereby enabling interaction with the user: the user can change the viewing angle by rotating the terminal device, and the system generates images at additional viewing angles for the user. The embodiments of the application thus enable interaction with users at lower cost.
Of course, it is not necessary for any one product to practice the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 3 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 4 is a flow chart of a second method provided by an embodiment of the present application;
FIG. 5 is a flow chart of a third method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
In the embodiments of the application, in order to realize interaction with the user at low cost, an image of a specific object (for example, the physical object corresponding to a commodity object) is first shot and a depth map for the image is obtained, so that the three-dimensional structure of the scene can be recovered based on the depth information. In this way, a single picture shot at a certain viewing angle can be used to generate new-viewing-angle images within a certain range of variation, yielding a three-dimensional effect and enabling interaction with the user. That is, the user can change the viewing angle by rotating a mobile terminal device such as a mobile phone, and view images at more viewing angles.
Specifically, the solution provided in the embodiment of the present application may be applied to various specific application systems, for example, in a merchandise object information service system, as shown in fig. 1, a client (including an independent application program, or existing in the form of a web page, etc.) and a server may be generally provided for a user, and specific merchandise object information may be issued through the server and displayed to the user through the client. In the scheme adopted by the embodiment of the application, the commodity object image information released in the server side can be the image information comprising the depth information, and the image only needs to be a single picture of the commodity object under a certain visual angle and obtain the depth information contained in the single picture; in addition, a function of restoring a three-dimensional structure of a scene based on depth information in an image may be implemented in a client. In this way, in the process that the user views the image information of the specific commodity object through the client, the user can initiate interaction by rotating the terminal equipment such as the mobile phone, the client can determine the visual angle change value of the user according to the motion data of the terminal equipment, and further recover the three-dimensional structure of the commodity object according to the specific visual angle change value and the depth information in the original image, and display the image under the new visual angle, so that the interaction with the user is realized.
The following describes in detail the specific implementation manner provided by the embodiment of the present application.
Example 1
First, the first embodiment provides a method for displaying information of a commodity object, referring to fig. 2, the method specifically may include:
S210: acquiring motion data of associated terminal equipment in the process of displaying original image information of commodity objects; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
The original image information of the commodity object may be shot by the publisher of the commodity object (for example, a merchant user) and published into the system; alternatively, the merchant user may provide the physical commodity object to background staff in the system, who shoot the original image and then publish it. Since the embodiments of the application place relatively low requirements on capturing the original image, the merchant user can generally take the photo directly. Several shooting modes are possible. In one mode, the commodity object is photographed with hardware such as a binocular (stereo) camera or a time-of-flight (TOF) depth camera. A binocular camera obtains a depth map by computing the disparity map of the left and right images, exploiting the principle that binocular disparity is inversely proportional to depth; for example, the dual-camera mobile phones now common on the market can capture depth maps, so a merchant user can simply choose a viewing angle and photograph the physical object with the phone's camera to obtain qualifying original image data. A TOF depth camera instead measures scene depth directly from the time of flight of light, so depth information for the captured image is likewise available. Even with an ordinary monocular camera, depth information can be obtained: the target object is photographed normally, and the depth is then predicted by a monocular depth estimation method based on deep learning.
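The stereo relationship just mentioned, that binocular disparity is inversely proportional to depth, can be illustrated with a small sketch (the function name and camera parameters below are illustrative, not from the patent):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a point from stereo disparity: Z = f * B / d.

    disparity_px: horizontal pixel offset of the point between the
    left and right images; a larger disparity means a closer point.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example with made-up camera parameters: 1000 px focal length and a
# 6 cm baseline, roughly the scale of a dual-camera phone.
z_near = depth_from_disparity(50.0, 1000.0, 0.06)  # 1.2 m
z_far = depth_from_disparity(10.0, 1000.0, 0.06)   # 6.0 m
```

Running this per pixel of the disparity map yields the depth map that the rest of the method consumes.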
In summary, image information with depth information can be obtained in a variety of ways. Note that regardless of the hardware used, the embodiments of the application require only one picture taken from a certain viewing angle, with no three-dimensional modeling. Although it carries depth information, the picture is still a two-dimensional picture; the subsequent steps reconstruct the three-dimensional structure of the object on top of it in order to provide images at more viewing angles.
In a specific implementation, the originally captured image may include both a foreground and a background, and the user usually cares more about the foreground. Therefore, in a preferred embodiment of the application, the background or unimportant areas of the original image may be blurred in preprocessing, improving the effect of the restored three-dimensional image. This blurring of the image background may be driven by the depth information: first, a global Gaussian blur is applied to the RGB color image; then the depth map is thresholded to separate foreground from background, yielding a foreground/background mask. The original image and the globally blurred image are fused according to the mask, giving a background-blurred image: the blurred image is used for the background area and the original image for the foreground area, and finally the foreground-background transition area is smoothed so that the overall effect is more natural. The background-blurred image can then be published in the system as the original image of the commodity object.
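The depth-driven background blurring described above might be sketched as follows. This is a simplified illustration that substitutes a plain neighborhood mean for the Gaussian blur and a hard depth threshold for the smoothed mask, so all names and parameters here are assumptions:

```python
import numpy as np

def blur_background(rgb, depth, depth_threshold, blur_radius=2):
    """Blend a globally blurred copy into regions whose depth exceeds
    the threshold, keeping the nearer foreground sharp."""
    h, w, _ = rgb.shape
    # Global blur: a direct neighborhood mean keeps the idea obvious
    # (a real implementation would use a separable Gaussian filter).
    blurred = np.empty_like(rgb, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - blur_radius), min(h, y + blur_radius + 1)
            x0, x1 = max(0, x - blur_radius), min(w, x + blur_radius + 1)
            blurred[y, x] = rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    # Mask: True where the pixel belongs to the (farther) background.
    background = depth > depth_threshold
    # Fuse: blurred copy for background pixels, original for foreground.
    return np.where(background[..., None], blurred, rgb.astype(float))
```

A production version would additionally feather the mask at the foreground-background boundary, as the text notes, so the transition looks natural.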
After the original image information of the commodity object is published into the system, the commodity object can be displayed through a client on the user's mobile terminal device. In one implementation, an information page associated with the commodity object may provide a thumbnail of the original image together with operation options for interacting with it. For example, the detail page of a commodity object generally carries a main image, and in the embodiments of the application the thumbnail of the original image can serve as that main image, with prompt information near it, for example prompting the user to click the main image to view more detailed image information. After an operation request is received through the operation options, the original image is displayed full screen and collection of the motion data of the associated terminal device begins. In the full-screen state, a new-viewing-angle image of the commodity object can then be generated and displayed according to the viewing-angle change value detected in real time.
Specifically, after a request to display details of the original image is received, the user can begin interacting with it. A typical interaction is that the user changes the viewing angle by rotating a mobile terminal device such as a mobile phone; the client then presents images at more viewing angles. These images are not shot in advance but are computed in real time, after three-dimensional reconstruction, from the two-dimensional image at the original viewing angle.
The viewing-angle change during interaction can be derived from the motion data of the terminal device. Specific motion data may include: the attitude angles of the terminal device in space, obtained from a built-in gyroscope sensor; the gravity acceleration vector expressed in the device's reference coordinate system, obtained from an acceleration sensor; the instantaneous linear acceleration of the device along each axis; the instantaneous rotational acceleration of the device about each axis; and so on.
S220: determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
In the embodiment of the application, the image information under other visual angles within a certain angle range can be provided for a user on the basis of the original two-dimensional image to present the three-dimensional structure of the object, so that the visual angle change value of the user can be obtained first, and then the image under the corresponding visual angle is generated on the basis.
The specific viewing-angle change value may be obtained by converting the motion data of the terminal device. For example, the attitude angles of the terminal device, output by the gyroscope sensor, can characterize the change in the user's viewing angle. When the user tilts the device up and down, the vertical component of the viewing-angle change is represented by the pitch angle; when the user turns the device left and right, the horizontal component is represented by the roll angle. Combining the two components represents a viewing-angle change of arbitrary direction and magnitude in the plane:

Δv = (Δpitch, Δroll)

where Δpitch and Δroll are the changes in pitch and roll relative to the original pose.
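Combining the pitch and roll components into a single planar viewing-angle change can be sketched as below (hypothetical function and parameter names; the patent does not specify this exact computation):

```python
import math

def view_angle_change(pitch_deg, roll_deg, pitch0_deg, roll0_deg):
    """Planar viewing-angle change relative to the original pose:
    the vertical component comes from pitch, the horizontal from roll."""
    d_vertical = pitch_deg - pitch0_deg
    d_horizontal = roll_deg - roll0_deg
    magnitude = math.hypot(d_horizontal, d_vertical)
    direction = math.degrees(math.atan2(d_vertical, d_horizontal))
    return (d_horizontal, d_vertical), magnitude, direction

# Tilting 3 degrees right and 4 degrees up relative to the initial pose
# gives a change vector of (3, 4) with magnitude 5.
vec, mag, ang = view_angle_change(4.0, 3.0, 0.0, 0.0)
```

The resulting vector is what the image-generation step below consumes as the viewing-angle change value.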
It should be noted that in a specific implementation, the user may rotate the device continuously during the interaction and need not keep its direction and angle fixed. The motion data therefore change continuously, and so does the viewing angle; each time the viewing angle changes, its change value relative to the initial viewing angle can be computed.
In addition, phenomena such as hand shake while the user rotates the device can make the viewing-angle change unsmooth. If images were generated directly from the converted viewing-angle change values, the generated image sequence would jitter accordingly, harming the display effect. For this reason, in a preferred embodiment of the application, an attenuation term is added to the current viewing-angle change value to adjust it.
Specifically, assume that the initial pose (viewing angle) of the terminal device is:

v_0 = (pitch_0, roll_0)

and that the real-time pose of the moving terminal device is:

v_t = (pitch_t, roll_t)

With an attenuation term a_t, the real-time viewing-angle change value is:

Δv_t = v_t - v_0 - a_t

That is, the current viewing-angle change value is adjusted by the attenuation term to damp the change acceleration, so that the viewing-angle change is smooth and natural. Moreover, the attenuation term can be made to grow with the change acceleration by setting parameters such as a reset threshold and a reset rate. The value of the attenuation term is therefore not fixed but is adjusted according to the acceleration of the change in the actual viewing angle: for example, its initial value may be (0, 0), and the value a_t at the current moment may be updated from the value a_{t-1} at the previous moment, for instance:

a_t = a_{t-1} + r * Δv_{t-1},  when |Δv_{t-1}| > T

where T is the reset threshold and r is the reset rate. Updating a_t attenuates the viewing-angle change value of the next frame so that the change is smooth and natural. When the user settles into a posture (that is, reaches a target viewing angle) and holds it still for some time, the attenuation term accumulates until it equals (or essentially equals) the change value of the target viewing angle relative to the original viewing angle; the adjusted viewing-angle change value then tends to 0, so the viewing angle is reset and the original image at the original viewing angle is displayed again. In other words, if the user stops rotating after reaching a certain posture, the interaction is likely complete, so the display returns to the original image, and subsequent interaction can continue from that posture. Through the settings of the attenuation term, reset threshold, and reset rate, the embodiments of the application achieve smooth viewing-angle changes together with automatic viewing-angle reset.
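The attenuation-and-reset behaviour described above admits a simple reading in code. The update rule and the threshold and rate values below are one plausible interpretation, not the patent's exact formula:

```python
def smooth_view_change(raw_changes, reset_threshold=0.5, reset_rate=0.1):
    """Apply an attenuation term to raw per-frame viewing-angle changes.

    The attenuation starts at 0 and, whenever the raw change stays
    above the reset threshold, creeps toward the raw change at the
    reset rate. The displayed change is raw minus attenuation, so a
    held pose gradually resets back toward the original view.
    """
    attenuation = 0.0
    displayed = []
    for raw in raw_changes:
        if abs(raw - attenuation) > reset_threshold:
            attenuation += reset_rate * (raw - attenuation)
        displayed.append(raw - attenuation)
    return displayed

# A user rotates to 10 degrees and holds the pose: the displayed change
# decays frame by frame, implementing the automatic reset.
frames = smooth_view_change([10.0] * 30)
```

Because the step toward the raw value is proportional to the remaining gap, the displayed change decays geometrically, which matches the smooth, non-jittery motion the text calls for.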
S230: and generating and displaying the new view angle image according to the view angle change value and the depth information.
After determining the specific view angle change value, the new view angle image can be generated according to the specific view angle change value and the depth information in the image. For example, as shown in fig. 3, assuming that the left side is an original image of a commodity object of a certain shoe under an original view angle, after the user rotates the terminal device to the right along the vertical axis by a certain angle, this corresponds to the user changing the view angle, at this time, the image of the commodity object under the new view angle may be displayed in the terminal device, that is, as can be seen from the figure, after the user changes the view angle, the commodity image displayed in the terminal device also changes correspondingly. To facilitate viewing of this variation, the right side of fig. 3 shows the device in a rotated front view.
Of course, in the embodiment of the present application, after the user changes the viewing angle, the corresponding image under the new viewing angle does not need to have been captured in advance; it is generated by calculation from the single image captured under the original viewing angle. There may be various specific ways of generating the new view angle image. For example, in one way, target position information of the pixels of the original image in the new view angle image to be generated is determined first, and then pixel mapping is performed according to that target position information to generate the new view angle image. That is, given an image at one viewing angle, when the viewing angle is switched to a nearby viewing angle, the image at the new viewing angle consists of substantially the same pixels as the original image; only their positions may change somewhat. On this principle, the embodiment of the application provides an implementation scheme for recovering the three-dimensional structure based on the original image.
In particular, because the original image contains a large number of pixels, determining their positions in the new view angle image one by one would involve a relatively large amount of calculation and place relatively high performance requirements on the terminal device. Therefore, in an optional manner, key point pixels in the original image can be obtained by texture sampling the original image, the target position information of the key point pixels in the new view angle image to be generated is determined, and the target position information of the remaining pixels is then determined by point propagation. When the new view angle image is generated, the key point pixels are mapped according to their target position information, and the neighbors of the key points are filled in by point propagation, so that the new view angle image can be generated. The specific key point pixels may include: corner points in the original image, and points in the regions separating the foreground from the background.
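Since points on the foreground/background boundary are regions where the depth changes sharply, one hedged way to find such key point candidates (the function name and threshold are illustrative assumptions, not taken from this document) is a simple depth-gradient test:

```python
import numpy as np

def keypoint_mask(depth, grad_thresh=0.1):
    """Mark pixels where depth changes sharply, i.e. candidate key points
    on the boundary between foreground and background."""
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy) > grad_thresh
```

Corner detection in the color image could be combined with this mask to obtain the full key point set described above.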
Specifically, as known from the binocular parallax imaging principle, for a pixel point P0 in the original image, with texture coordinate (x0, y0) and depth value z, its position P1 in the new view angle image after the viewing angle changes can be computed from the camera focal length f and the view angle change value: the parallax displacement of a pixel is proportional to the focal length and inversely proportional to its depth, so nearby pixels move farther across the image than distant ones.
In this way, generating the new view angle image can be treated as a texture-map rendering problem: texture sampling is performed according to the above principle in a fragment shader of a rendering pipeline such as OpenGL, and after filling hole areas from adjacent pixels, the new view angle image can be generated in real time.
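As a rough, CPU-side illustration of the disparity relation above (the exact formula in the original document is not reproduced here; the baseline term and the simple horizontal-shift model are assumptions for illustration), pixels could be remapped with a disparity of f * baseline / z, writing nearer pixels last so they correctly occlude farther ones:

```python
import numpy as np

def remap_pixels(image, depth, f, baseline):
    """Shift each pixel horizontally by the disparity f * baseline / z.

    image    -- H x W array (grayscale for simplicity)
    depth    -- H x W array of positive depth values z
    f        -- camera focal length in pixels
    baseline -- assumed virtual-camera offset induced by the view change
    """
    h, w = image.shape
    out = np.zeros_like(image)
    disparity = np.round(f * baseline / depth).astype(int)
    ys, xs = np.indices((h, w))
    new_x = xs + disparity
    valid = (new_x >= 0) & (new_x < w)
    # Write far pixels first and near pixels last, so near pixels win overlaps
    order = np.argsort(depth, axis=None)[::-1]
    ys_f = ys.ravel()[order]
    xs_f = xs.ravel()[order]
    nx_f = new_x.ravel()[order]
    ok = valid.ravel()[order]
    out[ys_f[ok], nx_f[ok]] = image[ys_f[ok], xs_f[ok]]
    return out
```

Pixels with no source mapped to them remain as holes, corresponding to the hole areas that the document describes filling from adjacent pixels.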
In particular, during interaction through the original view angle image and the new view angle images generated in real time, further interaction can be provided at the commodity-information level. For example, if a long press or similar operation is performed on the commodity object image at any viewing angle, more information about the commodity object can be provided, including, for example, a price attribute, selling point information (whether it is a hot-selling product, whether it participates in a certain promotion, and so on), and the like.
It can be seen that, in the embodiment of the present application, by capturing an image of a commodity object at one specific angle and obtaining the depth information in that image, the three-dimensional structure of the object can be recovered from a simple two-dimensional image to enable interaction with the user: the user changes the viewing angle by rotating the terminal device or otherwise, and the system generates images at more viewing angles for the user. The embodiments of the present application therefore enable such interaction at a lower cost.
In particular, in order to avoid distortion of the generated new view angle image, the new view angle image may be generated within a range of view angle variation values (for example, within 20 degrees or the like) around the original view angle. That is, a small-angle three-dimensional structure restoration can be achieved in the vicinity of the original viewing angle.
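A minimal sketch of restricting the change value to a small range around the original view (the function name and the 20-degree default are illustrative, the latter taken from the example above):

```python
def clamp_view_change(change, max_range=20.0):
    # Keep the view change within +/- max_range degrees of the original view,
    # so the restored three-dimensional structure stays undistorted.
    return max(-max_range, min(max_range, change))
```

The clamped value would then be fed into the new view angle image generation in place of the raw change value.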
The specific view angle change range can be configured by the user according to their needs; that is, the view angle change value range may be determined according to the user's configuration information. For example, a parallax-intensity adjustment interface may be provided, with intensities classified as "strong", "medium", "weak", and so on; the greater the intensity, the more pronounced the parallax change.
In addition, the visual effect during the viewing angle change can also be adjusted by setting the focal position, which may, for example, be at the front, the back, or the middle. If the focus is at the front, the background area of the image changes obviously as the viewing angle changes; if the focus is at the back, the foreground area changes obviously; if the focus is in the middle, the foreground and background may change simultaneously, and so on. The focal position can likewise be configured by the user: an entry for configuring the focal position may be provided, and the user may select a specific focal position according to their own needs or preferences.
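One hedged way to model the focus-position effect described above (the formula is an assumption for illustration, not the document's exact method): pixels at the focal depth stay fixed, while the displacement grows with the difference in inverse depth, so foreground and background move on opposite sides of the focus:

```python
def parallax_shift(z, z_focus, f, baseline):
    """Per-pixel parallax displacement relative to a chosen focal depth.

    Pixels exactly at z_focus do not move; pixels in front of or behind
    the focus move in opposite directions, and the farther a pixel's
    inverse depth is from the focus, the larger its displacement.
    """
    return f * baseline * (1.0 / z - 1.0 / z_focus)
```

With a front focus (small z_focus), background pixels (large z) receive the largest displacement, matching the described behavior where the background changes most obviously; with a back focus, the foreground dominates instead.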
Example two
In the first embodiment, after receiving the user's interaction request, images at more viewing angles are generated and displayed in real time according to the motion data of the terminal device. In the second embodiment, images at a plurality of different viewing angles may instead be generated in advance from the original image and then displayed directly in the information page associated with the commodity object. In this way, a merchant user or the like only needs to provide the commodity object image under the original viewing angle, and the system can generate and display images at more viewing angles for it, supplementing the commodity image information. In addition, because images at different viewing angles are displayed directly in the commodity object information page, the user can view more viewing angles while the performance requirements on the terminal device are reduced (the terminal device does not need hardware such as a gyroscope). Specifically, referring to fig. 4, the second embodiment provides a commodity object information display method, which may include:
S410: obtaining original image information of a commodity object, wherein the original image information is obtained by collecting a physical object of the commodity object under an original view angle, and comprises depth information;
s420: selecting a plurality of different view angles by shifting the original view angles;
s430: generating images at the different viewing angles according to the viewing angle offset of the different viewing angles relative to the original viewing angle and the depth information;
The specific viewing angle offset may be determined according to specific needs, and of course, in order to avoid excessive distortion of the image, the viewing angle offset may be controlled within a preset offset range, for example, within 20 degrees of the original viewing angle, and so on.
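Pre-generating a small set of views at symmetric offsets around the original view, as steps S420 and S430 describe, could be sketched as follows; the counts, the 20-degree range, and the render_fn interface are all illustrative assumptions:

```python
def pregenerate_views(original_image, depth, render_fn, max_offset=20.0, n_views=4):
    """Render images at several fixed offsets around the original view.

    render_fn(image, depth, offset) is assumed to implement the
    depth-based new-view generation described in embodiment one.
    Returns a dict mapping each offset (in degrees) to its rendered image.
    """
    offsets = [max_offset * (i + 1) / n_views for i in range(n_views)]
    offsets = [-o for o in reversed(offsets)] + offsets  # symmetric around 0
    return {o: render_fn(original_image, depth, o) for o in offsets}
```

The resulting images could then be laid out in the information page directly, or composited into a single display diagram as described below.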
S440: in the information page of the commodity object, the original view angle image and the images under the plurality of different view angles are provided.
In particular, when the original view angle image and the images at more different viewing angles are provided in the commodity object information page, the images at the various viewing angles may each be displayed separately. Of course, in practical applications, to avoid having too many images with similar content occupy the information page, and to make comparison between different views easier, the original view image and the images at the plurality of different views may instead be combined into a single display diagram shown in the information page.
Example III
In the first and second embodiments, a specific merchandise object information display method is provided for a specific scene, such as a merchandise object information service system, in which three-dimensional structure restoration based on an original two-dimensional image can be achieved. In practical application, the specific scheme for realizing three-dimensional structure restoration based on the original two-dimensional image can be applied to other scenes. Such as museums, scenic spots, and even medical scenes, entertainment scenes, etc.
In particular, a medical scenario may mainly involve online medical consultation. When a patient consults a doctor online about a condition, it is usually necessary to submit some pictures, including photos of the specific affected part, so that the doctor can diagnose by viewing them. In the embodiment of the present application, after the patient submits a picture, pictures at more viewing angles can be generated and sent to the doctor together, so that the doctor can examine the details of the affected part from more viewing angles. Alternatively, interaction options can be provided in the doctor's client: after clicking to view a picture provided by the patient, it can be displayed full screen, and pictures at more viewing angles can be viewed by rotating a terminal device such as a mobile phone, so that the condition of the affected part can be understood more comprehensively, and so on.
As another example, in entertainment scenarios such as photo-processing tools, the basic function is usually to process photos taken by the user, including beautification, adding material, and so forth. With the scheme provided by the embodiment of the present application, interaction for viewing more images by changing the viewing angle can be offered for the user's photos. Specifically, after the user loads a photo into the photo-processing tool, a specific interaction entrance can be provided, through which the user can view images at more viewing angles by rotating the mobile phone, or store images at more viewing angles locally on the mobile phone, and so on.
In summary, in the third embodiment of the present application, a method for displaying scene information is further provided, and referring to fig. 5, the method may specifically include:
s510: obtaining original image information of a target scene, wherein the original image information is obtained by acquiring the target scene under an original view angle and comprises depth information;
The original image information of a specific scene can likewise be obtained in various ways, including photographing the target scene from the initial viewing angle with a binocular camera or a TOF depth camera, or photographing it with an ordinary monocular camera and estimating the depth information with a deep learning algorithm, and so on.
S520: in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
S530: determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
S540: and generating a new view angle image of the target scene according to the view angle change value and the depth information, and displaying the new view angle image.
The specific implementation manner of obtaining the motion data of the device, determining the change value of the angle of view, generating the new angle of view image, and the like may be the same as that in the first embodiment, so the implementation may be performed with reference to the first embodiment, and will not be described herein.
Corresponding to the first embodiment, the embodiment of the present application further provides a merchandise object information display device, referring to fig. 6, the device may include:
A motion data obtaining unit 610 for obtaining motion data of the associated terminal device during the process of displaying the original image information of the commodity object; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
A viewing angle change value determining unit 620, configured to determine a viewing angle change value of a current viewing angle of the user with respect to an original viewing angle according to the motion data;
And a new view image generating unit 630, configured to generate and display a new view image of the commodity object according to the view change value and the depth information.
Wherein the original image information is received from a client associated with a publisher user of the merchandise object.
In particular, the apparatus may further include:
An operation option providing unit for providing thumbnail information of the original image associated with the commodity object in an information page associated with the commodity object, and an operation option for interacting with the original image;
And the interaction starting unit is used for carrying out full-screen display on the original image after receiving the operation request through the operation options, and starting the acquisition of the motion data of the associated terminal equipment so as to generate and display a new view angle image of the commodity object according to the view angle change value determined in real time.
In particular, the apparatus may further include:
and the background blurring processing unit is used for carrying out background blurring processing on the original image according to the depth information.
Wherein, the viewing angle change value determining unit may specifically be configured to:
and determining the view angle change value of the user according to the pose angle data of the terminal device output by a sensor with which the terminal device is equipped.
In addition, the apparatus may further include:
And the visual angle change value adjusting unit is used for adjusting the current visual angle change value by setting an attenuation item so as to weaken the visual angle change acceleration.
Wherein the attenuation term increases as the viewing angle change acceleration increases.
In addition, the apparatus may further include:
and the viewing angle resetting unit is configured to, if the terminal device remains static after rotating to a target viewing angle, set the attenuation item equal or approximately equal to the viewing angle change value of the target viewing angle relative to the original viewing angle, so that the viewing angle change value approaches zero, the viewing angle is reset to the original viewing angle, and the original image is displayed.
The new view angle image generating unit may specifically include:
a position determining subunit, configured to determine target position information of pixels in the original image in a new view angle image to be generated;
and the mapping subunit is used for mapping pixels according to the target position information and generating the new view angle image.
The location determining subunit may specifically include:
A key point pixel determining subunit, configured to obtain a key point pixel in the original image by performing texture sampling on the original image;
A key point position determining subunit, configured to determine target position information of the key point pixel in a new view angle image to be generated;
and the other point position determining subunit is used for determining target position information of the other pixels in the new view angle image to be generated in a point propagation mode.
Wherein the keypoint pixel comprises: the corner points in the original image and the points of the region separating the foreground and the background (the points of the region where the depth information changes more significantly).
In another embodiment, the new view angle image may be generated within a range of view angle variation values around the original view angle.
The visual angle change value range can be determined according to configuration information of a user.
In addition, the apparatus may further include:
and the visual effect control unit is used for determining the visual effect in the visual angle change process according to the focal position.
The focal position may be determined according to configuration information of a user.
Correspondingly, the embodiment of the application also provides a commodity object information display device, referring to fig. 7, the device specifically may include:
an original image obtaining unit 710 for obtaining original image information of a commodity object, the original image information being obtained by collecting a physical object of the commodity object under an original viewing angle, wherein the original image information includes depth information;
a view angle selecting unit 720, configured to select a plurality of different view angles by shifting the original view angle;
an image generation unit 730 for generating images at the different perspectives according to the perspective offsets of the different perspectives with respect to the original perspective and the depth information, respectively;
the image display unit 740 is configured to provide the original view image and the images at the plurality of different views in the information page of the commodity object.
Wherein the viewing angle offset is within a preset offset range.
In particular, the image display unit may be specifically configured to:
And synthesizing the original view image and the images at the plurality of different views into the same display view for displaying in the information page.
Corresponding to the embodiment, the embodiment of the application also provides a scene information display device, referring to fig. 8, the device may include:
an initial image obtaining unit 810 for obtaining original image information of a target scene, the original image information being obtained by acquiring the target scene at an original viewing angle, wherein the original image information includes depth information;
A motion data obtaining unit 820, configured to obtain motion data of an associated terminal device in a process of displaying the target scene information;
A viewing angle change value determining unit 830, configured to determine a viewing angle change value of a current viewing angle of the user with respect to an original viewing angle according to the motion data;
and a new view image generating unit 840, configured to generate and display a new view image of the target scene according to the view change value and the depth information.
Furthermore, an embodiment of the present application also provides an electronic device, including:
One or more processors; and
A memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
acquiring motion data of associated terminal equipment in the process of displaying original image information of commodity objects; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
And generating a new view angle image of the commodity object according to the view angle change value and the depth information, and displaying the new view angle image.
An electronic device, comprising:
One or more processors; and
A memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
Obtaining original image information of a commodity object, wherein the original image information is obtained by collecting a physical object of the commodity object under an original view angle, and comprises depth information;
selecting a plurality of different view angles by shifting the original view angles;
generating images at the different viewing angles according to the viewing angle offset of the different viewing angles relative to the original viewing angle and the depth information;
In the information page of the commodity object, the original view angle image and the images under the plurality of different view angles are provided.
And another electronic device, comprising:
One or more processors; and
A memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
obtaining original image information of a target scene, wherein the original image information is obtained by acquiring the target scene under an original view angle and comprises depth information;
In the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new view angle image of the target scene according to the view angle change value and the depth information, and displaying the new view angle image.
Fig. 9, among other things, illustrates an architecture of an electronic device, for example, device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, an aircraft, and so forth.
Referring to fig. 9, device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the methods provided by the disclosed subject matter. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and the like. The memory 904 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 906 provides power to the various components of the device 900. Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 900.
The multimedia component 908 comprises a screen between the device 900 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 900 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 914 includes one or more sensors for providing status assessment of various aspects of the device 900. For example, the sensor assembly 914 may detect the on/off state of the device 900, the relative positioning of the components, such as the display and keypad of the device 900, the sensor assembly 914 may also detect the change in position of the device 900 or one component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and the change in temperature of the device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the device 900 and other devices. The device 900 may access a wireless network based on a communication standard, such as WiFi, or a 2G, 3G, 4G/LTE, or 5G mobile communication network, or a combination thereof. In one exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as a memory 904 including instructions executable by the processor 920 of the device 900 to perform the methods provided by the disclosed subject matter. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, with reference to the description of the method embodiment being made in part. The systems and system embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The commodity object information display method, device, and electronic equipment provided by the present application have been described in detail above. Specific examples have been used herein to explain the principle and implementation of the application, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, those of ordinary skill in the art may make changes to the specific implementation and application scope in light of the idea of the present application. In view of the foregoing, this description should not be construed as limiting the application.

Claims (23)

1. A merchandise object information display method, comprising:
acquiring motion data of associated terminal equipment in the process of displaying original image information of commodity objects; the original image information is obtained by collecting the real objects of the commodity object under an original visual angle, wherein the original image information comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
the method comprises the steps of adjusting a current viewing angle change value through setting an attenuation item to weaken the viewing angle change acceleration, wherein if the current viewing angle change value is kept static after rotating to a target viewing angle, the attenuation item is set to be equal to or equivalent to the viewing angle change value of the target viewing angle relative to an original viewing angle, so that the viewing angle change value approaches zero, and the viewing angle is reset to the original viewing angle, so that the original image is displayed;
And generating a new view angle image of the commodity object according to the view angle change value and the depth information, and displaying the new view angle image.
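As an illustrative, non-limiting sketch of the attenuation step in claim 1 (the function names and the first-order decay rate are assumptions made for illustration; the claim prescribes no particular formula):

```python
def adjusted_view_change(raw_delta, attenuation):
    """Subtract the attenuation term from the raw view-angle change value."""
    return raw_delta - attenuation

def update_attenuation(attenuation, raw_delta, device_static, rate=0.2):
    """While the device stays static, grow the attenuation term toward the
    current view-angle change value, so the adjusted change approaches zero
    and the view drifts back to the original angle."""
    if device_static:
        return attenuation + rate * (raw_delta - attenuation)
    return 0.0  # device moving again: stop attenuating

# Device rotates to a target angle (30 degrees) and then stays still.
delta, att = 30.0, 0.0
for _ in range(50):
    att = update_attenuation(att, delta, device_static=True)
adjusted = adjusted_view_change(delta, att)  # approaches zero: original view shown
```

After enough static frames, the adjusted change value is effectively zero, which corresponds to resetting the display to the original image.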
2. The method of claim 1, wherein:
the original image information is received from a client associated with a publisher user of the commodity object.
3. The method of claim 1, further comprising, before the method:
providing thumbnail information of the original image associated with the commodity object and operation options for interaction with the original image in an information page associated with the commodity object;
and after receiving an operation request through the operation options, displaying the original image in full screen, and starting acquisition of motion data of the associated terminal device, so as to generate and display a new view angle image of the commodity object according to the view angle change value determined in real time.
4. The method as recited in claim 1, further comprising:
performing background blurring processing on the original image according to the depth information.
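A minimal, non-limiting sketch of depth-based background blurring as in claim 4, assuming a simple box blur and a hypothetical foreground/background depth threshold (the claim does not fix a particular blur algorithm):

```python
def blur_background(pixels, depths, threshold, radius=1):
    """Box-blur only the pixels whose depth exceeds the foreground threshold;
    foreground pixels (small depth) are left sharp."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(h):
        for x in range(w):
            if depths[y][x] > threshold:  # background pixel
                window = [pixels[j][i]
                          for j in range(max(0, y - radius), min(h, y + radius + 1))
                          for i in range(max(0, x - radius), min(w, x + radius + 1))]
                out[y][x] = sum(window) / len(window)
    return out

# 2x2 toy image: the single deep (background) pixel is averaged with its neighbours.
pixels = [[10, 10], [10, 100]]
depths = [[0, 0], [0, 5]]
result = blur_background(pixels, depths, threshold=1)
```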
5. The method of claim 1, wherein:
the determining the view angle change value of the user according to the motion data comprises:
determining the view angle change value of the user according to pose angle data of the terminal device output by a sensor with which the terminal device is equipped.
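Claim 5 derives the view angle change from the pose angles reported by the device's sensor; below is a minimal sketch under the assumption (not stated by the claim) that the change is simply the per-axis difference from the pose recorded when the original image was first displayed:

```python
def view_angle_change(current_pose, original_pose):
    """Per-axis view-angle change: current pose angles (e.g. pitch, yaw, roll
    from the device's orientation sensor) minus the pose angles recorded
    when the original image was first shown."""
    return tuple(c - o for c, o in zip(current_pose, original_pose))

# Only the yaw axis has moved by 2 degrees since the original display.
change = view_angle_change((5.0, 32.0, 0.0), (5.0, 30.0, 0.0))
```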
6. The method of claim 1, wherein:
the attenuation term increases as the acceleration of the view angle change increases.
7. The method of claim 1, wherein:
the generating the new view angle image comprises:
determining target position information of pixels in the original image in a new view angle image to be generated;
and mapping pixels according to the target position information to generate the new view angle image.
8. The method of claim 7, wherein:
the determining target position information of pixels in the original image in the new view angle image to be generated comprises:
obtaining key point pixels in the original image by performing texture sampling on the original image;
determining target position information of the key point pixels in a new view angle image to be generated;
and determining target position information of other pixels in the new view angle image to be generated by means of point propagation.
9. The method of claim 8, wherein:
the key point pixels comprise: corner points in the original image and points in separation areas between the foreground and the background.
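Claims 7 to 9 describe warping key-point pixels to target positions and then propagating positions to the remaining pixels. A toy one-dimensional sketch follows, in which the depth-dependent horizontal shift and the nearest-key-point propagation rule are assumed stand-ins for the parallax and propagation models the claims leave unspecified:

```python
def target_x(x, depth, view_change, focal=100.0):
    """Horizontal target position of a key-point pixel: closer pixels
    (smaller depth) shift more for the same view-angle change (parallax)."""
    return x + view_change * focal / depth

def propagate_x(keypoints, targets, x):
    """Give a non-key pixel the shift of its nearest key point,
    a crude stand-in for the point-propagation step."""
    i = min(range(len(keypoints)), key=lambda k: abs(keypoints[k] - x))
    return x + (targets[i] - keypoints[i])

keypoints = [10.0, 90.0]  # a near key point (depth 100) and a far one (depth 400)
targets = [target_x(k, d, view_change=2.0) for k, d in zip(keypoints, [100.0, 400.0])]
moved = propagate_x(keypoints, targets, 20.0)  # pixel at x=20 follows key point at x=10
```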
10. The method of claim 1, wherein:
the new view angle image is generated within a view angle change value range near the original view angle.
11. The method of claim 10, wherein:
the view angle change value range is determined according to configuration information of the user.
12. The method as recited in claim 1, further comprising:
determining a visual effect during the view angle change according to a focus position.
13. The method of claim 12, wherein:
the focus position is determined according to configuration information of the user.
14. A commodity object information display method, comprising:
obtaining original image information of a commodity object, wherein the original image information is acquired from a physical object of the commodity object at an original view angle and comprises depth information;
selecting a plurality of different view angles by shifting the original view angle;
adjusting the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
generating images at the different view angles according to the view angle offsets of the different view angles relative to the original view angle and the depth information, respectively;
and providing, in an information page of the commodity object, the original view angle image and the images at the plurality of different view angles.
15. The method of claim 14, wherein:
the view angle offset is within a preset offset range.
16. The method of claim 14, wherein:
the providing the original view angle image and the images at the plurality of different view angles comprises:
synthesizing the original view angle image and the images at the plurality of different view angles into a same display view for display in the information page.
17. A scene information presentation method, comprising:
obtaining original image information of a target scene, wherein the original image information is obtained by acquiring the target scene under an original view angle and comprises depth information;
obtaining motion data of an associated terminal device during display of the target scene information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
adjusting the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
and generating a new view angle image of the target scene according to the view angle change value and the depth information, and displaying the new view angle image.
18. A commodity object information display device, comprising:
a motion data obtaining unit, configured to obtain motion data of an associated terminal device during display of original image information of a commodity object, wherein the original image information is acquired from a physical object of the commodity object at an original view angle and comprises depth information;
a viewing angle change value determining unit, configured to determine a viewing angle change value of a current viewing angle of a user with respect to an original viewing angle according to the motion data;
a view angle change value adjusting unit, configured to adjust the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
and a new view angle image generation unit, configured to generate and display a new view angle image of the commodity object according to the view angle change value and the depth information.
19. A commodity object information display device, comprising:
an original image obtaining unit, configured to obtain original image information of a commodity object, wherein the original image information is acquired from a physical object of the commodity object at an original view angle and comprises depth information;
a view angle selecting unit, configured to select a plurality of different view angles by shifting the original view angle;
a view angle change value adjusting unit, configured to adjust the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
an image generation unit, configured to generate images at the different view angles according to the view angle offsets of the different view angles relative to the original view angle and the depth information, respectively;
and an image display unit, configured to provide, in an information page of the commodity object, the original view angle image and the images at the plurality of different view angles.
20. A scene information presentation apparatus, comprising:
an original image obtaining unit, configured to obtain original image information of a target scene, wherein the original image information is acquired from the target scene at an original view angle and comprises depth information;
a motion data obtaining unit, configured to obtain motion data of an associated terminal device during display of the target scene information;
a view angle change value determining unit, configured to determine a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
a view angle change value adjusting unit, configured to adjust the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
and a new view angle image generation unit, configured to generate and display a new view angle image of the target scene according to the view angle change value and the depth information.
21. An electronic device, comprising:
One or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, cause the following operations to be performed:
acquiring motion data of an associated terminal device during display of original image information of a commodity object, wherein the original image information is acquired from a physical object of the commodity object at an original view angle and comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
adjusting the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
and generating a new view angle image of the commodity object according to the view angle change value and the depth information, and displaying the new view angle image.
22. An electronic device, comprising:
One or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, cause the following operations to be performed:
obtaining original image information of a commodity object, wherein the original image information is acquired from a physical object of the commodity object at an original view angle and comprises depth information;
selecting a plurality of different view angles by shifting the original view angle;
adjusting the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
generating images at the different view angles according to the view angle offsets of the different view angles relative to the original view angle and the depth information, respectively;
and providing, in an information page of the commodity object, the original view angle image and the images at the plurality of different view angles.
23. An electronic device, comprising:
One or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, cause the following operations to be performed:
obtaining original image information of a target scene, wherein the original image information is acquired from the target scene at an original view angle and comprises depth information;
obtaining motion data of an associated terminal device during display of the target scene information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
adjusting the current view angle change value by setting an attenuation term, so as to weaken the acceleration of the view angle change, wherein if the terminal device remains static after rotating to a target view angle, the attenuation term is set equal or approximately equal to the view angle change value of the target view angle relative to the original view angle, so that the adjusted view angle change value approaches zero and the view angle is reset to the original view angle, thereby displaying the original image;
and generating a new view angle image of the target scene according to the view angle change value and the depth information, and displaying the new view angle image.
CN201910906733.4A 2019-09-24 2019-09-24 Commodity object information display method and device and electronic equipment Active CN112634339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906733.4A CN112634339B (en) 2019-09-24 2019-09-24 Commodity object information display method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112634339A CN112634339A (en) 2021-04-09
CN112634339B (en) 2024-05-31

Family

ID=75282861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906733.4A Active CN112634339B (en) 2019-09-24 2019-09-24 Commodity object information display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112634339B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113891057A (en) * 2021-11-18 2022-01-04 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN114418665A (en) * 2021-12-13 2022-04-29 珠海格力电器股份有限公司 Virtual display method, device, equipment and medium of shopping mall products in home scene

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006011153A2 (en) * 2004-07-30 2006-02-02 Extreme Reality Ltd. A system and method for 3d space-dimension based image processing
CN101271583A (en) * 2008-04-28 2008-09-24 清华大学 A Fast Image Drawing Method Based on Depth Map
CN101631257A (en) * 2009-08-06 2010-01-20 中兴通讯股份有限公司 Method and device for realizing three-dimensional playing of two-dimensional video code stream
CN102427547A (en) * 2011-11-15 2012-04-25 清华大学 Multi-angle stereo rendering apparatus
WO2013039470A1 (en) * 2011-09-12 2013-03-21 Intel Corporation Using motion parallax to create 3d perception from 2d images
CN105096180A (en) * 2015-07-20 2015-11-25 北京易讯理想科技有限公司 Commodity information display method and apparatus based augmented reality
CN107945282A (en) * 2017-12-05 2018-04-20 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) The synthesis of quick multi-view angle three-dimensional and methods of exhibiting and device based on confrontation network
CN108198044A (en) * 2018-01-30 2018-06-22 北京京东金融科技控股有限公司 Methods of exhibiting, device, medium and the electronic equipment of merchandise news
CN108234985A (en) * 2018-03-21 2018-06-29 南阳师范学院 The filtering method under the dimension transformation space of processing is rendered for reversed depth map
CN109218706A (en) * 2018-11-06 2019-01-15 浙江大学 A method of 3 D visual image is generated by single image
CN109584340A (en) * 2018-12-11 2019-04-05 苏州中科广视文化科技有限公司 New Century Planned Textbook synthetic method based on depth convolutional neural networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8619071B2 (en) * 2008-09-16 2013-12-31 Microsoft Corporation Image view synthesis using a three-dimensional reference model
US9658688B2 (en) * 2013-10-15 2017-05-23 Microsoft Technology Licensing, Llc Automatic view adjustment
KR20190012068A (en) * 2017-07-26 2019-02-08 삼성전자주식회사 Head up display and method of operating of the apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Survey of digital video stabilization techniques; Wei Shanshan; Xie Wei; He Zhiqiang; Journal of Computer Research and Development; 2017-09-15 (Issue 09); full text *
Survey of frontal face image synthesis methods; Zhao Lin; Gao Xinbo; Tian Chunna; Journal of Image and Graphics; 2013-01-16 (Issue 01); full text *

Also Published As

Publication number Publication date
CN112634339A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
KR102497683B1 (en) Method, device, device and storage medium for controlling multiple virtual characters
EP3511864A1 (en) Method and apparatus for synthesizing virtual and real objects
CN110321048B (en) Three-dimensional panoramic scene information processing and interacting method and device
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
CN108495032B (en) Image processing method, device, storage medium and electronic device
JP2013162487A (en) Image display apparatus and imaging apparatus
CN103945045A (en) Method and device for data processing
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN110189348B (en) Head portrait processing method and device, computer equipment and storage medium
CN115379195B (en) Video generation method, device, electronic device and readable storage medium
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
CN115937379A (en) Special effect generation method and device, electronic equipment and storage medium
US20250053287A1 (en) Systems, methods, and computer program products for digital photography
CN112581358A (en) Training method of image processing model, image processing method and device
CN112634339B (en) Commodity object information display method and device and electronic equipment
CN115278084A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113721874A (en) Virtual reality picture display method and electronic equipment
CN115967854B (en) Photographing method and device and electronic equipment
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN109308740B (en) 3D scene data processing method and device and electronic equipment
CN113609358A (en) Content sharing method and device, electronic equipment and storage medium
KR20190129592A (en) Method and apparatus for providing video in potable device
CN114143455B (en) Shooting method and device and electronic equipment
US20150281351A1 (en) Methods, systems, and non-transitory machine-readable medium for incorporating a series of images resident on a user device into an existing web browser session
CN112672059B (en) Shooting method and shooting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant