
WO2016191685A1 - Graphical processing of data, in particular by mesh vertices comparison - Google Patents

Info

Publication number
WO2016191685A1
WO2016191685A1 (PCT/US2016/034663)
Authority
WO
WIPO (PCT)
Prior art keywords
viewer
user
mesh
creator
product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2016/034663
Other languages
French (fr)
Inventor
Shaohong Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of WO2016191685A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 Game security or game management aspects
    • A63F13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/795 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; for building a team; for providing a buddy list
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/63 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207 Discounts or incentives, e.g. coupons or rebates
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/16 Real estate
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55 Details of game data or player data management
    • A63F2300/5546 Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5553 Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar

Definitions

  • the invention relates to graphical processing pipelines and the graphical presentation of data.
  • General depictions of data may be, for comparison purposes, embodied by charts and graphs, such as bar and pie charts and the like.
  • data has been visualized more graphically, such as by the rendering of 3-D CG objects.
  • graphical engines such as Unity, Unreal Engine, and Cry Engine provide significant functionality in the way of 3-D object presentation.
  • the construction of 3-D data objects is also well known, such as by the use of graphical 3-D design applications such as Maya, Blender, 3DS Max, and the like.
  • Particular architectural projects may be designed with Google SketchUp, along with others of the applications noted above.
  • the invention is directed towards a method of configuring a server to provide a media experience, including: on a first server, providing a first user interface operable to allow a creator to construct a virtual environment, the virtual environment including at least one target mesh; on the first server, or on a second server in network communication with the first server, providing a second user interface operable to allow a first viewer to log on to the virtual environment and move through and interact with the virtual environment in a viewing session; where the second user interface is further operable to allow the first viewer to cause an invitation to be sent to a second viewer to share the viewing session, where upon acceptance of the invitation, the second viewer is enabled to share the viewing session and move through and interact with the virtual environment along with the first viewer.
  • Implementations of the invention may include one or more of the following.
  • On the first server or on the second server, the second user interface may be further operable to allow the first viewer to construct a subject mesh for use in the virtual environment.
  • the subject mesh may be an avatar or may be a virtual environment, such as a room or building.
  • Upon acceptance of the invitation, the second viewer may be presented with the second user interface.
  • the second user interface may be further operable to allow the second viewer to construct a subject mesh for use in the virtual environment.
  • the second user interface may be further operable to allow the first viewer to move the subject mesh relative to the target mesh.
  • the target mesh may have metadata associated therewith, the metadata indicating a scale.
  • the subject mesh may have metadata associated therewith, the metadata indicating a scale, and the target mesh and the subject mesh may be configured to have the same scale, whereby a size and appearance of models in the subject mesh may be correctly displayed against the target mesh, as sketched in the example following this list.
  • the first viewer may access the virtual environment using a virtual reality device or an augmented reality device, as may the second viewer, although commonly the second viewer will access the virtual environment using a virtual reality device.
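  • To make the scale matching above concrete, the following is a minimal sketch assuming a simple in-memory mesh representation; the Mesh class, the metadata layout (a "scale" entry in meters per local unit), and the rescale_to helper are illustrative assumptions rather than structures defined in this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    # Vertices as (x, y, z) tuples in the mesh's local units.
    vertices: list
    # Metadata may record a scale, e.g., meters per local unit.
    metadata: dict = field(default_factory=dict)

def rescale_to(subject: Mesh, target: Mesh) -> Mesh:
    """Return a copy of `subject` expressed in the target mesh's scale.

    Assumes both meshes carry a 'scale' metadata entry (meters per
    local unit), so models in the subject mesh display at the correct
    size against the target mesh.
    """
    factor = subject.metadata["scale"] / target.metadata["scale"]
    scaled = [(x * factor, y * factor, z * factor)
              for x, y, z in subject.vertices]
    return Mesh(vertices=scaled,
                metadata={**subject.metadata, "scale": target.metadata["scale"]})

# Example: a couch authored in centimeters placed against a room in meters.
couch = Mesh([(0, 0, 0), (200, 0, 0), (200, 90, 80)], {"scale": 0.01})
room = Mesh([(0, 0, 0), (5, 0, 0), (5, 4, 3)], {"scale": 1.0})
couch_in_room_units = rescale_to(couch, room)  # 200 cm becomes 2.0 m, etc.
```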
  • the invention is directed towards a non-transitory computer readable medium, including instructions for causing a computing environment to perform the method above.
  • the invention is directed towards a method of providing a media experience, the media experience modifiable on a server side by a creator, including: accessing a database, the database including data corresponding to an inventory of available items; exposing a user interface, the user interface operable to allow a creator to situate items in the inventory in a virtual environment; where the user interface is operable to allow the creator to situate the items by allowing the creator to create a 3-D model of the item or allowing the creator to import a 3-D model of the item.
  • Implementations of the invention may include one or more of the following.
  • the allowing the creator to create a 3-D model of the item may include allowing the creator to import an image corresponding to the item to a 3-D model generation engine.
  • the method may further include exposing a user interface where a first viewer can access the virtual environment.
  • the exposed user interface for viewer access of the virtual environment may further allow the first viewer to invite a second user to access the virtual environment.
  • Advantages of the invention may include, in certain embodiments, one or more of the following.
  • 3-D, 2-D, AR, and VR environments may be conveniently constructed by creators and used by viewers for various purposes, including where viewers employ a subject mesh which is situated in a position relative to a target mesh within a virtual environment.
  • any of the features of an embodiment of any of the aspects is applicable to all other aspects and embodiments identified herein, including but not limited to any embodiments referred to above.
  • any of the features of an embodiment of the various aspects is independently combinable, partly or wholly with other embodiments described herein in any way, e.g., one, two, or three or more embodiments may be combinable in whole or in part.
  • any of the features of an embodiment of the various aspects may be made optional to other aspects or embodiments.
  • Any aspect or embodiment of a method can be performed by a system or apparatus of another aspect or embodiment, and any aspect or embodiment of a system or apparatus can be configured to perform a method of another aspect or embodiment, including but not limited to any embodiments referred to above.
  • Fig. 1 is a schematic diagram of an implementation according to present principles.
  • FIG. 2 illustrates exemplary AR/VR/3D application components, according to present principles.
  • FIG. 3 illustrates exemplary AR/VR/3D engine components, according to present principles.
  • FIG. 4 is a flowchart showing an implementation of a method according to present principles.
  • FIG. 5 is another flowchart showing another implementation of a method according to present principles.
  • Fig. 6 illustrates an objective or subjective comparison of a subject mesh against a target mesh, according to present principles.
  • Fig. 7 illustrates another objective or subjective comparison of a subject mesh against the target mesh, according to present principles.
  • FIG. 8 illustrates the interface architecture in modular format, according to present principles.
  • FIG. 9 illustrates a system architecture in modular format, according to present principles.
  • FIG. 10 illustrates a system architecture for augmented reality (AR) applications, in modular format, according to present principles.
  • FIG. 11 is a flowchart for avatar construction, according to present principles.
  • FIG. 12 is a flowchart for virtual environment creation, according to present principles.
  • Fig. 13 is a flowchart for model creation, according to present principles.
  • Fig. 14 is another flowchart for model creation, according to present principles.
  • Fig. 15 is a flowchart for service model creation, according to present principles.
  • FIG. 16 illustrates ways in which machine learning can be employed in the creation of models for products or services, according to present principles.
  • Fig. 17 is a flowchart for business logo creation, according to present principles.
  • Fig. 18 is a flowchart for integration with existing transaction backend systems, according to present principles.
  • Fig. 19 illustrates an augmented reality user process for transactions, according to present principles.
  • Fig. 20 is a flowchart showing 3-D, 3-D animation, or VR product display on a transaction site front end, according to present principles.
  • Fig. 21 is a flowchart for a 3-D model, 3-D animation, or VR product display on a transaction site back end, according to present principles.
  • Fig. 22 is a flowchart for shopper/user interaction with products in 3-D, VR, or AR, according to present principles.
  • FIG. 23 is a flowchart for shopper or user interaction in 3-D, VR, or AR, illustrating group shopping or social shopping, according to present principles.
  • Fig. 24A is a flowchart for fitting clothes or outfits in 3-D or VR, according to present principles.
  • Fig. 24B is a flowchart for fitting clothes or outfits in AR, according to present principles.
  • Fig. 25 is a flowchart illustrating steps in providing real-time intelligent assistance in 3- D, VR, or AR, according to present principles.
  • Fig. 26 is a flowchart illustrating steps for themed shopping in 3-D, VR, and AR, according to present principles.
  • Fig. 27 is another flowchart illustrating steps for themed shopping in 3-D, VR, and AR, according to present principles.
  • Fig. 28 is a flowchart illustrating steps for shopping in augmented reality in malls or shopping centers or in individual stores, according to present principles.
  • Fig. 29 is a flowchart illustrating steps for shopping including AR functionality in physical stores, according to present principles.
  • Fig. 30 is a flowchart illustrating furniture, home decor, and appliance products fitting to a 3-D, VR, or AR space or room, according to present principles.
  • FIG. 31 is a flowchart illustrating AR ID and location systems, according to present principles.
  • FIG. 32 is a flowchart illustrating implementing real estate transaction functionality within VR, AR, or 3-D, according to present principles.
  • Fig. 33 is a flowchart illustrating steps in a travel application using 3-D or VR functionality, according to present principles.
  • Fig. 34 is a flowchart illustrating steps in a travel application using AR functionality, according to present principles.
  • Fig. 35 is a flowchart illustrating steps in education/classroom learning using 3-D or VR functionality, according to present principles.
  • Fig. 36 is a flowchart illustrating steps in education/classroom learning using AR functionality, according to present principles.
  • Fig. 37 is a flowchart illustrating product/service integration into 3-D or VR games, according to present principles.
  • Fig. 38 is a flowchart showing steps in a first interface according to the integration flowchart of Fig. 37, according to present principles.
  • Fig. 39 is a flowchart showing steps in a second interface according to the integration flowchart of Fig. 37, according to present principles.
  • Fig. 40 is a flowchart showing steps in a third interface according to the integration flowchart of Fig. 37, according to present principles.
  • Fig. 41 is a flowchart illustrating automatic configuration and programming of 3-D models, wherein 3-D models may be provided with attributes providing additional functionality for 3-D, VR, or AR applications, according to present principles.
  • Systems and methods according to present principles relate to processing and presentation of data from a database, transmitted in a pushed or pulled fashion from the database and rendered to present a 3-D video or image to a viewer.
  • the 3-D video or image may be rendered as a 3-D CG object on a display via, e.g., a GPU, video graphics card, or integrated chipset, or rendered on a specialized device such as a virtual or augmented reality headset so as to be perceived by a viewer as a 3-D visualization.
  • a 3-D/VR/AR device may enable viewers, a.k.a. end-users, to visualize a computer-simulated 3-D world.
  • conventionally, 2-D content is viewed with a web browser, with a generally poor user experience for applications such as online shopping, education, travel, or entertainment.
  • a 3-D VR system provides significant technological benefits to viewers, and allows viewers to traverse such transactional settings, e.g., virtual and/or online environments, in a convenient way, saving computing cycles, battery power, and the like.
  • An AR system similarly overcomes various technological obstacles to viewer transactional experiences, allowing end-users to receive, process, and display signals relevant to traversing terrain such as real-world locations, and allowing access to products and information about products in a significantly more convenient fashion.
  • AR displays may be 3-D or 2-D.
  • Such systems also allow for the invitation of friends to VR or AR experiences, allowing friends to take part in the experience.
  • a 3-D or VR cloud or platform enables 3-D or VR experiences, including transactional experiences, e.g., shopping, traversing real estate for purchase/sale/rental, educational and learning experiences, entertainment experiences, or travel/venture experiences.
  • Systems and methods may typically be employed in VR, but may default to 3-D in the absence of a VR device such as a headset.
  • a 3-D, VR, or AR system generally includes one or more cloud servers with software and databases incorporating load-balancing and security, connected with end-users through a network (e.g., private or the Internet) to one or more desktop computers, laptops, tablets, mobile phones, or any other user devices with appropriate processing power.
  • Users may or may not need glasses or a device that allows them to enter a virtual or augmented reality environment, the same generally simulating the real or a virtual world.
  • the glasses or devices may connect to end-users' computing environments, e.g., desktops, laptops, tablets, or mobile phones, or may also be embodied as dedicated standalone virtual reality devices that simulate the real or a virtual world for end-users. End-users will generally have their computers, tablets, mobile phones, or other devices connected to the Internet or private networks. Such devices generally can download software and communicate with servers.
  • user VR devices can connect to end-user devices or may also be standalone.
  • a system 50 is illustrated with a number of client devices 10a-10d in network communication with a server 12 (server 1).
  • the server 12 may be associated with a particular type of application, e.g., coming from a particular source, and it will be understood that client devices may access a number of servers and different applications.
  • the server 12 is illustrated with components including a security module 14, a database 18, a module 22 for load-balancing, a module 16 including AR/VR application components, and a module 24 including AR/VR engine components.
  • the module 16 for AR/VR application components may include a UI module 26 and a renderer module 28, the same for rendering the results of the graphical processing in 3-D and/or VR/AR.
  • the module 24 for AR/VR engine components includes (in a nonexhaustive list): a module 32 for a user interface, a renderer module 34 for rendering in 3-D, e.g., VR/AR, a 3-D modeling tool module 36, a build/creation tool 38, a media tool 42, a communications tool 44, and a social network module 46, the same including an API for access to social networking sites.
  • in a first step of configuring a server to provide a media experience for VR/AR/3-D as described herein, a first server is configured or is operable to provide a first user interface operable to allow a creator to create or input a target mesh (step 11).
  • a creator either creates the target mesh or causes one to be entered or input into the system.
  • the target mesh may be created in an outside 3-D modeling program, and imported into the graphical processing system.
  • the target mesh may then be situated in an appropriate location in the environment (step 13).
  • the creator may choose various locations in which to situate the target mesh.
  • the creator may also place various textures or shading on the target mesh, or provide other materials thereon.
  • the target mesh may have no particular scale associated with it.
  • the creator may simply be creating a scene for viewing by a viewer.
  • the target mesh may have a particular scale associated with it. For example, where a target mesh from a creator is to be aligned or measured up against a subject mesh from a viewer, then it is important that the two meshes have the same scale. For example, if a target mesh is a piece of furniture or an article of clothing, the same would have an associated scale, so that the same could be measured up against the room of a viewer, or an accurate body avatar of the viewer, respectively.
  • the first server, or a second server in network communication with the first server, is configured or operable to provide a second user interface which in turn allows a first viewer to create or input a subject mesh (step 15), which may then generally be compared in some way against the target mesh entered by the creator.
  • the viewer may use a 3-D modeling program to create the subject mesh, or more commonly the subject mesh may be created for the viewer, based on data entered by the viewer. For example, the viewer may enter a room size, and the same may then constitute the subject mesh against which a creator-configured target mesh is compared, e.g., the subject mesh being a room and the target mesh being a piece of furniture to be situated within the room.
  • the viewer described above is referred to as a first viewer, and the first viewer invites a second viewer to take part in the media experience.
  • the second user interface is configured to allow the first viewer to cause an invitation to be sent to a second viewer to share the viewing session, e.g., by email, text, instant message, or the like.
  • both the first viewer and the second viewer are enabled to share a viewing session and move through and interact with the virtual environment.
  • a next step is that a "fit" of the subject mesh to the target mesh is determined (step 17). For example, one may be compared against the other, and an objective or subjective determination made as to whether the fit is satisfactory. For example, where the subject mesh is a room, a target mesh of a piece of furniture, again appropriately textured to look like a piece of furniture the viewer is interested in, may be situated within the room, and the viewer may gauge whether the result is desirable.
  • the result is binary, e.g., a yes or no determination.
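  • As a hedged illustration of such a binary determination, the sketch below reduces the "fit" test to an axis-aligned bounding-box comparison; the function names are hypothetical, both meshes are assumed to share the same scale, and a production system would more likely rely on a game engine's colliders or a mesh boolean test.

```python
def bounding_box(vertices):
    """Axis-aligned bounding box as (min_corner, max_corner) tuples."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def fits_within(subject_vertices, target_vertices):
    """Crude yes/no fit: is the subject's box no larger than the target's
    on every axis? E.g., could this couch fit somewhere in this room."""
    s_min, s_max = bounding_box(subject_vertices)
    t_min, t_max = bounding_box(target_vertices)
    subject_size = [hi - lo for lo, hi in zip(s_min, s_max)]
    target_size = [hi - lo for lo, hi in zip(t_min, t_max)]
    return all(s <= t for s, t in zip(subject_size, target_size))
```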
  • the viewer may be enabled to change textures on the piece of furniture, i.e., to envision or visualize a different type of furniture within the room. For example, the user may switch out a couch having a leather texture for a couch having a suede texture. Other implementations will also be understood.
  • user input may be received about the fit (step 19).
  • the user input may accept the target and subject meshes, and the result may be optionally rendered into a file that may be subsequently saved (step 21).
  • the user may like the way the couch appears in the room, and may save a visualization of the same to show to their family and friends.
  • the viewer may post the same on a social network using an appropriately configured API.
  • the viewer may modify the target or subject mesh (or both) (step 23). This would be the case of the viewer modifying a material (textures/shaders) of the couch, to determine if a different couch would appear better in the room.
  • a new subject mesh or target mesh may be provided and a subsequent comparison performed (step 25). For example, the viewer may try an entirely different type of couch, or may try the same couch, but in a different location or a different room.
  • Other implementations will also be understood. For example, it will be understood that the definitions of target and subject mesh are arbitrary, and the same may be definitionally switched.
  • the method of Fig. 4 may be applied in VR or AR, but is generally preferred for VR implementations.
  • Fig. 5 refers to an implementation that is more specific to AR.
  • a first step is to receive or create a subject mesh (step 27).
  • This subject mesh may be either created by a viewer or by a creator. For example, if a viewer wishes to visualize a couch in their actual house, the viewer may wear an AR headset in their house and look at a wall against which the couch is to be placed. The subject mesh may have been created by a creator, but instantiated by the subsequent viewer for viewing against a desired wall. Consequently, a next step is to apply the subject mesh against the environment (step 29). New or additional subject meshes may then be received or entered, or a current subject mesh may be modified (step 31). The new or added subject mesh, or modified subject mesh, may then be again applied against the environment (step 29). In this way, the viewer may visualize how the subject mesh fits against the environment.
  • Fig. 6 gives an illustration of a subject mesh 120 being applied against a target mesh 110, particularly in the case where an object such as a piece of furniture is being situated within an environment.
  • Fig. 7 gives an example of a subject mesh 140 being applied against a target mesh 130.
  • the target mesh 130 is an avatar of a viewer, constructed so as to accurately represent the body of the viewer.
  • the subject mesh 140 is an article of clothing, and the article of clothing is being applied against the target mesh 130.
  • the subject mesh 140 is generally received from a clothing supplier, and such meshes are generally available, even from manufacturers, as clothing design is usually performed using 3-D modeling tools.
  • a creator will create a 3-D environment without regard to whether the same will be matched up with, compared against, or in any other way aligned with a viewer created subject mesh.
  • examples of such 3-D environments include theaters, educational applications, transactions including shopping or retail transactions, and the like.
  • These implementations may still include, e.g., a social networking component, where the experience by the viewer is shared with another viewer, typically a friend or teacher. It will be understood that some degree of "matching" may occur, but the same will only entail matching the subject mesh of a viewer avatar against, e.g., a theater seat, classroom seat, laboratory stool, shop, store, mall, or the like. In these cases, the avatar may be appropriately sized to appear to be of normal size given the size of the creator configured target mesh.
  • Target and subject meshes may be created in various ways. For example, user data entry, e.g., of an appropriate height and weight, may lead to an avatar being created of approximately accurate dimensionality relative to the user. The viewer may be provided with a user interface through which they may configure the avatar for, e.g., fine adjustments. In more precise ways, scanning techniques or camera image inputs may be employed to create better meshes, e.g., of products, users (in the creation of avatars), buildings, structures, rooms, malls, shops, stores, clothes, jewelry, or the like. Various other implementations may also be provided, including where a user selects a desired mesh or portion of a mesh from a menu of options.
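  • One way such approximate dimensionality could be derived is sketched below; the template proportions are placeholder numbers, not anthropometric data from this disclosure, and the overrides argument stands in for the fine-adjustment user interface just described.

```python
# Hypothetical proportions: each body measurement as a fraction of height.
TEMPLATE = {"chest": 0.52, "waist": 0.45, "hip": 0.53, "inseam": 0.47}

def avatar_dimensions(height_cm, overrides=None):
    """Approximate avatar measurements (in cm) from an entered height.

    `overrides` lets the viewer fine-tune individual measurements,
    mirroring the adjustment UI described above.
    """
    dims = {name: round(frac * height_cm, 1) for name, frac in TEMPLATE.items()}
    dims["height"] = height_cm
    if overrides:
        dims.update(overrides)
    return dims

# A 175 cm user who also enters an exact waist measurement:
print(avatar_dimensions(175, {"waist": 82.0}))
```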
  • details about the makeup may be employed in the creation of not only a mesh but also (or alternatively) in some cases a texture, material, or shader to be applied against the face or other location of makeup application.
  • the avatar mesh may be appropriately modified according to the appearance characteristics of the applied makeup.
  • Google's Sketchup or other such 3-D building creation tools may be used, including by the use of 360° cameras which can create a model of the interior of a room (or, indeed, of even an outdoor location).
  • a bridesmaid (modeled by an avatar or subject mesh) may be configured to wear a bridesmaid dress (first target mesh) and may appear in a church or other location for a wedding (second target mesh)
  • Systems and methods according to present principles include various components to allow creators to conveniently create environments, as well as for end-users or viewers to either traverse the creator configured environments or to apply subject meshes against the creator configured environments.
  • the creator, which may typically include a developer using a creator system, may create a data file corresponding to a target mesh, which is generally a model including textures/materials/shaders.
  • multiple models may be created and provided in a given environment.
  • a viewer or end-user may be provided a tool in which a subject mesh may be created, which is also a data file, and which may also be a model including textures/materials/shaders.
  • systems and methods according to present principles may be implemented as a "front end", and implemented on a server or even on the same server in which the database is located. In any case, the front end will be in network communication with the database.
  • the front-end is implemented in a virtual reality (VR) system to provide a VR environment for transactional and other experiences.
  • Such may be implemented as a storefront with various products, allowing users to roam about and to select items. Such items may be interacted with by the viewer, e.g., placed in a shopping cart, and appropriate animations provided in some cases to make such interactions more interesting.
  • a storefront or other transactional details are not required in all implementations.
  • the system may simply allow visualizations of products and services that the viewer is interested in from one or more sources.
  • VR or AR may be implemented on devices such as the Oculus Rift, HTC Vive, Sony PlayStation VR, Samsung Gear VR, Google Cardboard, Microsoft Hololens, and the like.
  • augmented reality or AR implementations may also be provided to allow viewers to visualize certain image or CG data against a real-life backdrop.
  • an AR overlay may be employed when a viewer is in a physical "brick-and-mortar" store.
  • An AR overlay may also be used when a viewer is in their own home or office, e.g., to visualize furniture placement, kitchen appliances, and so on.
  • the same visualization may simply appear on a screen, e.g., in 3-D, but without reference or use of a specific VR or AR device.
  • other inputs to the system include data from accelerometers, allowing the creation of a VR environment, and GPS data, which may be particularly important in AR implementations, allowing user navigation around an environment such as a store.
  • Other input data of note include user entered data, e.g., pertaining to their avatar or the environment. For example, user data entry of "Yosemite" may cause a VR or AR implementation of that national park to be instantiated.
  • Processing of the input data is generally performed by an attached computer, within the VR or AR device, or in the cloud. Outputs are provided by the VR or AR screen, or by a computer screen where VR/AR are not employed.
  • Various technological benefits inure to the described implementations, including a significantly more intuitive interface and an environment that is easier to navigate. Viewers may conduct transactions with friends, and transactions may occur as if the friend were only a few feet away. Such functionality depends on implementation within a specially designed and configured computer system, as realistic mutual transactions are otherwise impossible.
  • the above-described objective or subjective determination of the suitability of a subject or target mesh vis-à-vis the other may be particularly useful in the context of mutual shopping, where a viewer is shopping with a friend.
  • one user may be employing AR, testing a subject mesh against a real world environment, while a friend, connected via a social engine, provides commentary in a VR environment.
  • the VR environment may include a video feed of the view seen by the viewer using an AR device.
  • Advantages of such systems and methods include reduced computing cycles and battery power to achieve the same benefits for users. For example, viewers may be assisted in the navigation of difficult environments, and product location within those environments may be provided to the viewer. Viewers may also be enabled to conduct transactions with others, in a way previously requiring more complicated videoconferencing, which in turn previously required complicated and bulky equipment, and so on.
  • An exemplary interface architecture is shown in a modular fashion in the system architecture diagram 100 of Fig. 8.
  • Various applications 54a-54f are shown interacting with the interface architecture, and various end-users 82a-82f are shown also interacting with the architecture.
  • the applications 54i may pertain to different environments or the like, and are generally embodied on one or more different servers, although in a given implementation multiple environments may be situated on one server or in a cloud architecture.
  • the various applications 54i may pertain to, e.g., a merchant application, a real estate agent/owner application, a travel application, a hotel or resort owner application, an entertainer application, an education application, a social application, and so on. Specific implementation details of these are described below.
  • End-users 82i generally communicate with the architecture through network communications, which may be on a smart phone, desktop computer, laptop computer, tablet computer, or the like.
  • the interface architecture 100 includes a security module 56, a user management module 58, an entity management module 62, a data storage module 66, a media import/export module 68, a communications management module 72, a transaction management module 74, an open API module 76, and VR, AR, and 3-D engines 78.
  • the security module 56 provides security to the system, and works in combination with the user management module 58 to securely log viewers into a particular application.
  • the user management module 58 allows the creation of user accounts, storage of user profile and preferences information, and so on.
  • the entity management module 62 provides management of various entities, particularly those operating applications 54i, and serves as a management module for business users' stores for products or services. The data storage module 66 stores data associated with the applications and users. Access to the database or data storage module is generally by one or more database accessors, which are executable statements associated with getting data, setting data, and so on, from a database.
  • the media import/export module 68 allows media importation and exportation, the media pertaining to target meshes, subject meshes, virtual environments, meshes representing projects, tools for importing 3-D and 2-D models and images, and the like.
  • the communications management module 72 provides network communications functionality between creator servers, servers implementing systems and methods according to current principles, and end-user systems, as well as designers, 3-D artists, and so on. For example, the module 72 may provide voice, text, and video communications.
  • the transaction management module 74 provides functionality related to conducting transactions for users, e.g., providing shopping cart functionality, credit or debit processing, or the like.
  • the open API module 76 provides access to systems and methods according to present principles by various social networking sites or the like. In addition, the open API allows appropriate access of social networking feeds, such that users can access the same to promote virtual environments, promote products, send invitations to friends for mutual shopping ventures, or the like.
  • the open API further provides an API for developers to build applications on the platform.
  • an open API is a publicly available application programming interface that provides developers with programmatic access to a proprietary software application. APIs are sets of requirements that govern how one application can communicate and interact with another.
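  • As a hedged sketch of how one endpoint of such an open API might look (the route, payload fields, and framework choice here are illustrative assumptions, not an interface defined in this disclosure), an invitation endpoint could be exposed as follows.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/invitations", methods=["POST"])
def create_invitation():
    """Accept an invitation request from a first viewer."""
    body = request.get_json(force=True)
    session_id = body["session_id"]    # the shared viewing session
    invitee = body["invitee_contact"]  # e.g., an email address or handle
    # A real system would persist the invitation and deliver it by
    # email, text, or instant message, as described elsewhere herein.
    return jsonify({"status": "sent",
                    "session": session_id,
                    "to": invitee}), 201
```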
  • the VR, AR, and 3-D engines 78 provide functionality for the creation of the virtual environment. For example, the engine 78 may provide tools for the creator to easily and conveniently implement a VR/AR/3-D storefront. The engine 78 may further provide tools for the viewer to easily and conveniently create a viewer avatar, or a viewer subject mesh pertaining to an item which the viewer desires to visualize within the 3-D environment of a creator.
  • FIG. 9 illustrates a modular depiction 150 of a system architecture according to present principles. As may be seen, various interfaces are provided, such as an interface 116 for an application owner 1, who may be a retailer, merchant, or other storefront offering real-life products but portraying the same as 3-D models within a virtual environment.
  • An application owner 2 is provided with an interface 122, and the same may provide access by a travel agent, teacher, educator, hotel owner, entertainer, or the like.
  • An end user interface 118 is also provided for end-users or other viewers to access desired environments, and such allow communications with the computing environments of end-users, e.g., computers or mobile devices, to exchange information, enable downloads, and install or update software to end-user devices.
  • An environment layer 117 may then be provided in which the creator or end user interfaces are implemented, and the same may be an environment which is 3-D, VR, or AR.
  • One level up are various objects which the viewer or creator may view and/or manipulate, including 3-D models 98, video files 102, video streaming files 108, and the like.
  • Other 2-D and 3-D images and textual objects are provided in this layer by a module 106.
  • Accessible by viewers and creators, depending on implementation, is a 3-D model generation engine 104, which can be employed to generate 3-D models from text or can convert 2-D images to 3-D models.
  • Various components may be "gamified" by a game engine 96, which may also provide appropriate interaction components for, e.g., a target mesh and a subject mesh. For example, through the game engine, colliders may be placed on the meshes to allow a subject mesh to abut a target mesh without passing through the same.
  • the build/create layer 94 may be employed by a creator to create one or more underlying target meshes to be displayed to viewers. The same may further allow a viewer to create a subject mesh.
  • a social engine 92 may be employed to allow invitations to friends for mutual shopping, sharing functionality, e.g., sharing a potential purchase with a friend, or "buddy shopping". The social engine may enable end-users to invite friends to mutual experiences, e.g., stores, theaters, travel locations, or to share items in the virtual reality environment. For example, users may shop together in a store.
  • a communication engine 88 may be employed to allow voice, video, texting or chat, and other communications with other viewers, or even with the creator. Such may also afford videoconferencing capabilities. Where transactions are occurring, a transaction and/or payment layer 86 may be employed to facilitate payment by one party and credits to another.
  • an application engine layer 85 may be employed to accomplish required functionality to allow the layers below it to communicate with applications, e.g., backend database functionality of a given source, e.g., the back end of a storefront, the back end of a travel site or game site, and so on.
  • applications 84a-84f may include, e.g., stores, malls, houses, apartments, theaters, games, e.g., multiplayer arcades, classrooms, resorts, hotels, adventures and other travel applications, demonstrations, e.g., of products, processes, how a product works, how a device is put together, and the like.
  • these layers may be situated in different locations so as to accomplish different goals as desired by the designer.
  • a media display and video streaming engine to display text, image, or video, as well as to allow video streaming.
  • a block assembly engine may be employed to provide a user interface whereby a viewer or creator may assemble building blocks to build real-life items, e.g., a car, boat, store, lab, and so on.
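  • A minimal sketch of the block assembly idea follows; the translate and assemble helpers are hypothetical, and a real block assembly engine would also merge faces, materials, and colliders rather than just vertex lists.

```python
def translate(vertices, dx, dy, dz):
    """Move a block's vertices by an offset (all blocks share one scale)."""
    return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

def assemble(placements):
    """Combine placed building blocks into a single vertex list.

    `placements` is a list of (vertices, (dx, dy, dz)) pairs describing
    where each premade block goes in the assembled item.
    """
    combined = []
    for vertices, offset in placements:
        combined.extend(translate(vertices, *offset))
    return combined

# Two unit cubes side by side, sketching the start of a wall:
cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
wall = assemble([(cube, (0, 0, 0)), (cube, (1, 0, 0))])
```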
  • a template or overlay may be placed on top of a game engine such as Unity, Unreal Engine, or the like, to allow storeowners to create stores and to allow viewers to move 3-D models around in the virtual retail environment.
  • viewers may be enabled to see items being animated and/or rendered as being placed in a shopping cart, and so on.
  • Fig. 10 illustrates an exemplary implementation of an architecture 200, with particular regard to augmented reality applications. Certain modules of the architecture are the same as that of Fig. 9 and their description is not repeated here.
  • the applications at one end of the architecture, denoted applications 124a-124f, are generally specific to AR, although in some cases recourse may be had to a repurposed (and reconfigured or reprogrammed) VR application.
  • a location-based service engine 126 is used to provide location and other services, and the same may interact with a service application engine 127, the service application engine 127 using the location-based services data from the service engine 126 and providing the same to the applications 124i.
  • the applications 124 may also provide data going the opposite direction in return.
  • a location capture engine 132 is provided to obtain location-based information, e.g., via pattern recognition, GPS, Wi-Fi, telemetry, and so on.
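  • As one hedged example of using such GPS fixes, a location-based service could pick the nearest store with a great-circle distance computation; the store table and function names below are illustrative assumptions.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_store(user_fix, stores):
    """Pick the store closest to the user's (lat, lon) fix."""
    return min(stores, key=lambda name: haversine_km(*user_fix, *stores[name]))

print(nearest_store((37.7749, -122.4194),
                    {"Mall A": (37.78, -122.42),
                     "Mall B": (37.33, -121.89)}))  # -> "Mall A"
```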
  • An environment layer 144 provides an interface, and in particular an interface to the "real-world".
  • the environment layer may be provided, e.g., by the lens of a glass, by a camera feed, and so on.
  • a media layer 142 is provided on which to render visual components in AR, e.g., CG objects.
  • An interaction engine layer 138 is provided atop the layer 142, such that the viewer is enabled to interact, with other viewers or with creators, with regard to the objects portrayed on the AR display and the underlying physical environment beyond.
  • a user with a VR device will enter the virtual environment, which may be configured for transactions (commerce), a showroom, or a virtual experience. Without a VR device, the viewer will generally be in a 3-D environment.
  • systems and methods may be implemented in the cloud or using a system including servers and software to enable end-users to have experiences including 3-D VR, 3-D gameplay, as well as to enable user avatar creation through entering text information or importing a 2-D/3-D self image to the virtual reality system, in which case the system may automatically generate a 3-D model for the user if the user enters a 2-D image.
  • Other potential experiences include a virtual shopping experience, a building/creation experience, whereby a user may be enabled to build items such as a car, a boat, a room, an office, a store, a lab, a business, and so on, by using components available in the virtual reality environment, or imported by users.
  • a new user may click a "new user" button (step 228) which leads the new user to a registration/login screen (step 232).
  • a new user in this case an end user or viewer, may be asked to create a new avatar (step 234).
  • the user or viewer will have the option of creating a new subject avatar (step 236) by entering their body measurement information on a provided UI (step 244).
  • the user may import their image (step 246).
  • the user may even be enabled to choose a different image, but with the same measurements. For example, a user may use the face of a movie star, but with their own body measurements. The body measurements taken will generally involve a chest measurement, a waist measurement, a hip measurement, a thigh measurement, an inseam measurement, a sleeve length measurement, and in some cases an upper arm measurement.
  • the user may use that garment as a source of measurements rather than their body.
  • scanning a code associated with the garment may bring up data about the garment, which may be automatically entered into the system for use in avatar creation. In any case, based on the entered measurements, the avatar may be generated.
  • the user may choose an existing avatar (step 238), optionally modifying the same according to their body measurements.
  • the user may load their own avatar, again optionally modifying the same with body measurements (step 242).
  • such body measurements may be particularly useful in the virtual "trying on" of clothes to be purchased, as well as the virtual wearing of jewelry, application of makeup, and so on.
  • Fig. 12 is a flowchart 300 by which creators can conveniently create 3-D environments for consumers, i.e., viewers.
  • a creator may click on a "new user" button (step 182) and thereby be enabled to register with the system (step 184).
  • a user interface may then be instantiated in which the visitor to the site is asked if they are a creator, e.g., a merchant (step 186). In the case where they are not, thus clicking a "no" button or the like, the user may be redirected to the end user login (step 188), as implemented by, e.g., the method of Fig. 11.
  • step 192 If they click that they are a creator, e.g., merchant, they may be directed to a screen in which an option is presented to create an environment, e.g., store (step 192).
  • Various ways are provided in which to build an environment or a store. Where a store or environment is not premade but built by the creator (step 194), then creators, which may include business owners, can import premade 3-D models of their home or office or business, or be enabled to build such (by the provision of an appropriate UI) within the 3-D VR system (step 196). Creators may also choose an option of selecting an existing store from a library of stores (step 198). In this case, various functionality may be provided to edit or otherwise personalize the selected premade store (step 202).
  • steps may be included of importing 2-D images (step 204), importing 3-D images (step 206), importing 3-D models (step 216) of products or the like, importing video files (step 208), or importing streaming video (step 212).
  • Text may also be provided to display information about the environment descriptively. Such options will, it is understood, also be provided for premade stores, as part of an editing or personalization step.
  • Such steps provide the creator with a means for provisioning the store, and placing 3-D models of products in desired locations, e.g., shelves, for viewing and purchase by a viewer.
  • the creator generally accesses a database, the database including data corresponding to an inventory of available items.
  • a server may expose a user interface, the user interface operable to allow a creator to situate items in the inventory in a virtual environment, by allowing or enabling the creator to create 3-D models of the items, or allowing or enabling the creator to import 3-D models of the items.
  • a user interface may then be exposed for viewers to access the virtual environment, e.g., first and second viewers, and further to allow viewers to invite others to access the virtual environment.
  • Creators may also build items besides structures, e.g., a car, a boat, a chair, or the like. In these ways described, creators may build a personal home or room or office space, or any environment, to simulate the real world using premade building blocks (or custom building blocks) in the 3-D or VR environment. 3-D components may also be imported by users. The store engine may then be built and finalized (step 218) for a given product (step 222) or service (step 224).
  • such steps may include determining and creating ways for a viewer to traverse the store or environment, providing software permissions for where viewers are allowed to traverse, providing rules for purchase or traversal, as may depend on the identity of the viewer, as may be based on login credentials, or the like.
  • where the environment is primarily for a viewer to traverse a scene, just for enjoyment or amusement, it may not be required that the environment have a particular scale vis-à-vis the viewer.
  • the scale of the environment may be selected to simply be comfortable to the viewer, to not cause nausea, and so on. In some cases, scale may be particularly important and taken account of.
  • items in the 3-D environment such as furniture may have a scale associated therewith, such that when the furniture is placed by the viewer in a virtual environment such as a 3-D representation of their living room, the scale of the living room and the scale of the furniture are the same, thus allowing an accurate picture of how the furniture will appear in the room.
  • with a subject mesh, e.g., the couch, and a target mesh, e.g., the living room, colliders may be appropriately employed such that, as the user moves one around the other, the CGI objects or meshes do not interpenetrate, causing unphysical visualizations.
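  • A collider of the kind described can be approximated, for illustration only, by an axis-aligned overlap test: a placement UI would reject or push back any move of the subject mesh for which the test below returns True. The helper names are hypothetical.

```python
def aabb(vertices):
    """Axis-aligned bounding box of a mesh's vertices."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def interpenetrates(a_vertices, b_vertices):
    """True when the two boxes overlap on every axis, i.e., the meshes
    would visually pass through each other rather than merely abut."""
    (a_min, a_max), (b_min, b_max) = aabb(a_vertices), aabb(b_vertices)
    return all(a_lo < b_hi and b_lo < a_hi
               for a_lo, a_hi, b_lo, b_hi in zip(a_min, a_max, b_min, b_max))
```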
  • the scale may be stored as metadata with the target and/or subject mesh, or via another means.
  • creators may be enabled to quickly and easily create appropriate 3-D environments, including online environments and storefronts. Benefits of such implementations include that creators are not burdened with requiring significant software or modeling expertise to create such environments.
  • Fig. 13 is a flowchart 400 indicating one implementation of 3-D model creation or importation, which may apply to the creation of most of the 3-D and/or virtual environments described.
  • An initial step is registration and/or login (step 252).
  • the creator may be asked if they wish to use their own product or model (step 254). If the answer is yes, the creator may import images or 3-D models (step 258), e.g., via a 3-D model import engine, to enable 3-D models to be imported to the VR engine.
  • Such imported images or 3-D models may then be converted to 3-D models (step 262) in the case of 2-D images or used directly as 3-D models (step 264).
  • the models may then be moved around to appropriate locations as desired by the designer (step 266).
  • Textual descriptors may be entered or imported to describe the models (step 268); e.g., SKUs, barcodes, QR codes, or the like may be entered in cases where the models represent items for sale.
  • the models may then be tied to a backend database (step 272) via an appropriate API, so as to allow control and tracking of items sold in store inventory systems, accounting systems, and the like. It is noted that, where the creator has no models of their own, a market may be provided and visited for a creator to select (and optionally purchase) models for use in their store (step 256).
  • Fig. 14 is a flowchart 450 illustrating a more detailed method of 3-D product creation.
  • initially, a creator has a 2-D or 3-D image (step 274), and the user may register and/or log in to a system according to present principles (step 276).
  • the user may indicate a desire to create a 3-D product model (step 278).
  • if a new product model is not created, the user selects a premade model (step 284). If the user indicates a desire to create a new product model, then the product model may be created (step 282).
  • the 2-D or 3-D product image may be imported (step 286), followed by the 3-D model generation engine creating the actual model for placement in the environment based on the product model created in step 282 and the 2-D/3-D product image (step 288).
  • the 3-D model generation engine may take the created product model and wrap the 2-D or 3-D product image around the same, texturing the model, and may further apply one or more shaders so as to achieve a desired model of a product.
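  • A very rough stand-in for that wrapping step is a single planar UV projection, sketched below; real engines use proper UV unwrapping, and the function here is an illustrative assumption.

```python
def planar_uvs(vertices):
    """Assign each vertex (u, v) texture coordinates in [0, 1] by
    projecting the mesh onto the X-Y plane, so a 2-D product image
    can be applied as a texture."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    x_min, x_span = min(xs), (max(xs) - min(xs)) or 1.0
    y_min, y_span = min(ys), (max(ys) - min(ys)) or 1.0
    return [((x - x_min) / x_span, (y - y_min) / y_span)
            for x, y, _ in vertices]
```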
  • Fig. 15 illustrates a flowchart 500 for creation of an environment in which services are showcased.
  • the creator may be asked whether they intend to use their own 3-D service model (step 298).
  • a service model may include animations of services provided, modeled images or textual descriptors of services provided, or the like.
  • a market may be provided from which the creator may select premade or prefabricated models (step 302).
  • where the creator does have their own models, such are imported (step 304).
  • the imported objects may be actual 3-D models or 2-D images. Where the imported objects are 3-D models, the same may be used directly (step 308).
  • the 2-D images may be converted to 3-D models in an appropriate fashion, e.g., via the model generation engine (step 306). Once all the models are created or imported, the designer may move them around and otherwise showcase the services in a desired fashion (step 312).
  • Fig. 16 illustrates a flowchart 550 of a more detailed method of creating models for products or services.
  • an image may be entered into the system (step 316).
  • the image may be a 2-D image such as a photograph, or a 3-D image, e.g., a set of stereoscopic photographs.
  • Machine learning (step 318) may be employed to allow the system to learn over time and improve its estimation and calculation/creation of 3-D models.
  • Machine vision may also be employed to review and analyze images for depth data, so as to reconstruct 3-D objects from 2-D images. Steps involved may include one or more of pattern recognition/depth calculation (step 322), 3-D model generation (step 324), and finally creating the 3-D model (step 326), which may include applying textures and shaders to the created 3-D model.
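  • Once per-pixel depth has been estimated (for example by the machine-learning step above), the 2-D image can be lifted to 3-D points with a standard pinhole back-projection, as in this sketch; the camera intrinsics and array shapes are assumptions for illustration.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map to an (H*W, 3) array of (X, Y, Z) points
    using the pinhole camera model: X = (u - cx) * Z / fx, and likewise
    for Y."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a flat 2x2 depth map one meter from the camera.
cloud = backproject(np.ones((2, 2)), fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```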
  • Fig. 17 illustrates a flowchart 600 for creation of a desired creator logo.
  • a creator generally starts with a file corresponding to a 2-D or 3-D image (step 328), and commences by registering or logging into the system (step 332). The creator may be asked if they wish to create a new 3-D logo (step 334). If the answer is "no", the creator may be allowed to select a premade logo (step 338). If the answer is affirmative, then a create logo subroutine may be commenced (step 336). For example, the creator may take the 2-D or 3-D image file from step 328 and import the same to a logo creation engine (step 342).
  • the imported image may then serve as the basis for a 3-D model as created by a 3-D model generation engine (step 344).
  • the user may select an existing premade 3-D logo and provide edits thereto (step 346), again resulting in a usable 3-D logo model (step 348).
  • Fig. 18 illustrates a flowchart 650 related to integration with existing e-commerce backend systems.
  • products may be integrated into the 3-D or VR/AR displays according to present principles, and/or information may be pulled from backend databases.
  • a 3-D product model may be input into the system (step 352).
  • a product ID may be assigned to the 3-D product model (step 354).
  • exemplary product IDs may include SKUs, UPC barcodes, RFIDs, EAN-13 identifications, or any other identifier.
  • the 3-D models may then be imported into an appropriate engine (step 356).
  • the 3-D models may be entered into a transaction engine, game engine, or the like.
  • the engine may then be interfaced with a transaction database through an appropriate API (step 358).
  • the API may be employed to pass product information to and from the engine (step 362). Such information may include a product description, price, stock levels, and so on.
  • the engine may then employ the product information and display or render the 3-D model and the virtual environment, e.g., in 3-D, VR, AR, or the like.
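  • The following hedged Python sketch illustrates one way the engine might pull such information through an API; the endpoint URL, field names, and the model.metadata attribute are hypothetical assumptions of the sketch:

        # Hypothetical endpoint and field names; model.metadata is assumed.
        import json
        import urllib.request

        BACKEND = "https://backend.example.com/api/products"  # made-up URL

        def fetch_product_info(product_id):
            """Pull description, price, and stock for a product ID such as a
            SKU, UPC, or EAN-13, for labeling the rendered 3-D model."""
            with urllib.request.urlopen(f"{BACKEND}/{product_id}") as resp:
                return json.load(resp)

        def bind_model_to_product(model, product_id):
            info = fetch_product_info(product_id)
            model.metadata.update(description=info.get("description", ""),
                                  price=info.get("price"),
                                  stock=info.get("stock", 0))
            return model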
  • the flowchart 700 of Fig. 19 illustrates an exemplary user process for a viewer in AR.
  • a viewer registers and/or logs into the system (step 366). Variations may occur at this point. For example, if a viewer has found a product online (step 367), the viewer may select a store/location using an appropriate user interface (step 368), e.g., by gazing at a store location and having the gaze tracked. A 3-D version of the store may show up on the screen (step 372), and once the viewer is in the virtual store, the AR may be employed to direct the user to the product location (step 374).
  • a user interface may be employed which when activated allows variations in the product to be shown (step 376). Similarly, similar products may be displayed (step 378).
  • an in-store or online coupon or other promotion may be activated (step 382). The viewer may if desired add the product to a virtual fitting room to allow their avatar, if accurately sized, to virtually "try on" the product, if the same is an article of clothing (step 388). The viewer may then check out (step 392) and allow the product to be shipped.
  • Alternatively, following step 366, the viewer may find the physical product in a store (step 369).
  • the product code, RFID, or other identifier may then be scanned (step 384).
  • a 3-D version of the product may then appear on the AR display (step 386). Similar variations may occur as above, including activating to show product variations, activating to show similar products, or the use of in-store or online coupons or promotions.
  • the viewer may try on the product, and/or allow their avatar to try it on.
  • the viewer may purchase a product in normal fashion, or may complete the transaction online, and either be allowed to take the product home or the same may be shipped.
  • AR implementations provide numerous benefits to consumers and also provide benefits to computing environments, as users may be enabled to more quickly find items and locations of interest, thus enabling more efficient and focused use of technology at hand, saving computing cycles and battery power.
  • systems and methods according to present principles include hardware and software that provides multi-platform (mobile and desktop) and VR platforms to simulate real-life malls or stores or to provide virtual malls or stores, giving customers near real-life shopping experiences such as: walking into a mall, entering a store, meeting sales associates, checking product displays, trying on products, watching a demo or promotion, and so on.
  • systems and methods according to present principles may default to a 3-D environment viewable on a display screen.
  • Augmented reality implementations may be provided for in-store shoppers. Such allow shoppers to enter a 3-D virtual store, which is generally an exact or highly accurate simulation of the real store, to help shoppers navigate in malls and stores and to find merchandise quickly.
  • the AR system may recognize users' locations based on GPS or by the use of sensors in stores or malls through the internet of things or other techniques, e.g., Wi-Fi, infrared, telemetry, the use and tracking of wearables, and so on.
  • the system thus allows users to search for products or product categories and to be shown a location of, and a path to, the product or product category. Shoppers may then follow the path indicated to reach the location of the product or product category.
  • the system may further provide the viewer with product discounts or other promotions or recommendations that may be of interest.
  • a shopper's friend can join the 3-D store and look at the same merchandise with the shopper.
  • the shopper's friend may employ a VR headset which views the same scene as the shopper's AR headset, but from a slightly different vantage point.
  • the portion of the scene that is actual or real life in AR may be portrayed in VR by, e.g., a 3-D model, a computer-generated depiction, or even a video feed.
  • The VR view can appear to emanate from the same location as the viewer, in which case the same video feed can be employed and the shopper's friend sees the same vantage point as the shopper; alternatively, the view can be made to be from a slightly different vantage point, to simulate that the shopper's friend is standing next to the shopper.
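  • One possible way to compute the friend's offset vantage point is sketched below; the pose representation and offset distance are assumptions for illustration:

        import numpy as np

        def friend_camera_pose(shopper_pos, shopper_forward, offset_m=0.6):
            """Place the friend's virtual camera offset_m to the shopper's
            right, looking the same way, to simulate standing side by side.
            Assumes a y-up coordinate system and a roughly horizontal gaze."""
            fwd = np.asarray(shopper_forward, float)
            fwd = fwd / np.linalg.norm(fwd)
            right = np.cross(fwd, [0.0, 1.0, 0.0])
            right = right / np.linalg.norm(right)
            return np.asarray(shopper_pos, float) + offset_m * right, fwd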
  • the AR system can be implemented in an independent or coordinated manner with a VR system.
  • the AR system may be implemented in 2-D or 3-D.
  • either AR or VR may be implemented as a 2-D application.
  • two additional engines are provided, a location engine to identify the user location, and a service engine to provide user services based on location.
  • an AR system generally will include just the location and service engine.
  • the viewer may have several options to show the merchandise to their friend. For example, if the merchandise is in a 3-D/VR store, the viewer and their friends can look at the product in the 3-D/VR online virtual environment, and can discuss within the context of that system. If the merchandise is not in the 3-D/VR online environment, or the viewer wishes to "try on" the product, the viewer can use a 3-D scanner or 3-D model capture booth to scan the product or himself/herself with the product, and can automatically import the image to the 3-D system for sharing with friends.
  • the platform/system may provide features that will benefit both in-store and online shopping customers, enhancing online and in-store consumer confidence of purchases, bridging the gap between online shopping and in-store shopping.
  • the system can simulate a city or mall or store for online shoppers. Online shoppers can view both products and services.
  • a viewer can be enabled to walk on a virtual street and enter a 3-D/VR mall or store through their mobile or desktop device, e.g., coupled with VR or AR functionality.
  • the viewer can enter a store and meet sales associates or view a model.
  • the viewer can enter a demo room or demo table to watch a demo or product promotion, to view or test products in-store.
  • the viewer can enter a product promotion event in a theater.
  • the viewer can watch a movie in a theater.
  • the viewer can enter a class or school for learning.
  • the viewer can shop for services such as travel products.
  • Online or in-store viewers can invite friends or relatives to a virtual store in real time to see the products and help them make purchase decisions, regardless of distances between them.
  • Viewers can create their own virtual room or house, can fit furniture within the same, can fit other items such as housewares, home decorations, or DIY/home-improvement products, all before making expensive investments or purchases.
  • the viewers can virtually attempt to construct or assemble products, where necessary, prior to buying, to determine the complexity of an assembly procedure.
  • a creator who is a retailer can create a virtual live sales associate avatar to assist online shoppers, which provides a more user-friendly experience than merely chat.
  • Viewers can enter a 3-D virtual mall or store through a mobile device to locate stores or products, or compare prices (in certain such implementations, no VR/AR equipment is required).
  • Retailers/creators can register and have the option of selecting prebuilt stores or can create their own custom stores. Drag-and-drop building blocks may be provided to make such construction more convenient. Product images/models may be uploaded to stores and placed on shelves. Stores may be decorated in any manner enabled by materials/textures/shader creation.
  • Creators can preselect or create a customer service avatar to assist online shoppers.
  • Creators/retailers can set up or build a demo room for products or services.
  • Creators/retailers may employ an API to transfer online merchandise data.
  • Creators/retailers may be enabled to automatically convert 2-D product images to 3-D product images.
  • Systems and methods according to present principles can also be used in a 3-D VR platform, e.g., with or without a payment component, to allow the merchant to display products or services.
  • Fig. 20 illustrates a method 750 related to 3-D, 3-D animation, or VR product displays as related to 2-D transaction sites, and more particularly where such are displayed on the "front end" of such 2-D transaction sites.
  • a transaction site viewer registers and/or logs on (step 366).
  • the 3-D model is then displayed directly on the front page of the site, or in a product gallery, replacing regular product images (step 365).
  • the viewer may click on the 3-D model or 3-D animation or the product name (step 374), and additional details may be displayed (step 376).
  • the 3-D model is not directly displayed (step 371).
  • the product image or name is displayed, and if activated or otherwise clicked on (step 368), the viewer may be led to a button with a title such as "See in 3D" on the product page (step 372).
  • the product model, 3-D model, or 3-D animation may then be displayed (step 376).
  • Fig. 21 illustrates a method 800 related to similar model types as in Fig. 20, but where the same are integrated as part of the backend.
  • a first step again is a login step, and in this case the transaction site may be logged onto with administrative privileges (step 378).
  • a product section may be activated (step 382), where the user can select the product or service.
  • the user may then choose, on an appropriate user interface, an indication of a desire to click to upload a desired model, e.g., a 3-D model, 3-D animation, and so on.
  • the user may then select the model to be loaded (step 386), and the same may be subsequently uploaded (step 388).
  • the user with administrative privileges can see the 3-D product or service model in the user page and admin backend page, i.e., the product model or animation may then be displayed in the front end, or in both the front end and the backend (step 392).
  • A button section may be provided in the admin page (usually on the front page) that either allows a website admin to upload fitting-room 3-D models, or enables a building environment for users to build their own fitting room.
  • users can import premade 3-D models of their home or office, or build numerous types of constructions in the 3-D or VR environment, either for visualization of products or services, transactions regarding such products or services, e.g., commercial transactions, entertainment, and so on.
  • a commercial enterprise can build an environment representative of their business, to show off products and services.
  • the commercial enterprise can build a virtual environment representative of a showroom, offices, or the like.
  • Commercial enterprises or users can build structures such as cars, boats, chairs, and so on.
  • Creators and viewers can build a personal home or room or office space, or can build any environment to simulate the real world using premade building blocks in the 3-D or VR environment, and can build or import 3-D components.
  • Creators may build environments in several ways. In one way, home or office dimensions may be entered, and the system may automatically generate a 3-D model of the home or office. Creators may also enter windows or door dimensions, and the system may automatically generate windows and doors. In this way, creators can put windows or doors (or other accoutrements) on their virtual room or office. In a second way, creators can import/upload premade 3-D models of their homes or offices. In a third way, creators may take a photo of their real-world home or office to import to the system. A 3-D model generator, as described above, will automatically generate a 3-D model for such users based on the photos. In some implementations, the creators may have the option to edit the 3-D model. In a fourth way, creators can use building blocks from the system itself or imported from external sources, and build the appertaining environments within the system. Numerous customization options may be provided, including allowing the creator to choose color, style, and so on.
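  • A hypothetical sketch of the first way, generating a room model from entered dimensions, follows; the Wall and Room structures are illustrative only, not the system's actual data model:

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class Wall:
            corners: Tuple[Tuple[float, float, float], ...]  # one quad

        @dataclass
        class Room:
            width: float    # metres along x
            depth: float    # metres along z
            height: float   # metres along y
            walls: List[Wall] = field(default_factory=list)

        def build_room(width, depth, height):
            """Lay out floor corners, then sweep each floor edge up to the
            ceiling to form four wall quads."""
            room = Room(width, depth, height)
            floor = [(0, 0, 0), (width, 0, 0), (width, 0, depth), (0, 0, depth)]
            for (x0, _, z0), (x1, _, z1) in zip(floor, floor[1:] + floor[:1]):
                room.walls.append(Wall(((x0, 0, z0), (x1, 0, z1),
                                        (x1, height, z1), (x0, height, z0))))
            return room

        room = build_room(4.0, 3.0, 2.4)  # a 4 m x 3 m room, 2.4 m ceilings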
  • Business users can login as a business owner or entity, and can either create an avatar as an end user or build a business in the 3-D, VR, or AR environment. Like end-users, business users can build their business by importing premade 3-D models, including stores, products, models/animations indicating services provided, and so on.
  • Business users may also be enabled to take a photo of their existing business, for importing to a 3-D model generator to generate the 3-D model. As before, users can have the option to edit the 3-D model. Business owners may also take a 3-D image of their business and the 3-D model generator may generate a 3-D model therefrom. Multiple photos may be taken to allow a better 3-D visualization. In yet other implementations, a 3-D camera may be employed to even further improve the 3-D model of the business. In other implementations, businesses or stores may be built from building blocks in the environment, or imported from external construction programs, to build a business or store from scratch. As before, business owners can choose the style, color, logo, and so on.
  • Business users, i.e., business creators, can either build tangible 3-D or VR businesses such as stores or malls, or can build other sorts of businesses, e.g., restaurants, travel agencies, theaters, games, and so on, for various purposes including merchandising, promotion of services, marketing, providing user experiences, and so on.
  • a restaurant owner may provide a virtual experience at their restaurant, including the provision of food items, even if the viewer cannot fully experience the restaurant without actually being there.
  • Businesses that sell items that require user assembly in the real world can show the packing of the box in the virtual reality environment, and users can have an experience of opening the box and removing and/or assembling the product.
  • viewer or end-user functions include: registering and uploading 3-D models, or pre-selecting an existing avatar with the option of entering their own body dimensions; entering an environment, and moving around inside it; checking or reviewing products or services, talking to virtual customer service avatars, watching product demos, testing products; product selection to reveal features and functions of the product; checking out the product and making payments; inviting friends to shop together in a store through a social network or engine; chatting or talking to friends on social networks while inside the online environment; and opening a product box and assembling or disassembling the product.
  • merchant or creator functions include: registering and selecting prebuilt stores, or creating new stores; customizing prebuilt or constructed stores; uploading 2-D/3-D product images or models to the store, and placing the same on shelves; decorating the store; preselecting or creating a customer service avatar to assist viewers; setting up or building a demo room for products or services; providing or using an API to transfer online merchandise data to other entities in the transaction chain; automatically converting 2-D product images to 3-D product images; displaying a product demo video; making a product demo room or table; providing demos to viewers; creating a product assembly or disassembly instructional video, e.g., in 2-D or 3-D, using the product building blocks to show users how to assemble or disassemble a product; creating product repair instructions in 2-D, 3-D, or VR to show users how to repair products; creating themed stores (as described in greater detail below); importing/creating/setting up sales or customer service avatars; and displaying services such as restaurants, travel agents, and so on.
  • a viewer may be enabled to situate potential furniture in a 3-D model of the viewer's living room, testing not only for size but for aesthetic qualities, e.g., color, and the like.
  • Viewers, e.g., customers, may be enabled to virtually experience a product or service, increasing satisfaction and likelihood of conversion.
  • services may be combined with purchasing functionality. For example, an interior designer may virtually "walk-through" a room, suggesting items with which to furnish or decorate the room. The same may also virtually change the color or wallpaper of a room.
  • a customer service representative may be implemented as an avatar in the virtual environment, and the same may teach a viewer how to dress in a particular way, how to apply makeup, what jewelry might be of interest, and so on. The viewer may then purchase such items.
  • a target mesh will be employed, potentially with a subject mesh of a user (constituted by an avatar).
  • Such experiences include learning experiences, e.g., classrooms, labs, adventures, watching movies or live performances, and so on.
  • Shopping or browsing experiences may include comparison of a viewer subject mesh with a creator target mesh, such generally including the purchase of personal goods such as clothes, shoes, bags, electronic devices, and so on.
  • The employment of the subject mesh is particularly apt where the potential product to be purchased can be "tried on" or, e.g., fitted in a room.
  • Other examples include attempting in VR to assemble a product, to determine the difficulty of assembly.
  • the experience of assembly can be enhanced with haptic or other feedback (which may also be applied to other implementations).
  • Storefronts may be created and modified by the creator, i.e., store owner, retailer, and so on. While a viewer generally does not modify the storefront, certain animations or other graphics may be employed such that the viewer can visualize putting or placing items in a shopping cart, and so on.
  • models may be created of furniture (or indeed any products), and such models may be loaded by a viewer during shopping.
  • Where a viewer wants to see how a couch looks in their living room, the viewer may have already loaded a 3-D model of their living room into their system, or the same may be accessible from the cloud.
  • 3-D model creation may be performed by methods described, including taking a 2-D image, having a 3-D image taken, using a 3-D model generator, and so on. The viewer may then scan a barcode or the like of the couch, and a 3-D model of the couch may be visualized in their living room or on a table, e.g., using an AR device such as Google Glass or Microsoft Hololens.
  • the actual couch may be situated within the (virtual) image or model of the viewer's room.
  • a scale may have been stored with each mesh, e.g., the living room and the couch, such that when the couch is visualized in the living room, the scale is properly set.
  • the cloud may be accessed for the 3-D model of the living room, while the couch in the showroom is seen through the AR device and shown, in particular, situated within the viewer's living room. Similar applications inure to VR environments, and so on. It will be understood that numerous variations of the above are within the scope of systems and methods according to present principles. For example, viewers may employ such systems and methods to fit and select office furniture, store furnishings, laboratory layouts, clean room layouts, and so on.
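  • The following sketch illustrates the common-scale idea described above, assuming each stored mesh carries a units-per-metre factor; the function name and mesh representation are illustrative assumptions:

        def rescale_to_room(product_vertices, product_units_per_m, room_units_per_m):
            """Express product vertices in the room mesh's units so the couch
            appears at true size inside the living-room model."""
            k = room_units_per_m / product_units_per_m
            return [(x * k, y * k, z * k) for (x, y, z) in product_vertices]

        # E.g., a couch modelled in centimetres placed in a room modelled in
        # metres is scaled by 1/100 before placement:
        couch = rescale_to_room([(0, 0, 0), (200, 80, 90)], 100.0, 1.0)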
  • properly-sized avatars may be employed to virtually "try on" clothes for fitting purposes.
  • the meshes used for the clothes may be conveniently obtained from manufacturers, who often design clothes using 3-D modeling software.
  • Such systems may be employed to try on, besides clothes, shoes, cosmetics, hairstyles, jewelry, makeup, handbags, and so on.
  • selected items may be placed into a shopping cart and the viewer/user may make payment through various online payment systems.
  • Where a mall is configured, and if it makes economic sense to the mall owner, spaces in the mall may be leased to store owners. Particularly preferable locations may be near anchor stores or entertainment locations.
  • virtual malls may include virtual arcades where viewers can go to play games, solo or with others, or cinemas where viewers may watch movies with other viewers.
  • Creators may be represented by AI representatives.
  • Creators may provide a themed shopping experience.
  • a merchant may build an online environment (which still serves as a store) in a setting that fits their product.
  • a merchant who sells sports products may build a store that has a race track, a basketball court, a mountain for skiing, a beach for surfing, and so on, for users to try on and try out (virtually) their sports products.
  • a merchant who sells Italian products can build a store in a simulated Italian background/city, such as Rome.
  • users in a 2-D, 3-D, or VR online environment or game may enter a store or other environment having products, e.g., museum, classroom, and so on (step 502).
  • the viewer may select a product that is interesting (step 504).
  • Buttons or handles may be attached to the product allowing the viewer to interact with the same, e.g., upon appropriate clicking, dragging, or activation (step 506).
  • a viewer may be in a physical store (step 508), and may use, e.g., a mobile device to capture information about a product that is interesting (step 512), e.g., a SKU or barcode, QR code, or the like.
  • An indication of the product is then displayed on the mobile device (step 514), and again buttons or handles may be provided to allow more convenient user interaction.
  • the user or viewer, in VR or AR, may interact with the product in various ways.
  • the product itself may be interacted with to review internal structure or components (step 516).
  • an exploded view may be provided.
  • An animation may be provided to illustrate how the product works (step 518).
  • An animation may also be provided to demonstrate various product features (step 522).
  • a step-by-step animation may be provided to illustrate how to assemble or disassemble the product (step 524). Any of the aspects portrayed, e.g., product interactions or animations, may be added to a favorites list, shopping cart, wish list, or customized space or room within a 3-D model (step 526). The user may click on a product information button to obtain additional information (step 528), e.g., a more detailed description, price information, inventory information, and so on.
  • Fig. 23 shows a flowchart 1200 of a method according to present principles, e.g., in particular for a shopper or user interaction in 3-D, VR, or AR, for group shopping or social shopping.
  • a user enters an online environment, e.g., which may be 2-D, 3-D, VR, a mall, and so on (step 534).
  • the user may invite others to join in various ways (step 536).
  • a user may be in a physical store or mall (step 538).
  • the location of the user may be identified in various ways, e.g., GPS, Wi-Fi, pattern recognition in an AR system, and so on (step 542).
  • the user may then invite another user to join their AR session (step 544). Friends or family receive the invitation and may accept the same (step 546).
  • The friends or family, typically employing a VR device, may then join and enter the same store or mall in the same location as the inviter (step 548).
  • the friends or family may join the shopping (step 552), and may interact with the original user using voice, text, video, and so on (step 554).
  • Fig. 24A illustrates a method according to present principles, and in particular a flowchart 1250.
  • the user enters a store as above (step 556), although in this case generally a VR or 3-D environment is preferred.
  • the viewer may select a desired article of clothing (step 558), and may push or activate a button or otherwise indicate a desire to try on the article of clothing (step 562).
  • a check is made as to whether the user is appropriately registered (step 564). If not, the user may register and enter their bodily dimensions, as well as an avatar or other image, as described above (step 566). Subsequently, or if the user is already registered, their avatar may be displayed (step 568).
  • the user may be presented with the selected article of clothing, or may select the same again (step 572), and the system may match the selected article of clothing with the user body dimensions.
  • An algorithm may be employed to determine, given the type of fabric, the stretch of the fabric, the size of the article, and so on, whether the article of clothing will fit the user to within a predetermined threshold, e.g., 5%, 10%, and so on. Users may enter into preferences or user settings whether they prefer loose-fitting clothes, tight clothes, and so on. If there is a fit, the article of clothing may be displayed on the user's body (step 578). If the algorithm determines the fit to be poor, e.g., more than 20% away from an optimum, then the system may suggest that the user select another article of clothing (step 584).
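  • A minimal sketch of such a fit check follows; the measurement names, stretch factor, and thresholds are illustrative assumptions rather than a prescribed algorithm:

        def clothing_fits(body_cm, garment_cm, stretch=0.05, threshold=0.10):
            """body_cm and garment_cm map measurement names (chest, waist,
            ...) to centimetres. The garment fits when every stretched
            garment measurement covers the body and stays within `threshold`
            of the optimum."""
            for key, body in body_cm.items():
                effective = garment_cm[key] * (1.0 + stretch)  # stretched size
                if effective < body:                # too tight even stretched
                    return False
                if (effective - body) / body > threshold:      # too loose
                    return False
            return True

        print(clothing_fits({"chest": 96, "waist": 82},
                            {"chest": 100, "waist": 84}))   # True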
  • the user may have the option to see detailed information about the article of clothing, such as price, size, color options, and so on (step 582).
  • the user may then share a photo or other indicator of the experience with their friends (step 586), may invite friends to view the fitting using the social engine (step 588), and/or may add the article of clothing to their cart, favorites list, wish list, and so on (step 592).
  • clothes may be fitted in AR.
  • This implementation is illustrated by the flowchart 1275 of Fig. 24B. Certain details are the same as in Fig. 24A, and their description is not repeated here.
  • the user may employ a mobile device to scan or otherwise capture information about the article of clothing (step 594), and a 3-D model of the article of clothing may be displayed on the mobile device (step 596).
  • the rest of the implementation is similar to that of Fig. 24A, although in this case the user may also physically try on the article of clothing.
  • Fig. 25 illustrates a flowchart 1300 of a method of using a real-time intelligent assistant in an online virtual environment, e.g., in 3-D, VR, or AR.
  • a user virtually enters a 2-D, 3-D, or VR store or mall or game, or physically enters a store in an AR implementation (step 602).
  • the viewer may request sales or customer support through an appropriate pushbutton, text, voice indicator, video indicator, and so on (step 604).
  • Sales or customer support associated with the creator may then dispatch a support avatar based on the number of users and requests, and the same may be instantiated near the user who made the request (step 606).
  • a greeting may be made, and users may then ask questions of the avatar using various means (step 608).
  • the avatar may receive the questions using voice recognition, text, video, and so on. If the avatar is backed by a real person, the avatar may provide the answer to a user from the real person (step 618). If the avatar is purely virtual, the avatar routine may search a database to match the question (step 616) for the appropriate answer (step 614). If a match is found, the avatar provides the answer to the viewer, or may list several possible answers for the user to choose (step 622). The user then chooses the answer or requests further assistance (step 624). If additional assistance is needed, an avatar backed by a real person may communicate with the requesting viewer in various ways, e.g., text, voice, video, and so on. The system may then terminate or a survey may be provided to the viewer (step 628).
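  • The database matching for the purely virtual avatar might be sketched as follows, with Python's standard-library difflib standing in for a real natural-language matching step; the FAQ entries are invented for illustration:

        import difflib

        FAQ = {  # invented entries for illustration
            "what is the return policy": "Returns are accepted within 30 days.",
            "do you ship internationally": "Yes, to most countries.",
            "what sizes are available": "Sizes XS through XXL are stocked.",
        }

        def answer(question, cutoff=0.5):
            """Return candidate answers for the user to choose from, or None
            to escalate to an avatar backed by a real person."""
            keys = difflib.get_close_matches(question.lower(), list(FAQ),
                                             n=3, cutoff=cutoff)
            return [FAQ[k] for k in keys] or None

        print(answer("What's the return policy?"))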
  • themed shopping may be provided in 3-D, VR, or AR.
  • 3-D or VR stores may be provided according to a certain product theme.
  • a store or shopping mall may be placed in a virtual mountain ski resort, or luxury Italian products may be placed in a virtual boutique in Rome.
  • a sporting area may be provided to allow viewers to try, test, or play with products, e.g., a field or area.
  • a ski resort may be virtually constructed in a ski store, a swimming pool in a swimming wear store, and so on. Users may be enabled to take an image and share their experiences testing products in the themed store.
  • Fig. 26 illustrates a flowchart 1350 for themed shopping.
  • a business user may choose a store location/address (step 632).
  • An option may be given to the owner to build a themed store (step 634). If the answer is no, the store owner or creator may proceed to the regular store building process described elsewhere (step 636). However, if the answer is yes, a theme may be selected or uploaded for the themed store (step 638).
  • Whether the store is themed or not, a choice may be given to the creator to build a product test area (step 642). If the answer is no, a regular store model may be built as described elsewhere (step 644). If the answer is yes, the test area, generally in accordance with the theme, may be selected, built, or uploaded (step 646).
  • the flowchart 1400 of Fig. 27 illustrates themed shopping from the standpoint of the user.
  • the user enters the themed shopping store, center, scene, game, mall, or so on (step 648).
  • the user may, e.g., enter keywords in order to search for products (step 652).
  • the user will choose a themed store (step 654).
  • the user may then try, test, or play with products in the themed store or in a test area if provided (step 656).
  • a mobile device or the like may capture the ID of an interesting product (step 660).
  • An AR app may show the product in the themed virtual store, or in a test area (step 662). Whether or not the user is in the physical store, the user can try, test, or play with products in the themed store or test area (step 664).
  • the product may be added to a cart, wish list, or shared (step 666).
  • the theme could be saved for the next time, or shared within the social network (step 668). In this way, the user may be enabled to shop in the desired environment but potentially in every store in the mall.
  • a final step if the user decides to purchase a product from the themed environment, is the checkout step (step 670).
  • Fig. 28 illustrates a flowchart 1450 specific to AR shopping in malls or shopping centers.
  • a user with a mobile device such as an iPhone, iPad, wearable device, and so on, enters a physical mall or shopping center (step 672).
  • An AR app on the mobile device may recognize the location in various ways, e.g., pattern recognition, object recognition, scanning signs, codes, images, or AR IDs, and so on (step 674).
  • the app may locate the user location through GPS, Wi-Fi, or other positioning systems (step 678).
  • the AR app may recognize the entire mall (step 682), may recognize a single store (step 684), may recognize an event or activity (step 688), and so on. Following recognition of the entire mall, major promotions, activities, events, or announcements may be caused to be displayed on the user mobile device (step 688). If recognition is of a single store, the same may be accompanied by store promotions, or announcements of various events/activities related to the store, again displayed on the mobile device (step 692). If recognition is of activities/events, e.g., by QR codes, barcodes, or the like, event or activity information may be displayed on the screen of the mobile device (step 694).
  • the user may then tap or select a promotion, event, or activity (step 696).
  • An arrow or sign on the mobile device may appear to guide the user to the location of the selected promotion, event, or activity (step 698).
  • the user may search products, events, or activities in the mall or shopping center (step 676).
  • the flowchart 1500 of Fig. 29 illustrates an implementation of using AR shopping in physical stores.
  • a mobile user enters a physical store (step 704). If a product is found online, then the user may select a particular store/location (step 706).
  • a 3-D model of the store may appear on the user's mobile device screen (step 708), and the user may be directed to the product section within the virtual store (step 710).
  • product variations may be displayed (step 712), similar products may be displayed (step 714), online coupons and promotions may be displayed and activated by the user (step 716), and so on.
  • the product code or the like may be scanned (step 718) and the product or an indicator thereof displayed on the screen (step 719).
  • the product may be physically or virtually tried on in a fitting room (step
  • FIG. 30 illustrates another shopping implementation, but where the products involve furniture, home decor, or appliances, and where the same are being fitted to a space in 3-D, VR, AR, or the like.
  • users enter a store or game in VR or 3-D (step 724).
  • the user may be in a physical store (step 730), and a mobile device may be employed to capture the store or a product with an, e.g., AR app (step 732).
  • the app may inquire of the user if a customized space or room is available (step 726). If not, the user may be prompted to create a new space or room with a desired dimension and/or shape (step 728). If the user has a customized space/room already, the same may be loaded or imported, or downloaded from the cloud, or the like. Products may be moved into or out of the room or space (step 734). Other steps may be employed, including that products may be moved, rotated, or otherwise decorated to fit the space or room, and to fit other items in the space or room.
  • products may have an appropriate scale that may be set as a common scale for both the viewer and the creator, e.g., a common scale for a target mesh and a subject mesh, so that the appropriate scale is set and products are properly sized in the 3-D environment, the VR environment, and/or the AR environment.
  • Products may be clicked on for additional information (step 738).
  • The space itself, with the products, may be captured as an image or 3-D model (step 740), shared with friends or family (step 742), and added to various lists including shopping carts and wish lists (step 744).
  • the user may then check out (step 746).
  • systems and methods according to present principles may implement an AR ID and location system, employable to identify places, objects, or products to display in AR.
  • AR applications often identify locations primarily through GPS.
  • GPS is often not accurate enough to identify places that are very close.
  • systems and methods according to present principles include a system that uses wireless technology (e.g., Bluetooth, Wi-Fi, infrared, or a mobile network) to locate the mobile device in question.
  • Scanner technology may also be employed, such as is associated with barcodes, QR codes, RFIDs, or specially designed images, patterns, or codes.
  • the locations or objects may have a device that emits wireless signals. A user's mobile device may automatically detect the wireless signal so that identification of the locations or objects can be retrieved.
  • Where scanner technology is employed, the locations or objects may be fitted with a barcode, QR code, RFID, or other specially designed image, pattern, or code, for a user's mobile device to scan.
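  • A hedged sketch of the wireless-ID lookup follows: the strongest detected beacon identifies the nearby location or object. The beacon IDs and database entries are invented for illustration:

        BEACON_DB = {  # invented beacon IDs and labels
            "beacon-entrance": "Main Entrance",
            "beacon-aisle-7": "Aisle 7 - Small Appliances",
            "beacon-demo-01": "Demo Table: Espresso Machines",
        }

        def locate(scans):
            """scans: (beacon_id, rssi_dbm) pairs detected by the device.
            The strongest (least negative) known signal wins."""
            known = [s for s in scans if s[0] in BEACON_DB]
            if not known:
                return None
            best_id, _ = max(known, key=lambda s: s[1])
            return BEACON_DB[best_id]

        print(locate([("beacon-aisle-7", -62), ("beacon-entrance", -80)]))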
  • FIG. 31 is a flowchart 1600 illustrating an AR ID and location system, usable for identifying places, objects, or products, to display in AR.
  • a user uses their mobile device while around objects in combination with an AR device or system (step 748).
  • the user's mobile device may employ wireless signals for location acquisition (step 754), or may employ scanning of various visual images to obtain location data (step 756).
  • the AR app may identify various locations or objects (step 750), and the same may access one or more databases or files to extract location information (step 752).
  • a property owner or agent may load images/3-D models and dimensions as well as various requirements.
  • An interior designer or 3-D artist may be employed if necessary, who may then receive real estate information and make 3-D models or interior designs based on the property owner's requirements.
  • the interior designer/3-D artist may then send the finished designs to the property owners or agents. If the designs are approved, payment may be made from the property owner to the designers or 3-D artists. Properties may then be displayed in VR or 3-D for the user to view or purchase, e.g., online or otherwise.
  • a property owner/agent logs into an appropriate user interface (step 394).
  • 3-D models or images may then be uploaded or constructed (step 396).
  • a decision may be made as to whether an interior designer or other 3-D artist needs to become involved (step 398). If so, a UI is provided for the interior designer/3-D artist (step 402). The same receives or downloads images/3-D models, as well as dimensions and requirements (step 404). The interior designer or 3-D artist then constructs 3-D models of the property (step 406).
  • Where a property owner sells a house, he or she can upload the image of the house with appropriate dimensions so that an interior designer or 3-D artist can make a model of the house or room and then decorate the same with furniture and home decor products.
  • Potential buyers can compare the property in an original condition versus a newly decorated property as the same appears in 3-D or VR. This is especially useful if the original property is in a poor condition. Potential buyers can see the potential of the property through 3-D or VR, thus increasing the confidence of buying the property.
  • Buyers can also purchase furniture or home decor products through online transactions as may be connected to the 3-D or VR visualization, or through the interior designer.
  • Property owners may also use a 3-D camera to obtain a 3-D model of the house or room, or obtain the blueprint of the house from city records so that interior designers or 3-D artists can build a 3-D model from the blueprint.
  • More passive experiences associated with real estate include real estate purchases and rentals, and traversing and exploring properties associated therewith.
  • a subject model may be measured against a target model, where the models are generally meshes with appropriate materials, textures, and shading.
  • such allows users to perform virtual walk-throughs of properties, or to perform a physical walkthrough with significantly enhanced information through an AR interface.
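  • One simple, brute-force way to compare a subject mesh against a target mesh vertex by vertex is sketched below; real systems would likely use spatial indexing, and the tolerance value is an assumption of the sketch:

        import numpy as np

        def max_vertex_gap(subject, target):
            """Worst-case distance from any subject-mesh vertex to its
            nearest target-mesh vertex (brute force; O(N*M) memory)."""
            s = np.asarray(subject, float)   # e.g. avatar vertices, (N, 3)
            t = np.asarray(target, float)    # e.g. garment vertices, (M, 3)
            d = np.linalg.norm(s[:, None, :] - t[None, :, :], axis=-1)
            return d.min(axis=1).max()

        def meshes_compatible(subject, target, tolerance=0.05):
            return max_vertex_gap(subject, target) <= tolerance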
  • friends may be invited to a conference conducted in VR, or the friends may go shopping or otherwise meet together. Chat functionality, voice functionality, and videoconferencing functionality may be employed and/or leveraged to allow communications between such friends. Connections to social networks may be had through an appropriate API, e.g., to
  • Competitions may be made accessible to VR viewers using the technology, such competitions including sporting events, videogame competitions, social events including parties, and so on.
  • an educator or other creator may select classrooms or buildings in 3-D or VR (step 790). Alternatively, the same may upload 3-D models of a classroom or school (step 792). Similarly, classroom furniture, lab tools, equipment, or decorations may be selected or uploaded to the classroom (step 796).
  • the creator or educator may upload an image of the classroom or school (step 794), and a 3-D generation engine may create a 3-D model of the same (step 798).
  • 3-D artists may create a model by hand (step 802).
  • items may be moved to arrange or decorate the virtual environment (step 804).
  • Educational materials may then be selected and/or uploaded (step 806).
  • Such educational materials may include textbooks and other texts, videos, 3-D models, and so on.
  • An appropriate payment system may be in place to compensate artists and other content creators (step 808).
  • Students may then enter the virtual classroom and purchase materials (step 810).
  • Other steps the students may take include watching videos, interacting with educators, performing projects, interacting with each other, assembling or disassembling objects, virtually traveling to ancient times or remote places to experience historic events, watching movies, watching 3-D animations, and so on.
  • Figure 36 is a flowchart 1800 illustrating an educational or classroom implementation of augmented reality.
  • educators and students may enter a classroom with mobile devices (step 812).
  • Appropriate images may be provided for the mobile devices to scan (step 814), so as to allow the mobile device, educators, and students, to register their presence.
  • Various educational material may then be displayed on the mobile device (step 816). It is noted here, particularly in this implementation but also in others, that the mobile device may be a laptop computer or tablet computer, as well as a smart phone. On the same, students may provide various data, e.g., choices, during class (step 818).
  • Such interactions may also include, e.g., reviewing detailed information, taking tests and quizzes, voting on a subject, conducting a discussion on the subject, providing comments, doing experiments, interacting with 3-D objects, watching 3-D animations to learn additional details about a subject, and achieving hands-on experience with the subject.
  • Target meshes in this implementation may generally relate to the virtual environment, classroom, lab, and so on, and subject meshes may typically pertain to student avatars or the like.
  • Significant educational leverage may be gained by employing social and communications engines as described, including, e.g., voice, video conferencing, and so on.
  • Travel/entertainment experiences may include those pertaining to venturing, entertainment, and so on. For example, users may travel to a far-off location, or may just go to a virtual arcade to play games or to a virtual movie theater with friends or family to watch a movie. Before traveling, users can enter a resort or hotel room, e.g., to determine if the same would be appropriate for a physical trip, at least where the hotel room is configured to be an accurate representation of the hotel or resort offerings.
  • Such virtual travel or adventure experiences may allow viewers to virtually travel to locations and destinations which they are interested in, or which they are unable to physically travel to. Such experiences can be used to learn about destinations, and so on.
  • a travel agent may select or load images/3-D models of a travel destination. If images are loaded, the 3-D model generation engine may generate 3-D images or models. 3-D artists may then construct 3-D models if desired, and the same may be paid for their work.
  • 3-D models or images of travel destinations may also be displayed for the user to view. Users may purchase travel packages using 3-D, VR, AR, or the like.
  • FIG. 33 illustrates a flowchart 1650 according to present principles.
  • a travel application in 3-D or VR is illustrated.
  • a travel agent may select an existing destination 3-D model (step 758).
  • the travel agent or company may upload a destination 3-D model (step 760).
  • a user may review or purchase the travel service (step 768), invoking a payment system if necessary (step 774).
  • a travel agent or company may upload a destination image (step 762).
  • the image may be converted to a 3-D model by the 3-D model engine (step 764).
  • a 3-D model may be constructed by hand from the image (step 766).
  • The outputs of the 3-D model engine are travel destination models (step 770), which may be included in the travel destination models that the user reviews in step 768. If the 3-D model is made by hand, the 3-D model artist may employ an appropriate UI to make, download, or upload the desired model (step 772). Flow may continue with step 770.
  • FIG. 34 illustrates a flowchart 1700 corresponding to a travel application using augmented reality.
  • a user with a mobile device travels to a desired location or destination (step 776).
  • the AR app may identify the location through GPS or other wireless location determination systems and technology (step 778).
  • the AR app may present several options for the user to select from, including: location information, lodging information, eatery information, gas information, car information, or other promotions (step 780).
  • the user may then select information to view (step 782).
  • the user may be directed to the location through GPS (step 784).
  • the user may also employ the AR app to book the desired option (step 786), e.g., to make a reservation at a displayed lodge.
  • the transaction may conclude with payment if necessary (step 788).
  • an entertainment company or individual can build a 3-D or VR themed area such as an adventure land, arcade, stage, or theater, separately or in a 3-D VR mall or shopping center.
  • the user of a mobile device can obtain various entertainment options and may review such in 3-D or VR for trial or purchase.
  • the entertainment theme owner can attract users by automatically sending invitations to watch or play various content items.
  • In a VR shopping area or mall, such trials may be offered to users or viewers "passing by". Following the trial, if desired, users can pay for the entertainment content by an appropriate payment mechanism.
  • images or 3-D models may be imported of hotel rooms or resorts to the 3-D or VR system or cloud.
  • 2-D images may be converted to 3-D models and imported into the system or cloud using a 3-D model engine as described above, such that hotel and resort owners can build example rooms or houses more easily.
  • 3-D images may also be employed for this purpose, as taken from a 3-D camera.
  • Entertainers may be enabled to import video to the 3-D or VR cloud or to a server. Video may be streamed for live performances to the VR or 3-D cloud or server. Entertainers may be enabled to import or build custom 3-D or VR theaters.
  • product or service information may be obtained from manufacturers, retailers, service providers, and so on. If there is no 3-D model representative of the desired product or service, a 3-D artist may be requested to make a model, or the creator (product or service provider) may choose a premade model from a library. Once obtained, the product or service 3-D model may be constructed and used by designers or game developers. In the case of a game, a game player may see the product or service 3-D model and subsequently purchase products or arrange for services. The transaction may be initiated by the game owner, who may then send the order information to the product or service owner.
  • Payment may be made to various entities, including the game owner, the game developer, and the 3-D artist.
  • various entities 422 desiring to allow for commercial transactions within a game may access an appropriate API as an initial step in providing service information in a virtual environment.
  • Entities 422 are generally providing services, and entities 426 are generally providing products, although it will be understood that in a given implementation both products and services may be provided by a given single entity.
  • the product or service provider may have a game-ready 3-D model ready to go, and the same may be provided to the game maker 428 through what is termed interface III.
  • interface I or interface II may be accessed by the service or product provider, respectively, as a portal to a designer or 3-D artist 424.
  • the designer or 3-D artist may then create an appropriate model for the product or service and provide the same to the game maker 428 through interface III.
  • the game 432 can be created, and a game player 434 may play the game and purchase products or services within the game, with orders for products or services being routed to manufacturers 426 or service entities 422, respectively.
  • the flowchart 950 of Fig. 38 illustrates additional details of interface I.
  • a retailer or other service provider accesses an appropriate UI to configure or access the API of a system or method according to present principles, the system for providing such a VR or AR or 3-D functionality (step 436).
  • Product or service information is loaded, including images, manually or through an API (step 438).
  • a designer or 3-D artist UI 442 may then be configured and used to allow the designer or 3-D artist to choose the product/service and make 3-D models accordingly (step 444).
  • the designer or 3-D artist UI may then be employed to upload 3-D models to game makers for their use (step 448). Payments may be provided to designers or 3-D artists at various points in the process as indicated in the figure.
  • Fig. 39 is a flowchart 1000 of exemplary steps taken by interface II.
  • a manufacturer accesses or employs a UI to begin the process of providing product information (step 452).
  • Product information including 3-D models and images may then be loaded, manually or through an API (step 454).
  • a designer or 3-D artist UI 456 may then be employed to choose the product and prepare 3-D models in a way appropriate for game engine importation based on product information (step 458).
  • the 3-D models may then be uploaded to game makers (step 464). Payment may be made as indicated.
  • FIG. 40 illustrates a flowchart 1050 indicating exemplary steps taken by interface III.
  • a game maker UI 468 may be accessed and game ready 3-D models displayed, e.g., on a webpage (step 466).
  • Game makers may then download the desired 3-D models (step 472), and integrate the same into games (step 474).
  • Such models may then be linked with a backend transaction system, e.g., a shopping cart system (step 476). Orders from users of such products or services associated with 3-D models may be transmitted to product or service providers 478. In some cases, referral or advertising payments may be sent to the game maker.
  • 3-D models may be automatically configured and programmed.
  • objects inside 3-D models may contain special attributes, which may be used to add features and functionalities to 3-D models, so that users/players can interact with the 3-D models.
  • attributes may be provided, e.g., attributes to identify avatars, products, ad banners, decoration items, and so on.
  • the program may read the 3-D models and parse the attributes for each 3- D model object, assigning functionality based on attributes.
  • the functionality can then communicate with transaction (e.g., commerce) APIs, interfacing with ad providers and other types of internal and external communications.
  • a system may start with a 3-D model (step 482).
  • Various attributes may be assigned to the model (step 484).
  • the model may be uploaded to the game engine, and the same may automatically check for model attributes (step 486).
  • the system may automatically apply programming to the 3-D model based on the attributes (step 488). In this way, 3-D models obtain various features and functions (step 492).
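  • The attribute-driven set-up might be sketched as follows; the attribute names and handler wiring are assumptions of this sketch, not a published schema:

        HANDLERS = {}

        def handler(attribute):                 # registry of behaviours
            def register(fn):
                HANDLERS[attribute] = fn
                return fn
            return register

        @handler("product")
        def make_purchasable(obj):
            obj["on_click"] = "open_product_page"   # wire to transaction API

        @handler("ad_banner")
        def make_ad(obj):
            obj["on_load"] = "fetch_ad_creative"    # wire to ad provider

        def apply_programming(objects):
            """Parse each object's attributes and attach behaviour."""
            for obj in objects:
                for attr in obj.get("attributes", []):
                    if attr in HANDLERS:
                        HANDLERS[attr](obj)
            return objects

        scene = apply_programming([{"name": "couch_01",
                                    "attributes": ["product"]}])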
  • the system and method may be fully implemented in any number of computing devices.
  • instructions are laid out on computer readable media, generally non-transitory, and these instructions are sufficient to allow a processor in the computing device to implement the method of the invention.
  • the computer readable medium may be a hard drive or solid state storage having instructions that, when run, are loaded into random access memory.
  • Inputs to the application e.g., from the plurality of users or from any one user, may be by any number of appropriate computer input devices.
  • users may employ a keyboard, mouse, touchscreen, joystick, trackpad, other pointing device, or any other such computer input device to input data relevant to the calculations.
  • Data may also be input by way of an inserted memory chip, hard drive, flash drives, flash memory, optical media, magnetic media, or any other type of file-storing medium.
  • the outputs may be delivered to a user by way of a video graphics card or integrated graphics chipset coupled to a display that may be seen by a user. Alternatively, a printer may be employed to output hard copies of the results. Given this teaching, any number of other tangible outputs will also be understood to be contemplated by the invention. For example, outputs may be stored on a memory chip, hard drive, flash drives, flash memory, optical media, magnetic media, or any other type of output.
  • the invention may be implemented on any number of different types of computing devices, e.g., personal computers, laptop computers, notebook computers, netbook computers, handheld computers, personal digital assistants, mobile phones, smart phones, tablet computers, and also on devices specifically designed for this purpose.
  • a user of a smart phone or Wi-Fi-connected device downloads a copy of the application to their device from a server using a wireless Internet connection.
  • An appropriate authentication procedure and secure transaction process may provide for payment to be made to the seller.
  • the application may download over the mobile connection, or over the WiFi or other wireless network connection.
  • the application may then be run by the user.
  • Such a networked system may provide a suitable computing environment for an implementation in which a plurality of users provide separate inputs to the system and method.
  • the plural inputs may allow plural users to input relevant data at the same time.

Abstract

Systems and methods according to present principles relate to processing and presentation of data from a database, transmitted in a pushed or pulled fashion from the database and rendered to present a 3-D video or image to a viewer. The 3-D video or image may be rendered as a 3-D CG object on a display via, e.g., a GPU, video graphics card, or integrated chipset, or rendered on a specialized device such as a virtual reality headset so as to be perceived by a viewer as a 3-D visualization.

Description

TITLE
GRAPHICAL PROCESSING OF DATA, IN PARTICULAR BY
MESH VERTICES COMPARISON
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims benefit of priority of U.S. Provisional Patent Applications Serial No. 62/167,665, filed May 28, 2015, entitled "Method And System Of Virtual-Reality Commerce Cloud For Goods And Services Such As Shopping, Real Estate, Education, Travel And Entertainment" and Serial No. 62/237,090, filed October 5, 2015, entitled "The Method And System Of A 3-D, VR And AR Commerce/Showroom Platforms/Cloud For Goods And Services Such As Shopping, Real Estate/Rental, Education, Travel And Entertainment", which are owned by the owner of the present application and are incorporated by reference herein in their entireties.
FIELD
[0002] The invention relates to graphical processing pipelines and the graphical presentation of data.
BACKGROUND
[0003] Data visualization is known. In particular, data from databases has been visualized in a number of ways in the prior art.
[0004] General depictions of data may be, for comparison purposes, embodied by charts and graphs, such as bar and pie charts and the like. In 3-D environments, data has been visualized more graphically, such as by the rendering of 3-D CG objects. For example, graphical engines such as Unity, Unreal Engine, and Cry Engine provide significant functionality in the way of 3-D object presentation. The construction of 3-D data objects is also well known, such as by the use of graphical 3-D design applications such as Maya, Blender, 3DS Max, and the like. Particular architectural projects may be designed with Google SketchUp, along with others of the applications noted above.
[0005] However, the integration of such designed 3-D environments with certain types of databases is so far lacking, as, among other deficiencies, appropriate APIs and database accessors have not been constructed.
[0006] Accordingly, the construction of such environments is very difficult for a layperson, requiring significant custom programming and skill. In addition, the lack of such integration leads to difficulties for the viewer, as the same is given little choice in how to view data and is left with old, substandard ways, requiring significant user input, e.g., clicks and entered data, and thus further requiring additional computing cycles and, in many cases, unnecessary battery usage.
[0007] Thus, there is a need, both on the part of UI developers and viewers, for better ways to present database data.
[0008] This Background is provided to introduce a brief context for the Summary and Detailed Description that follow. This Background is not intended to be an aid in determining the scope of the claimed subject matter nor be viewed as limiting the claimed subject matter to implementations that solve any or all of the disadvantages or problems presented above.
SUMMARY
[0009] Systems and methods according to present principles meet the needs of the above in several ways.
[00010] In a first aspect, the invention is directed towards a method of configuring a server to provide a media experience, including: on a first server, providing a first user interface operable to allow a creator to construct a virtual environment, the virtual environment including at least one target mesh; on the first server, or on a second server in network communication with the first server, providing a second user interface operable to allow a first viewer to log on to the virtual environment and move through and interact with the virtual environment in a viewing session; where the second user interface is further operable to allow the first viewer to cause an invitation to be sent to a second viewer to share the viewing session, where upon acceptance of the invitation, the second viewer is enabled to share the viewing session and move through and interact with the virtual environment along with the first viewer.
[00011] Implementations of the invention may include one or more of the following. On the first server or on the second server, the second user interface may be further operable to allow the first viewer to construct a subject mesh for use in the virtual environment. The subject mesh may be an avatar or may be a virtual environment, such as a room or building. Upon acceptance of the invitation, the second viewer may be presented with the second user interface. The second user interface may be further operable to allow the second viewer to construct a subject mesh for use in the virtual environment. The second user interface may be further operable to allow the first viewer to move the subject mesh relative to the target mesh. The target mesh may have metadata associated therewith, the metadata indicating a scale. The subject mesh may have metadata associated therewith, the metadata indicating a scale, and the target mesh and the subject mesh may be configured to have the same scale, whereby a size and appearance of models in the subject mesh may be correctly displayed against the target mesh. The first viewer may access the virtual environment using a virtual reality device or an augmented reality device, as may the second viewer, although commonly the second viewer will access the virtual environment using a virtual reality device.
[00012] In another aspect, the invention is directed towards a non-transitory computer readable medium, including instructions for causing a computing environment to perform the method above.
[00013] In yet another aspect, the invention is directed towards a method of providing a media experience, the media experience modifiable on a server side by a creator, including: accessing a database, the database including data corresponding to an inventory of available items; exposing a user interface, the user interface operable to allow a creator to situate items in the inventory in a virtual environment; where the user interface is operable to allow the creator to situate the items by allowing the creator to create a 3-D model of the item or allowing the creator to import a 3-D model of the item.
[00014] Implementations of the invention may include one or more of the following. The allowing the creator to create a 3-D model of the item may include allowing the creator to import an image corresponding to the item to a 3-D model generation engine. The method may further include exposing a user interface where a first viewer can access the virtual environment. The exposed user interface for viewer access of the virtual environment may further allow the first viewer to invite a second user to access the virtual environment.
[00015] Advantages of the invention may include, in certain embodiments, one or more of the following. 3-D, 2-D, AR, and VR environments may be conveniently constructed by creators and used by users for various purposes, including where viewers employ a subject mesh situated in a position relative to a target mesh within a virtual environment. Other advantages will be understood from the description that follows, including the figures and claims.
[00016] In further aspects and embodiments, the above method features of the various aspects are formulated in terms of a system. Any of the features of an embodiment of any of the aspects, including but not limited to any embodiments referred to above, is applicable to all other aspects and embodiments identified herein, including but not limited to any embodiments referred to above. Moreover, any of the features of an embodiment of the various aspects, including but not limited to any embodiments referred to above, is independently combinable, partly or wholly, with other embodiments described herein in any way, e.g., one, two, or three or more embodiments may be combinable in whole or in part. Further, any of the features of an embodiment of the various aspects, including but not limited to any embodiments referred to above, may be made optional to other aspects or embodiments. Any aspect or embodiment of a method can be performed by a system or apparatus of another aspect or embodiment, and any aspect or embodiment of a system or apparatus can be configured to perform a method of another aspect or embodiment, including but not limited to any embodiments referred to above.
[00017] This Summary is provided to introduce a selection of concepts in a simplified form. The concepts are further described in the Detailed Description section. Elements or steps other than those described in this Summary are possible, and no element or step is necessarily required. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended for use as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[00018] Fig. 1 is a schematic diagram of an implementation according to present principles.
[00019] Fig. 2 illustrates exemplary AR/VR/3D application components, according to present principles.
[00020] Fig. 3 illustrates exemplary AR/VR/3D engine components, according to present principles.
[00021] Fig. 4 is a flowchart showing an implementation of a method according to present principles.
[00022] Fig. 5 is another flowchart showing another implementation of a method according to present principles.
[00023] Fig. 6 illustrates an objective or subjective comparison of a subject mesh against a target mesh, according to present principles.
[00024] Fig. 7 illustrates another objective or subjective comparison of a subject mesh against the target mesh, according to present principles.
[00025] Fig. 8 illustrates the interface architecture in modular format, according to present principles.
[00026] Fig. 9 illustrates a system architecture in modular format, according to present principles.
[00027] Fig. 10 illustrates a system architecture for augmented reality (AR) applications, in modular format, according to present principles.
[00028] Fig. 11 is a flowchart for avatar construction, according to present principles.
[00029] Fig. 12 is a flowchart for virtual environment creation, according to present principles.
[00030] Fig. 13 is a flowchart for model creation, according to present principles.
[00031] Fig. 14 is another flowchart for model creation, according to present principles.
[00032] Fig. 15 is a flowchart for service model creation, according to present principles.
[00033] Fig. 16 illustrates ways in which machine learning can be employed in the creation of 3-D models, according to present principles.
[00034] Fig. 17 is a flowchart for business logo creation, according to present principles.
[00035] Fig. 18 is a flowchart for integration with existing transaction backend systems, according to present principles.
[00036] Fig. 19 illustrates an augmented reality user process for transactions, according to present principles.
[00037] Fig. 20 is a flowchart showing 3-D, 3-D animation, or VR product display on a transaction site front end, according to present principles.
[00038] Fig. 21 is a flowchart for a 3-D model, 3-D animation, or VR product display on a transaction site back end, according to present principles.
[00039] Fig. 22 is a flowchart for shopper/user interaction with products in 3-D, VR, or AR, according to present principles.
[00040] Fig. 23 is a flowchart for shopper or user interaction in 3-D, VR, or AR, illustrating group shopping or social shopping, according to present principles.
[00041] Fig. 24A is a flowchart for fitting clothes or outfits in 3-D or VR, according to present principles.
[00042] Fig. 24B is a flowchart for fitting clothes or outfits in AR, according to present principles.
[00043] Fig. 25 is a flowchart illustrating steps in providing real-time intelligent assistance in 3-D, VR, or AR, according to present principles.
[00044] Fig. 26 is a flowchart illustrating steps for themed shopping in 3-D, VR, and AR, according to present principles.
[00045] Fig. 27 is another flowchart illustrating steps for themed shopping in 3-D, VR, and AR, according to present principles.
[00046] Fig. 28 is a flowchart illustrating steps for shopping in augmented reality in malls or shopping centers or in individual stores, according to present principles.
[00047] Fig. 29 is a flowchart illustrating steps for shopping including AR functionality in physical stores, according to present principles.
[00048] Fig. 30 is a flowchart illustrating furniture, home decor, and appliance products fitting to a 3-D, VR, or AR space or room, according to present principles.
[00049] Fig. 31 is a flowchart illustrating AR ID and location systems, according to present principles.
[00050] Fig. 32 is a flowchart illustrating implementing real estate transaction functionality within VR, AR, or 3-D, according to present principles.
[00051] Fig. 33 is a flowchart illustrating steps in a travel application using 3-D or VR functionality, according to present principles.
[00052] Fig. 34 is a flowchart illustrating steps in a travel application using AR functionality, according to present principles.
[00053] Fig. 35 is a flowchart illustrating steps in education/classroom learning using 3-D or VR functionality, according to present principles.
[00054] Fig. 36 is a flowchart illustrating steps in education/classroom learning using AR functionality, according to present principles.
[00055] Fig. 37 is a flowchart illustrating product/service integration into 3-D or VR games, according to present principles.
[00056] Fig. 38 is a flowchart showing steps in a first interface according to the integration flowchart of Fig. 37, according to present principles.
[00057] Fig. 39 is a flowchart showing steps in a second interface according to the integration flowchart of Fig. 37, according to present principles.
[00058] Fig. 40 is a flowchart showing steps in a third interface according to the integration flowchart of Fig. 37, according to present principles.
[00059] Fig. 41 is a flowchart illustrating automatic configuration and programming of 3-D models, wherein 3-D models may be provided with attributes providing additional functionality for 3-D, VR, or AR applications, according to present principles.
[00060] Like reference numerals refer to like elements throughout. Elements are not to scale unless otherwise noted.
DETAILED DESCRIPTION
[00061] Systems and methods according to present principles relate to processing and presentation of data from a database, transmitted in a pushed or pulled fashion from the database and rendered to present a 3-D video or image to a viewer. The 3-D video or image may be rendered as a 3-D CG object on a display via, e.g., a GPU, video graphics card, or integrated chipset, or rendered on a specialized device such as a virtual or augmented reality headset so as to be perceived by a viewer as a 3-D visualization.
[00062] In various examples, systems and methods according to present principles may be employed to allow viewers to more effectively view content from creators. For example, a 3-D/VR/AR device may enable viewers, a.k.a. end users, to visualize a computer simulated 3-D world. Currently, in transactional settings, 2-D content is viewed with a web browser, with a generally poor user experience for applications such as online shopping, education, travel, or entertainment. A 3-D VR system provides significant technological benefits to viewers, and allows viewers to traverse such transactional settings, e.g., virtual and/or online environments, in a convenient way, saving computing cycles, battery power, and the like. An AR system similarly overcomes various technological obstacles to viewer transactional experiences, allowing end-users to receive, process, and display signals relevant to traversing terrain such as real-world locations, and allowing access to products and information about products in a significantly more convenient fashion. AR displays may be 3-D or 2-D. Such systems also allow for the invitation of friends to VR or AR experiences, allowing friends to take part in the experience. A 3-D or VR cloud or platform enables 3-D or VR experiences, including transactional experiences, e.g., shopping, traversing real estate for purchase/sale/rental, educational and learning experiences, entertainment experiences, or travel/venture experiences. Systems and methods may typically be employed in VR, but may default to 3-D in the absence of a VR device such as a headset.
[00063] A 3-D, VR, or AR system according to present principles generally includes one or more cloud servers with software and databases incorporating load-balancing and security, connected with end-users through a network (e.g., private or the Internet) to one or more desktop computers, laptops, tablets, mobile phones, or any other user devices with appropriate processing power. Users may or may not need glasses or a device that allows them to enter a virtual or augmented reality environment, the same generally simulating the real or a virtual world. The glasses or devices may connect to end-users' computing environments, e.g., desktops, laptops, tablets, or mobile phones, or may also be embodied as dedicated standalone virtual reality devices that simulate the real or a virtual world for end-users. End-users will generally have their computers, tablets, mobile phones, or other devices connected to the Internet or private networks. Such devices generally can download software and communicate with servers. Similarly, end-user VR devices can connect to end-user computing devices or may also be standalone.
[00064] For example, referring to Fig. 1, a system 50 is illustrated with a number of client devices 10a-10d in network communication with a server 12 (server i). The server 12 may be associated with a particular type of application, e.g., coming from a particular source, and it will be understood that client devices may access a number of servers and different applications.
[00065] The server 12 is illustrated with components including a security module 14, a database 18, a module 22 for load-balancing, a module 16 including AR/VR application components, and a module 24 including AR/VR engine components.
[00066] Referring to Fig. 2, while the same is not an exhaustive list, the module 16 for AR/VR application components may include a UI module 26 and a renderer module 28, the same for rendering the results of the graphical processing in 3-D and/or VR/AR.
[00067] Similarly, referring to Fig. 3, the module 24 of AR/VR engine components is shown. These include (in a nonexhaustive list): a module 32 for a user interface, a renderer module 34 for rendering in 3-D, e.g., VR/AR, a 3-D modeling tool module 36, a build/creation tool 38, a media tool 42, a communications tool 44, a social network module 46, the same including an API for communications with a social network, a cart/payment tool 48, and a security module 52. Certain of these modules will be discussed in greater detail below.
[00068] Referring to the flowchart 75 of Fig. 4, in a first step of configuring a server to provide a media experience for VR/AR/3-D as described herein, a first server is configured or is operable to provide a first user interface operable to allow a creator to create or input a target mesh (step 11). In this step, a creator either creates the target mesh or causes one to be entered or input into the system. For example, the target mesh may be created in an outside 3-D modeling program, and imported into the graphical processing system. The target mesh may then be situated in an appropriate location in the environment (step 13). For example, the creator may choose various locations in which to situate the target mesh. The creator may also place various textures or shading on the target mesh, or provide other materials thereon. In some cases the target mesh may have no particular scale associated with it. For example, the creator may simply be creating a scene for viewing by a viewer. In other cases, the target mesh may have a particular scale associated with it. For example, where a target mesh from a creator is to be aligned or measured up against a subject mesh from a viewer, then it is important that the two meshes have the same scale. For example, if a target mesh is a piece of furniture or an article of clothing, the same would have an associated scale, so that the same could be measured up against the room of a viewer, or an accurate body avatar of the viewer, respectively.
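By way of non-limiting illustration, a mesh carrying such scale metadata might be represented as in the following Python sketch; the class and field names here are hypothetical, chosen only to show how a scale can travel with the vertex data:

    from dataclasses import dataclass
    from typing import List, Tuple

    # Hypothetical container pairing vertex data with scale metadata, so that
    # a target mesh (e.g., an article of clothing) and a subject mesh (e.g.,
    # an avatar) can later be compared at a common physical size.
    @dataclass
    class ScaledMesh:
        name: str
        vertices: List[Tuple[float, float, float]]  # model-space coordinates
        meters_per_unit: float = 1.0                # scale metadata

        def to_meters(self):
            """Return the vertices expressed in real-world meters."""
            s = self.meters_per_unit
            return [(x * s, y * s, z * s) for (x, y, z) in self.vertices]

    # A couch modeled in centimeters (0.01 m per model unit).
    couch = ScaledMesh("couch", [(0, 0, 0), (200, 0, 0), (200, 90, 85)], 0.01)
    print(couch.to_meters()[1])  # (2.0, 0.0, 0.0): a couch 2 m wide

Any comparable convention would serve, so long as both meshes can be brought to a common scale before comparison.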
[00069] In some cases, the first server, or a second server in network communication with the first server, is configured or operable to provide a second user interface which in turn allows a first viewer to create or input a subject mesh (step 15), which may then generally be compared in some way against the target mesh entered by the creator. In so doing, the viewer may use a 3-D modeling program to create the subject mesh, or more commonly the subject mesh may be created for the viewer, based on data entered by the viewer. For example, the viewer may enter a room size, and the same may then constitute the subject mesh against which a creator configured target mesh is compared, e.g., the subject mesh being a room and the target mesh being a piece of furniture to be situated within the room. Where social or chat functionality is described below, the viewer described above is referred to as a first viewer and the first viewer invites a second viewer to take part in the media experience, e.g., the second user interface is configured to allow the first user to cause an invitation to be sent to a second viewer to share the viewing session, e.g., by email, text, instant message, or the like. In this way, upon acceptance of the issued or sent invitation, both the first viewer and the second viewer are enabled to share a viewing session and move through and interact with the virtual environment.
[00070] A next step is that a "fit" of the subject mesh to the target mesh is determined (step 17). For example, one may be compared against another, and an objective or subjective determination made as to whether the fit is satisfactory. For example, where the subject mesh is a room, appropriately modeled and textured to be similar to a room of the viewer, a target mesh of a piece of furniture, again appropriately textured to look like a piece of furniture the viewer is interested in, may be situated within the room and the viewer may gauge whether the result is desirable. In some cases the result is binary, e.g., a yes or no determination. In other cases, the viewer may be enabled to change textures on the piece of furniture, i.e., to envision or visualize a different type of furniture within the room. For example, the user may switch out a couch having a leather texture for a couch having a suede texture. Other implementations will also be understood.
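By way of non-limiting illustration, one possible objective test is a simple axis-aligned bounding-box containment check, sketched below in Python under the assumption that both meshes have already been expressed in the same units; the function names are illustrative only. A subjective determination, by contrast, would simply render both meshes together and leave the judgment to the viewer.

    def bounding_box(vertices):
        """Return the (min corner, max corner) of an axis-aligned bounding box."""
        xs, ys, zs = zip(*vertices)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    def fits_within(inner, outer):
        """Objective "fit": the inner mesh's box lies entirely inside the outer's."""
        i_min, i_max = bounding_box(inner)
        o_min, o_max = bounding_box(outer)
        return all(o_min[k] <= i_min[k] and i_max[k] <= o_max[k] for k in range(3))

    # A 2.0 m wide couch tested against a 4.0 m x 3.0 m x 2.5 m room, in meters.
    couch = [(0.5, 0.0, 0.0), (2.5, 0.9, 0.85)]
    room = [(0.0, 0.0, 0.0), (4.0, 3.0, 2.5)]
    print(fits_within(couch, room))  # True: the couch fits within the room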
[00071] In any case, whether the determination of the "fit" is objective or subjective, user input may be received about the fit (step 19). In one implementation, the user input may accept the target and subject meshes, and the result may be optionally rendered into a file that may be subsequently saved (step 21). For example, the user may like the way the couch appears in the room, and may save a visualization of the same to show to their family and friends. For example, the viewer may post the same on a social network using an appropriately configured API.
[00072] In another example, the viewer may modify the target or subject mesh (or both) (step 23). This would be the case of the viewer modifying a material (textures/shaders) of the couch, to determine if a different couch would appear better in the room. In yet another example, a new subject mesh or target mesh may be provided and a subsequent comparison performed (step 25). For example, the viewer may try an entirely different type of couch, or may try the same couch, but in a different location or a different room. Other implementations will also be understood. For example, it will be understood that the definitions of target and subject mesh are arbitrary, and the same may be definitionally switched.
[00073] The implementation of Fig. 4 may be applied in VR or AR, but is generally preferred for VR implementations. Fig. 5 refers to an implementation that is more specific to AR implementations. In the flowchart 125 of Fig. 5, a first step is to receive or create a subject mesh (step 27). This subject mesh may be either created by a viewer or by a creator. For example, if a viewer wishes to visualize a couch in their actual house, the viewer may wear an AR headset in their house and look at a wall against which the couch is to be placed. The subject mesh may have been created by a creator, but instantiated by the subsequent viewer for viewing against a desired wall. Consequently, a next step is to apply the subject mesh against the environment (step 29). New or additional subject meshes may then be received or entered, or a current subject mesh may be modified (step 31). The new or added subject mesh, or modified subject mesh, may then be again applied against the environment (step 29). In this way, the viewer may visualize how the subject mesh fits against the environment.
[00074] Fig. 6 gives an illustration of a subject mesh 120 being applied against a target mesh 110, particularly in the case where an object such as a piece of furniture is being situated within an environment. Fig. 7 gives an example of a subject mesh 140 being applied against a target mesh 130. In this case, the target mesh 130 is an avatar of a viewer, constructed so as to accurately represent the body of the viewer. The subject mesh 140 is an article of clothing, and the article of clothing is being applied against the target mesh 130. The subject mesh 140 is generally received from a clothing supplier, and such meshes are generally available, even from manufacturers, as clothing design is usually performed using 3-D modeling tools.
[00075] As noted above, in some cases a creator will create a 3-D environment without regard to whether the same will be matched up with, compared against, or in any other way aligned with a viewer-created subject mesh. These implementations include 3-D environments such as theaters, educational applications, transactions including shopping or retail transactions, and the like. These implementations may still include, e.g., a social networking component, where the experience by the viewer is shared with another viewer, typically a friend or teacher. It will be understood that some degree of "matching" may occur, but the same will only entail matching the subject mesh of a viewer avatar against, e.g., a theater seat, classroom seat, laboratory stool, shop, store, mall, or the like. In these cases, the avatar may be appropriately sized to appear to be of normal size given the size of the creator configured target mesh.
[00076] Target and subject meshes may be created in various ways. For example, user data entry, e.g., of an appropriate height and weight, may lead to an avatar being created of approximately accurate dimensionality relative to the user. The viewer may be provided with a user interface through which they may configure the avatar for, e.g., fine adjustments. In more precise ways, scanning techniques or camera image inputs may be employed to create better meshes, e.g., of products, users (in the creation of avatars), buildings, structures, rooms, malls, shops, stores, clothes, jewelry, or the like. Various other implementations may also be provided, including where a user selects a desired mesh or portion of a mesh from a menu of options.
[00077] In certain cases, such as where makeup is to be applied against a viewer avatar, e.g., against the viewer avatar's face, details about the makeup may be employed in the creation of not only a mesh but also (or alternatively) in some cases a texture, material, or shader to be applied against the face or other location of makeup application. In this way, the avatar mesh may be appropriately modified according to the appearance characteristics of the applied makeup.
[00078] In the particular instance of creating buildings, Google's SketchUp or other such 3-D building creation tools may be used, including by the use of 360° cameras which can create a model of the interior of a room (or, indeed, of even an outdoor location).
[00079] Combinations of the above implementations will also be understood. For example, a bridesmaid (modeled by an avatar or subject mesh) may be configured to wear a bridesmaid dress (first target mesh) and may appear in a church or other location for a wedding (second target mesh).
[00080] Systems and methods according to present principles include various components to allow creators to conveniently create environments, as well as for end-users or viewers to either traverse the creator configured environments or to apply subject meshes against the creator configured environments. In so doing, the creator, which may typically include a developer using a creator system, may create a data file corresponding to a target mesh, which is generally a model including textures/materials/shaders. Of course, multiple models may be created and provided in a given environment. In the same way, where a subject mesh is created, a viewer or end-user may be provided a tool in which a subject mesh may be created, which is also a data file, and which may be a model including textures/materials/shaders.
[00081] In one implementation, where a database already exists providing access to data about various items of inventory, systems and methods according to present principles may be implemented as a "front end", on a separate server or even on the same server on which the database is located. In any case, the front end will be in network communication with the database.
[00082] In one implementation, the front-end is implemented in a virtual reality (VR) system to provide a VR environment for transactional and other experiences. Such may be implemented as a storefront with various products, allowing users to roam about and to select items. Such items may be interacted with by the viewer, e.g., placed in a shopping cart, and appropriate animations provided in some cases to make such interactions more interesting. However, a storefront or other transactional detail is not required in all implementations. The system may simply allow visualizations of products and services that the viewer is interested in from one or more sources.
[00083] Where VR or AR are employed, such may be implemented on devices such as the Oculus Rift, HTC Vive, Sony PlayStation VR, Samsung Gear VR, Google Cardboard, Microsoft Hololens, and the like.
[00084] Besides VR implementations, augmented reality or AR implementations may also be provided to allow viewers to visualize certain image or CG data against a real-life backdrop. For example, an AR overlay may be employed when a viewer is in a physical "brick-and-mortar" store. An AR overlay may also be used when a viewer is in their own home or office, e.g., to visualize furniture placement, kitchen appliances, and so on. In yet other implementations, the same visualization may simply appear on a screen, e.g., in 3-D, but without reference to or use of a specific VR or AR device.
[00085] Besides input or creation of a target mesh and an optional subject mesh, other inputs to the system include accelerometer data, allowing the creation of a VR environment, and GPS data, which may be particularly important in AR implementations, allowing user navigation around an environment such as a store. Other input data of note include user-entered data, e.g., pertaining to their avatar or the environment. For example, user data entry of "Yosemite" may cause a VR or AR implementation of that national park to be instantiated.
[00086] Processing of the input data is generally performed by an attached computer, within the VR or AR device, or in the cloud. Outputs are provided by the VR or AR screen, or by a computer screen where VR/AR are not employed.
[00087] Various technological benefits inure to described implementations, including a significantly more intuitive interface and an environment in which it is easier to navigate. Viewers may conduct transactions with friends, and transactions may occur as if the friend were only a few feet away. Such functionality depends on implementation within a specially designed and configured computer system, as realistic mutual transactions are otherwise impossible.
[00088] For example, the above-described objective or subjective determination of the suitability of a subject or target mesh vis-à-vis the other may be particularly useful in the context of mutual shopping, where a viewer is shopping with a friend. In a particularly useful implementation, one user may be employing AR, testing a subject mesh against a real-world environment, while a friend, connected via a social engine, provides commentary in a VR environment. In this case, the VR environment may include a video feed of the view seen by the viewer using an AR device.
[00089] Advantages of such systems and methods include that they require fewer computing cycles and less battery power to achieve the same benefits for users. For example, viewers may be assisted in the navigation of difficult environments, and product location within those environments may be provided to the viewer. Viewers may also be enabled to conduct transactions with others, in a way previously requiring more complicated videoconferencing, which in turn previously required complicated and bulky equipment, and so on.
[00090] An exemplary interface architecture is shown in a modular fashion in the system architecture diagram 100 of Fig. 8. Various applications 54a-54f are shown interacting with the interface architecture, and various end-users 82a-82f are shown also interacting with the architecture. The applications 54i may pertain to different environments or the like, and are generally embodied on one or more different servers, although in a given implementation multiple environments may be situated on one server or in a cloud architecture. The various applications 54i may pertain to, e.g., a merchant application, a real estate agent/owner application, a travel application, a hotel or resort owner application, an entertainer application, an education application, a social application, and so on. Specific implementation details of these are described below.
[00091] End-users 82i generally communicate with the architecture through network communications, which may be on a smart phone, desktop computer, laptop computer, tablet computer, or the like. The interface architecture 100 includes a security module 56, a user management module 58, an entity management module 62, an application management module 64, a data storage module such as a database 66, a media import/export module 68, a communications management module 72, a transaction management module 74, an open API module 76, as well as one or more of various VR/AR/3D transaction (in some cases commerce) engines 78.
[00092] In more detail, the security module 56 provides security to the system, and works in combination with the user management module 58 to securely log viewers into a particular application. The user management module 58 allows the creation of user accounts, storage of user profile and preferences information, and so on. The entity management module 62 provides management of various entities, particularly those operating applications 54i, and serves as a management module for business users' stores for products or services. The data storage module 66 stores data associated with the applications and users. Access to the database or data storage module is generally by one or more database accessors, which are executable statements associated with getting data, setting data, and so on, from a database. The media import/export module 68 allows media importation and exportation, the media pertaining to target meshes, subject meshes, virtual environments, meshes representing projects, tools for importing 3-D and 2-D models and images, and the like. The communications management module 72 provides network communications functionality between creator servers, servers implementing systems and methods according to current principles, and end-user systems, as well as designers, 3-D artists, and so on. For example, the module 72 may provide voice, text, and video communications. The transaction management module 74 provides functionality related to conducting transactions for users, e.g., providing shopping cart functionality, credit or debit processing, or the like. The open API module 76 provides access to systems and methods according to present principles by various social networking sites or the like. In addition, the open API allows appropriate access of social networking feeds, such that users can access the same to promote virtual environments, promote products, send invitations to friends for mutual shopping ventures, or the like. The open API further provides an API for developers to build applications on the platform. In this context it is noted that an open API is a publicly available application programming interface that provides developers with programmatic access to a proprietary software application. APIs are sets of requirements that govern how one application can communicate and interact with another. The VR, AR, and 3-D engines 78 provide functionality for the creation of the virtual environment. For example, the engine 78 may provide tools for the creator to easily and conveniently implement a VR/AR/3-D storefront. The engine 78 may further provide tools for the viewer to easily and conveniently create a viewer avatar, or a viewer subject mesh pertaining to an item which the viewer desires to visualize within the 3-D environment of a creator.
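By way of non-limiting illustration, such database accessors might be sketched in Python as follows, here against an in-memory SQLite database; the table and column names are illustrative assumptions only:

    import sqlite3

    # Minimal "database accessor" sketch: small functions that get and set
    # inventory data on behalf of the VR/AR front end.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE inventory (product_id TEXT PRIMARY KEY,"
        " description TEXT, price_cents INTEGER, stock INTEGER)"
    )

    def set_product(product_id, description, price_cents, stock):
        """Accessor for setting (inserting or updating) a product record."""
        conn.execute(
            "INSERT OR REPLACE INTO inventory VALUES (?, ?, ?, ?)",
            (product_id, description, price_cents, stock),
        )

    def get_product(product_id):
        """Accessor for getting a product record for display in the store."""
        return conn.execute(
            "SELECT description, price_cents, stock FROM inventory"
            " WHERE product_id = ?",
            (product_id,),
        ).fetchone()

    set_product("SKU-1001", "Leather couch", 89900, 4)
    print(get_product("SKU-1001"))  # ('Leather couch', 89900, 4)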
[00093] Fig. 9 illustrates a modular depiction 150 of a system architecture according to present principles. As may be seen, various interfaces are provided, such as for an application owner 1 (interface 116), who may be a retailer, merchant, or other storefront offering real-life products but portraying the same as 3-D models within a virtual environment. An application owner 2 is provided with an interface 122, and the same may provide access by a travel agent, teacher, educator, hotel owner, entertainer, or the like. An end user interface 118 is also provided for end-users or other viewers to access desired environments, and such allow communications with the computing environments of end-users, e.g., computers or mobile devices, to exchange information, enable downloads, and install or update software to end-user devices.
[00094] An environment layer 117 may then be provided in which the creator or end user interfaces are implemented, and the same may be an environment which is 3-D, VR, or AR.
[00095] One level up are various objects which the viewer or creator may view and/or manipulate, including 3-D models 98, video files 102, video streaming files 108, and the like. Other 2-D and 3-D images and textual objects are provided in this layer by a module 106. Accessible by viewers and creators, depending on implementation, is a 3-D model generation engine 104, which can be employed to generate 3-D models from text or can convert 2-D images to 3-D models. Various components may be "gamified" by a game engine 96, which may also provide appropriate interaction components for, e.g., a target mesh and a subject mesh. For example, through the game engine, colliders may be placed on the meshes to allow a subject mesh to abut a target mesh without passing through the same.
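A minimal, non-limiting sketch of such collider behavior, assuming simple axis-aligned collider boxes stored as (min corner, max corner) pairs, is to reject any move that would cause interpenetration; the names here are hypothetical:

    def boxes_overlap(a, b):
        """True when two axis-aligned collider boxes interpenetrate."""
        (a_min, a_max), (b_min, b_max) = a, b
        return all(a_min[i] < b_max[i] and b_min[i] < a_max[i] for i in range(3))

    def try_move(box, delta, obstacles):
        """Apply a move only if the moved collider would not enter an obstacle."""
        mn, mx = box
        moved = (tuple(mn[i] + delta[i] for i in range(3)),
                 tuple(mx[i] + delta[i] for i in range(3)))
        if any(boxes_overlap(moved, o) for o in obstacles):
            return box  # rejected: the meshes may abut but not pass through
        return moved

    couch = ((0.0, 0.0, 0.0), (2.0, 0.9, 0.85))
    wall = ((2.5, 0.0, 0.0), (2.6, 3.0, 2.5))
    print(try_move(couch, (1.0, 0.0, 0.0), [wall]))  # rejected; box unchanged

A production game engine would of course resolve collisions against full mesh colliders, but the gating logic is the same.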
[00096] Above the game engine level is a build/create engine layer 94. The build/create layer 94 may be employed by a creator to create one or more underlying target meshes to be displayed to viewers. The same may further allow a viewer to create a subject mesh. A social engine 92 may be employed to allow invitations to friends for mutual shopping, sharing functionality, e.g., sharing a potential purchase with a friend, or "buddy shopping". The social engine may enable end-users to invite friends to mutual experiences, e.g., stores, theaters, travel locations, or to share items in the virtual reality environment. For example, users may shop together in a store.
[00097] A communication engine 88 may be employed to allow voice, video, texting or chat, and other communications with other viewers, or even with the creator. Such may also afford videoconferencing capabilities. Where transactions are occurring, a transaction and/or payment layer 86 may be employed to facilitate payment by one party and credits to another.
[00098] Finally, an application engine layer 85 may be employed to accomplish required functionality to allow the layers below it to communicate with applications, e.g., backend database functionality of a given source, e.g., the back end of a storefront, the back end of a travel site or game site, and so on.
[00099] Through the system architecture access may be made to various applications 84a-84f, which may include, e.g., stores, malls, houses, apartments, theaters, games, e.g., multiplayer arcades, classrooms, resorts, hotels, adventures and other travel applications, demonstrations, e.g., of products, processes, how a product works, how a device is put together, and the like.
[000100] It will be understood that these layers may be situated in different locations so as to accomplish different goals as desired by the designer.
[000101] Other components may also be included, e.g., a media display and video streaming engine to display text, image, or video, as well as to allow video streaming. A block assembly engine may be employed to provide a user interface whereby a viewer or creator may assemble building blocks to build real-life items, e.g., a car, boat, store, lab, and so on.
[000102] In these ways, particularly for creation or building engines, a template or overlay may be placed on top of a game engine such as Unity, Unreal Engine, or the like, to allow storeowners to create stores and to allow viewers to move 3-D models around in the virtual retail environment. In the same way, viewers may be enabled to see items being animated and/or rendered as being placed in a shopping cart, and so on.
[000103] Fig. 10 illustrates an exemplary implementation of an architecture 200, with particular regard to augmented reality applications. Certain modules of the architecture are the same as those of Fig. 9 and their description is not repeated here. The applications at one end of the architecture, denoted applications 124a-124f, are generally specific to AR, although in some cases recourse may be had to a repurposed (and reconfigured or reprogrammed) VR application. Because part of augmented reality is that the user is situated in an actual physical environment, a location-based service engine 126 is used to provide location and other services, and the same may interact with a service application engine 127, the service application engine 127 using the location-based services data from the service engine 126 and providing the same to the applications 124i. The applications 124i may also provide data going the opposite direction in return. A location capture engine 132 is provided to obtain location-based information, e.g., via pattern recognition, GPS, Wi-Fi, telemetry, and so on.
[000104] Functionality of a social engine and communication engine is as described above with respect to Fig. 9.
[000105] An environment layer 144 provides an interface, and in particular an interface to the "real world". The environment layer may be provided, e.g., by the lens of glasses, by a camera feed, and so on. A media layer 142 is provided on which to render visual components in AR, e.g., CG objects. An interaction engine layer 138 is provided atop the layer 142, such that the viewer is enabled to interact with other viewers or with creators with regard to the objects portrayed on the AR display and the underlying physical environment beyond.
[000106] In general, a user with a VR device will enter the virtual environment, which may be configured for transactions (commerce), a showroom, or a virtual experience. Without a VR device, the viewer will generally be in a 3-D environment. In either case, systems and methods may be implemented in the cloud or using a system including servers and software to enable end-users to have experiences including 3-D VR and 3-D gameplay, as well as to enable user avatar creation through entering text information or importing a 2-D/3-D self-image to the virtual reality system, in which case the system may automatically generate a 3-D model for the user if the user enters a 2-D image. Other potential experiences include a virtual shopping experience and a building/creation experience, whereby a user may be enabled to build items such as a car, a boat, a room, an office, a store, a lab, a business, and so on, by using components available in the virtual reality environment, or imported by users.
[000107] In a particular implementation, as illustrated by the flowchart 350 of Fig. 11, a new user may click a "new user" button (step 228) which leads the new user to a registration/login screen (step 232). Following appropriate registration, a new user, in this case an end user or viewer, may be asked to create a new avatar (step 234). In some implementations the user or viewer will have the option of creating a new subject avatar (step 236) by entering their body measurement information on a provided UI (step 244). In this case the user may import their image (step 246). In some cases, the user may even be enabled to choose a different image, but with the same measurements. For example, a user may use the face of a movie star, but with their own body measurements. The body measurements taken will generally involve a chest measurement, a waist measurement, a hip measurement, a thigh measurement, an inseam measurement, a sleeve length measurement, and in some cases an upper arm measurement. In some cases, if the user has a garment that fits particularly well, they may use that garment as a source of measurements rather than their body. In some cases, scanning a code associated with the garment may bring up data about the garment which may be automatically entered into the system for use in avatar creation. In any case, based on the measurements, systems and methods according to present principles will generate an avatar or cause a choice of avatars to be presented for user selection.
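As a non-limiting sketch of how such a choice of avatars might be produced from the entered measurements, template avatars could simply be ranked by their distance to the user's measurements; the template data below is purely illustrative:

    import math

    # Hypothetical template avatars keyed by body measurements in centimeters.
    TEMPLATES = {
        "small":  {"chest": 88, "waist": 74, "hip": 90, "inseam": 76},
        "medium": {"chest": 96, "waist": 82, "hip": 98, "inseam": 80},
        "large":  {"chest": 106, "waist": 92, "hip": 108, "inseam": 82},
    }

    def rank_avatars(user):
        """Rank avatar templates by distance to the user's measurements."""
        def dist(t):
            return math.sqrt(sum((user[k] - t[k]) ** 2 for k in user))
        return sorted(TEMPLATES, key=lambda name: dist(TEMPLATES[name]))

    user = {"chest": 94, "waist": 80, "hip": 97, "inseam": 79}
    print(rank_avatars(user))  # ['medium', 'small', 'large']

A full parametric avatar generator would instead deform a base mesh to the measurements, but the selection flow is the same.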
[000108] Alternatively, the user may choose an existing avatar (step 238), optionally modifying the same according to their body measurements. In yet another implementation, the user may load their own avatar, again optionally modifying the same with body measurements (step 242). In any case, the end result is that a 3-D subject avatar is created for the viewer to use in traversing virtual environments (step 248). Where body measurements are employed, such may be particularly useful in the virtual "trying on" of clothes to be purchased, as well as the virtual wearing of jewelry, application of makeup, and so on.
[000109] Users may be enabled to choose their hairstyle, or the like. Users can further use a 3-D model scanner or camera to take a 3-D picture and import the same to the system, or the 3-D model scanner can automatically upload the 3-D image to the system or cloud.
[000110] Fig. 12 is a flowchart 300 by which creators can conveniently create 3-D environments for consumers, i.e., viewers. At an initial visit, a creator may click on a "new user" button (step 182) and thereby be enabled to register with the system (step 184). A user interface may then be instantiated in which the visitor to the site is asked if they are a creator, e.g., a merchant (step 186). In the case where they are not, thus clicking a "no" button or the like, the user may be redirected to the end user login (step 188), as implemented by, e.g., the method of Fig. 11.
[000111] If they click that they are a creator, e.g., a merchant, they may be directed to a screen in which an option is presented to create an environment, e.g., a store (step 192). Various ways are provided in which to build an environment or a store. Where a store or environment is not premade but built by the creator (step 194), then creators, which may include business owners, can import premade 3-D models of their home or office or business, or be enabled to build such (by the provision of an appropriate UI) within the 3-D VR system (step 196). Creators may also choose an option of selecting an existing store from a library of stores (step 198). In this case, various functionality may be provided to edit or otherwise personalize the selected premade store (step 202).
[000112] Where a premade store is not employed, various assets may be imported and used to provide a storefront or other environment as desired by the designer. For example, steps may be included of importing 2-D images (step 204), importing 3-D images (step 206), importing 3-D models (step 216) of products or the like, importing video files (step 208), or importing streaming video (step 212). Text may also be provided to display information about the environment descriptively. Such options will, it is understood, also be provided for premade stores, as part of an editing or personalization step. Generally such steps provide the creator with a means for provisioning the store, and placing 3-D models of products in desired locations, e.g., shelves, for viewing and purchase by a viewer. In so doing, the creator generally accesses a database, the database including data corresponding to an inventory of available items. In providing the media experience, e.g., the VR/AR environment, a server may expose a user interface, the user interface operable to allow a creator to situate items in the inventory in a virtual environment, by allowing or enabling the creator to create 3-D models of the items, or allowing or enabling the creator to import 3-D models of the items. A user interface may then be exposed for viewers to access the virtual environment, e.g., first and second viewers, and further to allow viewers to invite others to access the virtual environment.
[000113] Creators may also build items besides structures, e.g., a car, a boat, a chair, or the like. In these ways described, creators may build a personal home or room or office space, or any environment, to simulate the real world using premade building blocks (or custom building blocks) in the 3-D or VR environment. 3-D components may also be imported by users.
[000114] The store engine may then be built and finalized (step 218) for a given product (step 222) or service (step 224). For example, such steps may include determining and creating ways for a viewer to traverse the store or environment, providing software permissions for where viewers are allowed to traverse, providing rules for purchase or traversal, as may depend on the identity of the viewer, as may be based on login credentials, or the like.
[000115] Where the environment is primarily for a viewer to traverse a scene, just for enjoyment or amusement, it may not be required that the environment have a particular scale vis-à-vis the viewer. The scale of the environment may be selected to simply be comfortable to the viewer, to not cause nausea, and so on. In some cases, scale may be particularly important and taken into account. For example, if the viewer is traversing an online environment that is a furniture store, and wants to ensure that a selected couch will fit in their living room, items in the 3-D environment such as furniture may have a scale associated therewith, such that when the furniture is placed by the viewer in a virtual environment such as a 3-D representation of their living room, the scale of the living room and the scale of the furniture are the same, thus allowing an accurate picture of how the furniture will appear in the room. It is noted in this context that generally mouse or other movements may be employed to move a subject mesh (e.g., the couch) around a target mesh (e.g., the living room). Colliders may be appropriately employed such that, as the user moves one around the other, the CGI objects or meshes do not interpenetrate, causing unphysical visualizations. The scale may be stored as metadata with the target and/or subject mesh, or via another means.
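A minimal, non-limiting sketch of using such scale metadata, assuming the illustrative convention that each mesh records how many meters one model unit represents, is to re-express one mesh in the units of the other before any placement or comparison:

    def rescale(vertices, meters_per_unit_src, meters_per_unit_dst):
        """Re-express vertices authored at one scale in another mesh's units."""
        f = meters_per_unit_src / meters_per_unit_dst
        return [(x * f, y * f, z * f) for (x, y, z) in vertices]

    # A couch authored in centimeters (0.01 m/unit) placed into a living room
    # authored in meters (1.0 m/unit): 200 cm becomes 2.0 room units.
    couch_cm = [(0, 0, 0), (200, 90, 85)]
    print(rescale(couch_cm, 0.01, 1.0))  # [(0.0, 0.0, 0.0), (2.0, 0.9, 0.85)]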
[000116] Using systems and methods according to present principles, which may be implemented "on top of" more complicated 3-D environment creation tools such as Maya, Unity, Unreal Engine, CryEngine, and the like, creators may be enabled to quickly and easily create appropriate 3-D environments, including online environments and storefronts. Benefits of such implementations include that creators are not burdened with requiring significant software or modeling expertise to create such environments.
[000117] Fig. 13 is a flowchart 400 indicating one implementation of 3-D model creation or importation, which may apply to the creation of most of the 3-D and/or virtual environments described. An initial step is registration and/or login (step 252). The creator may be asked if they wish to use their own product or model (step 254). If the answer is yes, the creator may import images or 3-D models (step 258), e.g., via a 3-D model import engine, to enable 3-D models to be imported to the VR engine. Such imported images or 3-D models may then be converted to 3-D models (step 262) in the case of 2-D images or used directly as 3-D models (step 264). The models may then be moved around to appropriate locations as desired by the designer (step 266). Textual descriptors may be entered or imported to describe the models (step 268); e.g., SKUs, barcodes, QR codes, or the like may be entered in cases where the models represent items for sale. The models may then be tied to a backend database (step 272) via an appropriate API, so as to allow control and tracking of items sold in store inventory systems, accounting systems, and the like. It is noted that, where the creator has no models of their own, a market may be provided and visited for a creator to select (and optionally purchase) models for use in their store (step 256).
[000118] Fig. 14 is a flowchart 450 illustrating a more detailed method of 3-D product creation. Assuming a creator has a 2-D or 3-D image (step 274), the user may register and/or login to a system according to present principles (step 276). The user may indicate a desire to create a 3-D product model (step 278). In some cases, a new product model is not created, but a user selects a premade model (step 284). If the user indicates a desire to create a new product model, then the product model may be created (step 282). The 2-D or 3-D product image may be imported (step 286), followed by the 3-D model generation engine creating the actual model for placement in the environment based on the product model created in step 282 and the 2-D/3-D product image (step 288). In particular, the 3-D model generation engine may take the created product model and wrap the 2-D or 3-D product image around the same, texturing the model, and may further apply one or more shaders so as to achieve a desired model of a product.
[000119] Fig. 15 illustrates a flowchart 500 for creation of an environment in which services are showcased. After an initial registration/login step (step 296), the creator may be asked whether they intend to use their own 3-D service model (step 298). Such a service model may include animations of services provided, modeled images or textual descriptors of services provided, or the like. If the creator does not have their own, a market may be provided from which the creator may select premade or prefabricated models (step 302). If the creator does have their own models, such are imported in step 304. In this step, the creation may be of actual 3-D models or 2-D images. Where the imported objects are 3-D models, the same may be used directly (step 308). If the same are imported 2-D images, the 2-D images may be converted to 3-D models in an appropriate fashion, e.g., via the model generation engine (step 306). Once all the models are created or imported, the designer may move them around and otherwise showcase the services in a desired fashion (step 312).
[000120] Fig. 16 illustrates a flowchart 550 of a more detailed method of creating models for products or services. In particular, after an initial step of registration and/or login (step 314), an image may be entered into the system (step 316). The image may be a 2-D image such as a photograph, or a 3-D image, e.g., a set of stereoscopic photographs. Machine learning (step 318) may be employed to allow the system to learn over time and improve its estimation and calculation/creation of 3-D models. Machine vision may also be employed to review and analyze images for depth data, so as to reconstruct 3-D objects from 2-D images. Steps involved may include one or more of pattern recognition/depth calculation (step 322), 3-D model generation (step 324), and finally creating the 3-D model (step 326), which may include applying textures and shaders to the created 3-D model.
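Once depth data has been estimated for an image, reconstruction reduces to back-projecting pixels through a camera model. The following sketch is illustrative only and assumes a pinhole camera with known intrinsics; `depth_to_points` is a hypothetical name. It converts a per-pixel depth map into a 3-D point cloud from which a mesh could subsequently be built.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3-D points with a pinhole
    camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.

    `depth` is a row-major 2-D list of depths in metres; fx, fy, cx, cy
    are the camera intrinsics (focal lengths and principal point, pixels).
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # no depth estimate for this pixel
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Usage: a tiny 2x2 depth map with one missing pixel.
print(depth_to_points([[1.0, 0.0], [1.2, 1.1]], fx=500, fy=500, cx=1, cy=1))
```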
[000121] Fig. 17 illustrates a flowchart 600 for creation of a desired creator logo. A creator generally starts with a file corresponding to a 2-D or 3-D image (step 328), and commences by registering or logging into the system (step 332). The creator may be asked if they wish to create a new 3-D logo (step 334). If the answer is "no", the creator may be allowed to select a premade logo (step 338). If the answer is affirmative, then a create logo subroutine may be commenced (step 336). For example, the creator may take the 2-D or 3-D image file from step 328 and import the same to a logo creation engine (step 342). The same may then serve as the basis for a 3-D model as created by a 3-D model generation engine (step 344). Alternatively, the user may select an existing premade 3-D logo and provide edits thereto (step 346), again resulting in a usable 3-D logo model (step 348).
[000122] Fig. 18 illustrates a flowchart 650 related to integration with existing e-commerce backend systems. Using such methods, products may be integrated into the 3-D or VR/AR displays according to present principles, and/or information may be pulled from backend databases. For example, a 3-D product model may be input into the system (step 352). A product ID may be assigned to the 3-D product model (step 354). For example, exemplary product IDs may include SKUs, UPC barcodes, RFIDs, EAN-13 identifications, or any other identifier. The 3-D models may then be imported into an appropriate engine (step 356). For example, the 3-D models may be entered into a transaction engine, game engine, or the like. The engine may then be interfaced with a transaction database through an appropriate API (step 358). The API may be employed to pass product information to and from the engine (step 362). Such information may include a product description, price, stock levels, and so on. The engine may then employ the product information and display or render the 3-D model and the virtual environment, e.g., in 3-D, VR, AR, or the like.
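As a minimal sketch of such an API bridge (illustrative only; the `ProductInfo` and `TransactionApi` names are hypothetical, and a real backend would sit behind HTTP or a message bus rather than an in-memory dict), the engine might look up product data by identifier and report sales back so that inventory remains consistent:

```python
from dataclasses import dataclass

@dataclass
class ProductInfo:
    product_id: str      # SKU, UPC/EAN-13, RFID tag value, etc.
    description: str
    price: float
    stock: int

class TransactionApi:
    """Toy stand-in for the backend API: the render/transaction engine asks
    for product data by identifier and reports sales back."""

    def __init__(self, database):
        self._db = database                 # {product_id: ProductInfo}

    def get_product(self, product_id):
        return self._db[product_id]

    def record_sale(self, product_id, quantity=1):
        info = self._db[product_id]
        if info.stock < quantity:
            raise ValueError("insufficient stock")
        info.stock -= quantity              # keep inventory consistent
        return info

# Usage: the engine fetches price/stock to label the rendered 3-D model.
api = TransactionApi({"012345678905": ProductInfo(
    "012345678905", "Ceramic mug", 12.99, 40)})
print(api.get_product("012345678905").price)   # -> 12.99
```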
[000123] The flowchart 700 of Fig. 19 illustrates an exemplary user process for a viewer in AR. In a first step, a viewer registers and/or logs into the system (step 366). Variations may occur at this point. For example, if a viewer has found a product online (step 367), the viewer may select a store/location using an appropriate user interface (step 368), e.g., by gazing at a store location and having the gaze tracked. A 3-D version of the store may show up on the screen (step 372), and once the viewer is in the virtual store, the AR may be employed to direct the user to the product location (step 374).
[000124] Other variations may occur at this point. For example, a user interface may be employed which, when activated, allows variations in the product to be shown (step 376). Similarly, similar products may be displayed (step 378). In some implementations, an in-store or online coupon or other promotion may be activated (step 382). The viewer may, if desired, add the product to a virtual fitting room to allow their avatar, if accurately sized, to virtually "try on" the product, if the same is an article of clothing (step 388). The viewer may then check out (step 392) and allow the product to be shipped.
[000125] In another variation, after step 366, the viewer may find the physical product in a store (step 369). The product code, RFID, or other identifier may then be scanned (step 384). A 3-D version of the product may then appear on the AR display (step 386). Similar variations may occur as above, including activating to show product variations, activating to show similar products, or the use of in-store or online coupons or promotions.
[000126] The viewer may try on the product, and/or allow their avatar to try it on. The viewer may purchase a product in normal fashion, or may complete the transaction online, and either be allowed to take the product home or the same may be shipped.
[000127] AR implementations provide numerous benefits to consumers and also provide benefits to computing environments, as users may be enabled to more quickly find items and locations of interest, thus enabling more efficient and focused use of technology at hand, saving computing cycles and battery power.
[000128] For example, in-store consumers often find it difficult to navigate in large malls or large department stores to find the merchandise that they want. Such difficulty has fueled much of the move to online shopping, because it is easier to find merchandise online through a search engine. Retailers also find it difficult to connect with in-store shoppers to offer additional services. In systems and methods according to present principles, one or more of these problems are solved, depending on the implementation. For example, systems and methods according to present principles include hardware and software that provide multi-platform (mobile and desktop) and VR platforms to simulate real-life malls or stores or to provide virtual malls or stores, giving customers near real-life shopping experiences such as: walking into a mall, entering a store, meeting sales associates, checking product displays, trying on products, watching a demo or promotion, and so on. Without VR devices, systems and methods according to present principles may default to a 3-D environment viewable on a display screen.
[000129] Augmented reality implementations may be provided for in-store shoppers. Such implementations allow shoppers to enter a 3-D virtual store, which is generally an exact or highly accurate simulation of the real store, to help shoppers navigate in malls and stores and to find merchandise quickly. The AR system may recognize users' locations based on GPS or by the use of sensors in stores or malls through the internet of things or other techniques, e.g., Wi-Fi, infrared, telemetry, the use and tracking of wearables, and so on. The system thus allows users to search for products or product categories and shows the location of, and a path to, the product or product category. Shoppers may then follow the path indicated to reach the location of the product or product category. The system may further provide the viewer with product discounts or other promotions or recommendations that may be of interest. Once the viewer has reached the product (or before), he or she can invite his or her friends to the 3-D store using text, voice, video, and so on, as allowed through the social communications engine. A shopper's friend can join the 3-D store and look at the same merchandise with the shopper. For example, the shopper's friend may employ a VR headset which views the same scene as the shopper's AR headset, but from a slightly different vantage point. In this case, the portion of the scene that is actual or real life in AR may be portrayed in VR by, e.g., a 3-D model, a computer-generated depiction, or even a video feed. Where video is provided, the video can appear to emanate from the same location as the viewer, in which case the same video feed can be employed and the shopper's friend sees the same vantage point as the shopper, or the video can be made to appear to be from a slightly different vantage point, to simulate that the shopper's friend is standing next to the shopper.
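The path indication may be illustrated with a minimal sketch (not from the specification; `path_to_product` is a hypothetical name, and the grid model of the store floor is an assumption) that runs a breadth-first search and yields the sequence of cells an AR overlay could render as the guiding path:

```python
from collections import deque

def path_to_product(floor, start, goal):
    """Breadth-first search over a store floor grid: 0 = walkable aisle,
    1 = shelf/obstacle. Returns the list of cells from start to goal."""
    rows, cols = len(floor), len(floor[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and floor[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                             # product location unreachable

floor = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print(path_to_product(floor, (0, 0), (2, 3)))
```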
[000130] The AR system can be implemented independently of, or in coordination with, a VR system. The AR system may be implemented in 2-D or 3-D. For example, where bandwidth constraints are present, or there is no 3-D transaction system, then either AR or VR may be implemented as a 2-D application. Generally, where implemented in AR, two additional engines are provided: a location engine to identify the user location, and a service engine to provide user services based on location. In the case where VR is not employed, an AR system generally will include just the location and service engines.
[000131] The viewer may have several options to show the merchandise to their friend. For example, if the merchandise is in a 3-D/VR store, the viewer and their friends can look at the product in the 3-D/VR online virtual environment, and can discuss within the context of that system. If the merchandise is not in the 3-D/VR online environment, or the viewer wishes to "try on" the product, the viewer can use a 3-D scanner or 3-D model capture booth to scan the product or himself/herself with the product, and can automatically import the image to the 3-D system for sharing with friends.
[000132] The platform/system may provide features that will benefit both in-store and online shopping customers, enhancing online and in-store consumer confidence of purchases, bridging the gap between online shopping and in-store shopping.
[000133] The platform technology has several functions and advantages. First, the system can simulate a city or mall or store for online shoppers. Online shoppers can view both products and services. For example, a viewer can be enabled to walk on a virtual street and enter a 3-D/VR mall or store through their mobile or desktop device, e.g., coupled with VR or AR functionality. The viewer can enter a store and meet sales associates or view a model. The viewer can enter a demo room or demo table to watch a demo or product promotion, to view or test products in-store. The viewer can enter a product promotion event in a theater. The viewer can watch a movie in a theater. The viewer can enter a class or school for learning. The viewer can shop for services such as travel products. Online or in-store viewers can invite friends or relatives to a virtual store in real time to see the products and help them make purchase decisions, regardless of distances between them. Viewers can create their own virtual room or house, can fit furniture within the same, can fit other items such as housewares, home decorations, or DIY/home-improvement products, all before making expensive investments or purchases. The viewers can virtually attempt to construct or assemble products, where necessary, prior to buying, to determine the complexity of an assembly procedure. A creator who is a retailer can create a virtual live sales associate avatar to assist online shoppers, which provides a more user-friendly experience than merely chat. Viewers can enter a 3-D virtual mall or store through a mobile device to locate stores or products, or compare prices (in certain such implementations, no VR/AR equipment is required).
[000134] Retailers/creators can register and have the option of selecting prebuilt stores or can create their own custom stores. Drag-and-drop building blocks may be provided to make such construction more convenient. Product images/models may be uploaded to stores and placed on shelves. Stores may be decorated in any manner enabled by materials/textures/shader creation.
Creators can preselect or create a customer service avatar to assist online shoppers. Creators/retailers can set up or build a demo room for products or services. Creators/retailers may employ an API to transfer online merchandise data. Creators/retailers may be enabled to automatically convert 2-D product images to 3-D product images. Systems and methods according to present principles can also be used in a 3-D VR platform, e.g., with or without a payment component, to allow the merchant to display products or services.
[000135] While the above implementations provide certain aspects related to online shopping, the same procedure is intended to be a general description of the use of 3-D created models in AR. Additional implementation details specific to shopping or other applications are described below.
[000136] Fig. 20 illustrates a method 750 related to 3-D, 3-D animation, or VR product displays on 2-D transaction sites, and more particularly where such displays appear on the "front end" of such sites. In a first step, a transaction site viewer registers and/or logs on (step 366). In some implementations, the 3-D model is then displayed directly on the front page of the site, or in a product gallery, replacing regular product images (step 365). In this case, the viewer may click on the 3-D model or 3-D animation or the product name (step 374), and additional details may be displayed (step 376).
[000137] In some implementations, the 3-D model is not directly displayed (step 371). In this case, the product image or name is displayed, and if activated or otherwise clicked on (step 368), the viewer may be led to a button with a title such as "See in 3D" on the product page (step 372).
Following activation of such a button, the product model, 3-D model, or 3-D animation may then be displayed (step 376).
[000138] Fig. 21 illustrates a method 800 related to similar model types as in Fig. 20, but where the same are integrated as part of the backend. In this figure, a first step again is a login step, and in this case the transaction site may be logged onto with administrative privileges (step 378). A product section may be activated (step 382), where the user can select the product or service. The user may then indicate, on an appropriate user interface, a desire to upload a desired model, e.g., a 3-D model, 3-D animation, and so on. The user may then select the model to be loaded (step 386), and the same may be subsequently uploaded (step 388). Once the product or service model is uploaded, the user with administrative privileges can see the 3-D product or service model in the user page and admin backend page, i.e., the product model or animation may then be displayed in the front end, or in both the front end and the backend (step 392). For a 3-D fitting room model for fitting furniture, home decor, appliances, or clothes products, there may be provided a button section in the admin page (usually on the front page) that either allows a website admin to upload fitting room 3-D models, or enables a building environment for users to build their own fitting room.
[000139] Given the above, it will be understood that certain aspects of systems and methods according to present principles are general to multiple applications, while others are specific to certain applications. The above described implementations tend to be generic to multiple applications, while below sections generally describe implementation details specific to certain applications.
[000140] Using systems and methods according to present principles, users can import premade 3-D models of their home or office, or build numerous types of constructions in the 3-D or VR environment, either for visualization of products or services, transactions regarding such products or services, e.g., commercial transactions, entertainment, and so on. For example, a commercial enterprise can build an environment representative of their business, to show off products and services. The commercial enterprise can build a virtual environment representative of a showroom, offices, or the like. Commercial enterprises or users can build structures such as cars, boats, chairs, and so on. Creators and viewers can build a personal home or room or office space, or can build any environment to simulate the real world using premade building blocks in the 3-D or VR environment, and can build or import 3-D components.
[000141] Creators may build environments in several ways. In one way, home or office dimensions may be entered, and the system may automatically generate a 3-D model of the home or office. Creators may also enter windows or door dimensions, and the system may automatically generate windows and doors. In this way, creators can put windows or doors (or other accoutrements) on their virtual room or office. In a second way, creators can import/upload premade 3-D models of their homes or offices. In a third way, creators may take a photo of their real-world home or office to import to the system. A 3-D model generator, as described above, will automatically generate a 3-D model for such users based on the photos. In some implementations, the creators may have the option to edit the 3-D model. In a fourth way, creators can use building blocks from the system itself or imported from external sources, and build the appertaining environments within the system. Numerous customization options may be provided, including allowing the creator to choose color, style, and so on.
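The first way, generating a model from entered dimensions, may be illustrated with a minimal sketch (illustrative only; `room_mesh` is a hypothetical name). It produces the eight corner vertices and twelve triangles of a rectangular room; door and window openings would be cut from the wall quads in a later pass.

```python
def room_mesh(width, depth, height):
    """Generate a rectangular room mesh from entered dimensions (metres).

    Returns (vertices, faces): eight corners and twelve triangles covering
    the floor, ceiling, and four walls.
    """
    v = [(x, y, z) for y in (0.0, height)
                   for z in (0.0, depth)
                   for x in (0.0, width)]
    quads = [(0, 1, 3, 2),   # floor
             (4, 6, 7, 5),   # ceiling
             (0, 2, 6, 4),   # wall at x = 0
             (1, 5, 7, 3),   # wall at x = width
             (0, 4, 5, 1),   # wall at z = 0
             (2, 3, 7, 6)]   # wall at z = depth
    faces = []
    for a, b, c, d in quads:  # split each quad into two triangles
        faces.extend([(a, b, c), (a, c, d)])
    return v, faces

vertices, faces = room_mesh(4.0, 3.0, 2.4)
print(len(vertices), len(faces))   # -> 8 12
```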
[000142] Business users can log in as a business owner or entity, and can either create an avatar as an end user or build a business in the 3-D, VR, or AR environment. Like end users, business users can build their business by importing premade 3-D models, including stores, products,
models/animations indicating services provided, and so on. Business owners may also be enabled to take a photo of their existing business, for importing to a 3-D model generator to generate the 3-D model. As before, users can have the option to edit the 3-D model. Business owners may also take a 3-D image of their business and the 3-D model generator may generate a 3-D model therefrom. Multiple photos may be taken to allow a better 3-D visualization. In yet other implementations, a 3-D camera may be employed to even further improve the 3-D model of the business. In other implementations, businesses or stores may be built from building blocks in the environment, or imported from external construction programs, to build a business or store from scratch. As before, business owners can choose the style, color, logo, and so on. Business users, i.e., business creators, can either build 3-D or VR tangible businesses such as stores or malls, or can build other sorts of businesses, e.g., restaurants, travel agencies, theaters, games, and so on, for various purposes including merchandising, promotion of services, marketing, providing user experiences, and so on. For example, a restaurant owner may provide a virtual experience at their restaurant, including the provision of food items, even if the viewer cannot fully experience the restaurant without actually being there. Businesses that sell items that require user assembly in the real world can show the packing of the box in the virtual reality environment, and users can have an experience of opening the box and removing and/or assembling the product.
[000143] This description generally uses terms such as "viewer" and "creator", although in some cases, where both entities are creating meshes which will then be compared, either objectively or subjectively, each party is creating or providing a mesh.
[000144] Generally, viewer or end-user functions include: registering and uploading 3-D models, or pre-selecting an existing avatar with the option of entering their own body dimensions; entering an environment, and moving around inside it; checking or reviewing products or services, talking to virtual customer service avatars, watching product demos, testing products; product selection to reveal features and functions of the product; checking out the product and making payments; inviting friends to shop together in a store through a social network or engine; chatting or talking to friends on social networks while inside the online environment; opening a product box and assembling or
disassembling the product; creating a virtual room/house that simulates their own room/house so that furniture or other accoutrements may be tested in the room; decorating a virtual room or house; trying on products or testing products before buying; trying on clothes or other personal items before buying; and so on.
[000145] Generally, merchant or creator functions include: registering and selecting prebuilt stores, or creating new stores; customizing prebuilt or constructed stores; uploading 2-D/3-D product images or models to the store, and placing the same on shelves; decorating the store; preselecting or creating a customer service avatar to assist viewers; setting up or building a demo room for products or services; providing or using an API to transfer online merchandise data to other entities in the transaction chain; automatically converting 2-D product images to 3-D product images; displaying a product demo video; making a product demo room or table; providing demos to viewers; creating a product assembly or disassembly instructional video, e.g., in 2-D or 3-D, using the product building blocks to show users how to assemble or disassemble a product; creating product repair instructions in 2-D, 3-D, or VR to show users how to repair products; creating themed stores (as described in greater detail below); importing, creating, or setting up sales or customer service avatars; and displaying services such as restaurants, travel agents, and so on.
[000146] Various examples are now described.
[000147] Examples
[000148] Online Shopping
[000149] Current online shopping experiences are unsatisfactory. There is a high rate of return of items in online shopping, as users cannot try on or fit products before buying, articles such as furniture or house items cannot be tested or determined to be appropriate for a particular location in a house or apartment, and for products requiring assembly, users are unable to determine the difficulty of assembly before purchasing the object.
[000150] Current shopping experiences may also be unsatisfactory for other reasons. For example, it is difficult to get a group of friends together at a common time to experience shopping in the real world. However, such mutual shopping experiences are highly desirable for consumers, as one's friends can help decide whether a purchase is worthwhile or if a product "looks good" on the consumer. And the experience of "hanging out at the mall" is generally enjoyable for many users. Online shopping is more convenient, but is generally a much more solitary experience, lacking the social character of going to the mall with friends.
[000151] In addition, many online shopping sites are not intuitive, and finding a particular desired product may be very difficult. Searching for a product in a search engine typically yields thousands of results, and it is difficult for a consumer to know which item to purchase. In addition, it is very difficult for an online retailer to introduce new products into an online ecosystem, much less to have new products situated in a way to call attention to viewers.
[000152] Thus, in systems and methods according to present principles, many or all of the needs above are addressed. Mutual shopping with friends is provided, but with all the convenience of online shopping. Viewers may invite a friend to a store or mall to shop together, or to socialize in a coffee shop, or to chat or watch a movie together, just like in the "real world". Such viewers may communicate through text, voice, video, or various conferencing schemes. Disadvantages associated with not being able to try on clothes or other articles are ameliorated by graphical processing of a subject mesh vis-à-vis a target mesh. Similar advantages inure to testing furniture in rooms associated with a viewer's home, trying on jewelry, trying on makeup, and the like. For example, a viewer may be enabled to situate potential furniture in a 3-D model of the viewer's living room, testing not only for size but for aesthetic qualities, e.g., color, and the like. Viewers, e.g., customers, may be enabled to virtually experience a product or service, increasing satisfaction and likelihood of conversion. In some cases services may be combined with purchasing functionality. For example, an interior designer may virtually "walk through" a room, suggesting items with which to furnish or decorate the room. The same may also virtually change the color or wallpaper of a room. In another shopping example, a customer service representative, either real or AI, may be implemented as an avatar in the virtual environment, and the same may teach a viewer how to dress in a particular way, how to apply makeup, what jewelry might be of interest, and so on. The viewer may then purchase such items.
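The comparison of a subject mesh against a target mesh may be illustrated with a minimal sketch (illustrative only; `mesh_fit_score` is a hypothetical name). It scores two vertex sets, e.g., a body avatar against a garment's inner surface, by the mean nearest-vertex distance; a production system would use a spatial index rather than the brute-force search shown here.

```python
import math

def mesh_fit_score(subject, target):
    """Mean nearest-vertex distance between two vertex sets, a crude
    proxy for how closely a subject mesh conforms to a target mesh:
    smaller values indicate a closer fit."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)   # O(n) per vertex
    return sum(nearest(p, target) for p in subject) / len(subject)

# Usage: a toy "body" against a slightly offset "garment" surface.
body = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.3, 0.5, 0.0)]
shirt = [(0.0, 0.0, 0.02), (0.0, 1.0, 0.02), (0.31, 0.5, 0.02)]
print(round(mesh_fit_score(body, shirt), 3))   # small value -> close fit
```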
[000153] Benefits of such systems and methods according to present principles are manifold. For example, items look particularly interesting in virtual reality and in combination with a realistic avatar or a realistic depiction of a viewer's environment. Consumers are more easily educated through virtual demonstrations or showrooms. Customers can "try" products without the potential to break such products, and customers may be easily trained through virtual assembly or disassembly of products. VR/AR front end systems may be conveniently integrated with existing storefront backend databases. Merchandise may be "fitted", e.g., users can try on clothes, shoes, or cosmetics on their avatars, or can fit furniture or equipment into a room, an office, or a store, and can in this way also test out different colors, dimensions, and so on. Other advantages will also be understood.
[000154] Generally, systems and methods according to present principles may be applied to 3-D, VR, and AR experiences. In some cases only a target mesh will be employed, potentially along with a subject mesh of a user (constituting an avatar). Such experiences include learning experiences, e.g., classrooms, labs, adventures, watching movies or live performances, and so on. Shopping or browsing experiences may include comparison of a viewer subject mesh with a creator target mesh, such generally including the purchase of personal goods such as clothes, shoes, bags, electronic devices, and so on. The subject mesh is particularly employed where the potential product to be purchased can be "tried on" or, e.g., fitted in a room. Other examples include attempting in VR to assemble a product, to determine the difficulty of assembly. In particularly enhanced implementations, the experience of assembly can be enhanced with haptic or other feedback (which may also be applied to other implementations).
[000155] Storefronts may be created and modified by the creator, i.e., store owner, retailer, and so on. While a viewer generally does not modify the storefront, certain animations or other graphics may be employed such that the viewer can visualize putting or placing items in a shopping cart, and so on.
[000156] When purchasing furniture, models may be created of the furniture (or indeed any products), and such models may be loaded by a viewer during shopping. For example, in an AR implementation, if a viewer wants to see how a couch looks in their living room, the viewer may have already loaded a 3-D model of their living room into their system, or the same may be accessible from the cloud. For example, such 3-D model creation may be performed by the methods described, including taking a 2-D image, having a 3-D image taken, using a 3-D model generator, and so on. The viewer may then scan a barcode or the like of the couch, and a 3-D model of the couch may be visualized in their living room or on a table, e.g., using an AR device such as Google Glass or Microsoft HoloLens. Of course, in AR, the actual couch may be situated within the (virtual) image or model of the viewer's room. A scale may have been stored with each mesh, e.g., the living room and the couch, such that when the couch is visualized in the living room, the scale is properly set. Alternatively, the cloud may be accessed for the 3-D model of the living room, but the couch in the showroom seen through the AR device and shown, in particular, situated within the viewer's living room. Similar applications inure to VR environments, and so on. It will be understood that numerous variations of the above are within the scope of systems and methods according to present principles. For example, viewers may employ such systems and methods to fit and select office furniture, store furnishings, laboratory layouts, clean room layouts, and so on.
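The use of a stored scale may be illustrated with a minimal sketch (illustrative only; `place_in_room` is a hypothetical name), in which each mesh carries a metres-per-unit factor so that a product modelled at one scale lands correctly sized inside a room modelled at another:

```python
def place_in_room(product_vertices, product_scale, room_scale, position):
    """Re-express a product mesh in the room's units and translate it to
    the chosen spot. `product_scale` and `room_scale` are the stored
    metres-per-unit factors for each mesh."""
    s = product_scale / room_scale          # product units -> room units
    px, py, pz = position
    return [(x * s + px, y * s + py, z * s + pz)
            for x, y, z in product_vertices]

# Usage: a couch modelled in centimetres (0.01 m/unit) placed in a room
# modelled in metres (1.0 m/unit), at 2 m along x and 1 m along z.
couch = [(0, 0, 0), (200, 0, 0), (200, 80, 90)]
print(place_in_room(couch, 0.01, 1.0, (2.0, 0.0, 1.0)))
```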
[000157] In clothing applications, as noted above, properly-sized avatars may be employed to virtually "try on" clothes for fitting purposes. The meshes used for the clothes may be conveniently obtained from manufacturers, who often design clothes using 3-D modeling software. Such systems may be employed to try on, besides clothes, shoes, cosmetics, hairstyles, jewelry, makeup, handbags, and so on. In any case, selected items may be placed into a shopping cart and the viewer/user may make payment through various online payment systems.
[000158] In some implementations, where a mall is configured, if it makes economic sense to the mall owner, spaces in the mall may be leased to store owners. Particularly preferable locations may be near anchor stores or entertainment locations. For example, virtual malls may include virtual arcades where viewers can go to play games, solo or with others, or cinemas where viewers may watch movies with other viewers.
[000159] Use of a mall layout/metaphor provides significant technological benefits, as a viewer or user can traverse from one virtual storefront to another in a particularly rapid manner. In prior efforts, the user would have to open a new browser window, navigate to the second virtual storefront, and even then the result would be a static 2-D experience, far inferior to the VR experience of a mall. But in systems and methods according to present principles, the viewer or user is saved considerable effort, saving keystrokes, mouse clicks, as well as computing cycles and battery power (for battery-operated devices).
[000160] In certain implementations, creators (or AI representatives of creators) may
communicate with customers in the VR environment to provide assistance thereto, either directly or through a customer service avatar, which the creator may import or create for such purpose. Merchants may communicate with their customers in 3-D or VR or AR for sales and customer support questions.
[000161] Creators may provide a themed shopping experience. For example, a merchant may build an online environment (which still serves as a store) in a setting that fits their product. For example, a merchant who sells sports products may build a store that has a race track, a basketball court, a mountain for skiing, a beach for surfing, and so on, for users to try on and try out (virtually) their sports products. A merchant who sells Italian products can build a store in a simulated Italian background/city, such as Rome.
[000162] In one example of an implementation of a user interaction with products in 3-D, VR, or AR, where the user can be a shopper, the following steps may be instituted, as seen by the flowchart 1150 of Fig. 22.
[000163] In a VR implementation, users in a 2-D, 3-D, or VR online environment or game may enter a store or other environment having products, e.g., museum, classroom, and so on (step 502). The viewer may select a product that is interesting (step 504). Buttons or handles may be attached to the product allowing the viewer to interact with the same, e.g., upon appropriate clicking, dragging, or activation (step 506).
[000164] In an AR implementation, a viewer may be in a physical store (step 508), and may use, e.g., a mobile device to capture information about a product that is interesting (step 512), e.g., a SKU or barcode, QR code, or the like. An indication of the product is then displayed on the mobile device (step 514), and again buttons or handles may be provided to allow more convenient user interaction.
[000165] The user or viewer, in VR or AR, may interact with the product in various ways. For example, the product itself may be interacted with to review internal structure or components (step 516). For example, an exploded view may be provided. An animation may be provided to illustrate how the product works (step 518). An animation may also be provided to demonstrate various product features (step 522).
[000166] In one example of an animation, a step-by-step animation may be provided to illustrate how to assemble or disassemble the product (step 524). Any of the aspects portrayed, e.g., product interactions or animations, may be added to a favorites list, shopping cart, wish list, or customized space or room within a 3-D model (step 526). The user may click on a product information button to obtain additional information (step 528), e.g., a more detailed description, price information, inventory information, and so on.
[000167] Fig. 23 shows a flowchart 1200 of a method according to present principles, e.g., in particular for a shopper or user interaction in 3-D, VR, or AR, for group shopping or social shopping. In a first step, a user enters an online environment, e.g., which may be 2-D, 3-D, VR, a mall, and so on (step 534). The user may invite others to join in various ways (step 536).
[000168] Alternatively, a user may be in a physical store or mall (step 538). The location of the user may be identified in various ways, e.g., GPS, Wi-Fi, pattern recognition in an AR system, and so on (step 542). The user may then invite another user to join their AR session (step 544). Friends or family receive the invitation and may accept the same (step 546). The friends or family, typically employing a VR device, may then join and enter the same store or mall in the same location as the inviter (step 548). The friends or family may join the shopping (step 552), and may interact with the original user using voice, text, video, and so on (step 554).
[000169] Fig. 24A illustrates a method according to present principles, and in particular a flowchart 1250. Again the user enters a store as above (step 556), although in this case generally a VR or 3-D environment is preferred. The viewer may select a desired article of clothing (step 558), and may push or activate a button or otherwise indicate a desire to try on the article of clothing (step 562). A check is made as to whether the user is appropriately registered (step 564). If not, the user may register and enter their bodily dimensions, as well as an avatar or other image, as described above (step 566). Subsequently, or if the user is already registered, their avatar may be displayed (step 568). The user may be presented with the selected article of clothing, or may select the same again (step 572), and the system may match the selected article of clothing with the user's body dimensions. An algorithm may be employed to determine, given the type of fabric, the stretch of the fabric, the size of the article, and so on, whether the article of clothing will fit the user to within a predetermined threshold, e.g., 5%, 10%, and so on. Users may indicate in preferences or user settings whether they prefer loose-fitting clothes, tight-fitting clothes, and so on. If there is a fit, the article of clothing may be displayed on the user's body (step 578). If the algorithm determines the fit to be poor, e.g., more than 20% away from an optimum, then the system may suggest that the user select another article of clothing (step 584).
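One possible form of such a fitting algorithm is sketched below. It is illustrative only: `clothing_fits` is a hypothetical name, and the stretch and tolerance figures simply echo the example thresholds mentioned above. Each body measurement is compared against the corresponding garment measurement after allowing for fabric stretch.

```python
def clothing_fits(body_cm, garment_cm, stretch=0.05, tolerance=0.10):
    """Check each body measurement against the garment measurement,
    allowing the fabric to stretch by `stretch` (e.g., 5%) and accepting
    a fit within `tolerance` (e.g., 10%) of the body dimension.

    body_cm / garment_cm: dicts like {"chest": 96, "waist": 82}.
    Returns (fits, worst_deviation) so the caller can suggest another
    size when the deviation is too large (e.g., beyond 20%).
    """
    worst = 0.0
    for key, body in body_cm.items():
        effective = garment_cm[key] * (1.0 + stretch)  # stretched garment
        deviation = (body - effective) / body          # > 0 means too tight
        worst = max(worst, deviation)
    return worst <= tolerance, worst

ok, dev = clothing_fits({"chest": 100, "waist": 84},
                        {"chest": 96, "waist": 82})
print(ok, round(dev, 3))   # -> True 0.0 (stretch absorbs the difference)
```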
[000170] Assuming a fit is eventually achieved, the user may have the option to see detailed information about the article of clothing, such as price, size, color options, and so on (step 582). The user may then share a photo or other indicator of the experience with their friends (step 586), may invite friends to view the fitting using the social engine (step 588), and/or may add the article of clothing to their cart, favorites list, wish list, and so on (step 592).
[000171] In another implementation, clothes may be fitted in AR. This implementation is illustrated by the flowchart 1275 of Fig. 24B. Certain details are the same as in Fig. 24A, and their description is not repeated here. In Fig. 24B, as the user is present with the article of clothing, the user may employ a mobile device to scan or otherwise capture information about the article of clothing (step 594), and a 3-D model of the article of clothing may be displayed on the mobile device (step 596). The rest of the implementation is similar to that of Fig. 24A, although in this case the user may also physically try on the article of clothing.
[000172] Fig. 25 illustrates a flowchart 1300 of a method of using a real-time intelligent assistant in an online virtual environment, e.g., in 3-D, VR, or AR. In a first step, a user virtually enters a 2-D, 3-D, or VR store or mall or game, or physically enters a store in an AR implementation (step 602). At some point, the viewer may request sales or customer support through an appropriate pushbutton, text, voice indicator, video indicator, and so on (step 604). Sales or customer support associated with the creator (retailer) may then dispatch a support avatar based on the number of users and requests, and the same may be instantiated near the user who made the request (step 606). A greeting may be made, and users may then ask questions of the avatar using various means (step 608). The avatar may receive the questions using voice recognition, text, video, and so on. If the avatar is backed by a real person, the avatar may provide the answer to the user from the real person (step 618). If the avatar is purely virtual, the avatar routine may search a database to match the question (step 616) to the appropriate answer (step 614). If a match is found, the avatar provides the answer to the viewer, or may list several possible answers for the user to choose from (step 622). The user then chooses the answer or requests further assistance (step 624). If additional assistance is needed, an avatar backed by a real person may communicate with the requesting viewer in various ways, e.g., text, voice, video, and so on. The session may then terminate, or a survey may be provided to the viewer (step 628).
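The database-matching step may be illustrated with a minimal sketch (illustrative only; `best_answer` is a hypothetical name, and a real assistant would use proper language understanding rather than keyword overlap). Stored question/answer pairs are scored against the viewer's question, and a missing match is the cue for escalating to a human-backed avatar:

```python
def best_answer(question, faq):
    """Score each stored Q/A pair by keyword overlap with the viewer's
    question; return the best answer, or None when nothing overlaps."""
    words = set(question.lower().split())
    best, best_score = None, 0
    for stored_q, answer in faq.items():
        score = len(words & set(stored_q.lower().split()))
        if score > best_score:
            best, best_score = answer, score
    return best

faq = {"what is the return policy": "Returns accepted within 30 days.",
       "do you ship internationally": "Yes, to most countries."}
print(best_answer("what is your return policy please", faq))
```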
[000173] As noted above, themed shopping may be provided in 3-D, VR, or AR. In this implementation, 3-D or VR stores may be provided according to a certain product theme. For example, a store or shopping mall may be placed in a virtual mountain ski resort, or luxury Italian products may be placed in a virtual boutique in Rome. In some implementations, a sporting area may be provided to allow viewers to try, test, or play with products, e.g., a field or area. In more sophisticated implementations, a ski resort may be virtually constructed in a ski store, a swimming pool in a swimming wear store, and so on. Users may be enabled to take an image and share their experiences testing products in the themed store.
[000174] Fig. 26 illustrates a flowchart 1350 for themed shopping. For example, a business user may choose a store location/address (step 632). An option may be given to the owner to build a themed store (step 634). If the answer is no, the store owner or creator may proceed to the regular store building process described elsewhere (step 636). However, if the answer is yes, a theme may be selected or uploaded for the themed store (step 638).
[000175] Whether the store is themed or not, a choice may be given to the creator to build a product test area (step 642). If the answer is no, a regular store model may be built as described elsewhere (step 644). If the answer is yes, the test area, generally in accordance with the theme, may be selected, built, or uploaded (step 646).
[000176] The flowchart 1400 of Fig. 27 illustrates themed shopping from the standpoint of the user. The user enters the themed shopping store, center, scene, game, mall, or so on (step 648). The user may, e.g., enter keywords in order to search for products (step 652). In some cases the user will choose a themed store (step 654). The user may then try, test, or play with products in the themed store or in a test area if provided (step 656).
[000177] Alternatively, if the user is in a physical store (step 658), a mobile device or the like may capture the ID of an interesting product (step 660). An AR app may show the product in the themed virtual store, or in a test area (step 662). Whether or not the user is in the physical store, the user can try, test, or play with products in the themed store or test area (step 664). The product may be added to a cart, wish list, or shared (step 666). The theme could be saved for the next time, or shared within the social network (step 668). In this way, the user may be enabled to shop in the desired environment but potentially in every store in the mall. A final step, if the user decides to purchase a product from the themed environment, is the checkout step (step 670).
[000178] Fig. 28 illustrates a flowchart 1450 specific to AR shopping in malls or shopping centers. In a first step, a user with a mobile device such as an iPhone, iPad, wearable device, and so on, enters a physical mall or shopping center (step 672). An AR app on the mobile device may recognize the location in various ways, e.g., pattern recognition, object recognition, scanning signs, codes, images, or AR IDs, and so on (step 674). Alternatively, the app may locate the user location through GPS, Wi-Fi, or other positioning systems (step 678).
[000179] In the first case, the AR app may recognize the entire mall (step 682), may recognize a single store (step 684), may recognize an event or activity (step 688), and so on. Following recognition of the entire mall, major promotions, activities, events, or announcements may be caused to be displayed on the user mobile device (step 688). If recognition is of a single store, the same may be accompanied by store promotions, or announcements of various events/activities related to the store, again displayed on the mobile device (step 692). If recognition is of activities/events, e.g., by QR codes, barcodes, or the like, event or activity information may be displayed on the screen of the mobile device (step 694).
[000180] The user may then tap or select a promotion, event, or activity (step 696). An arrow or sign on the mobile device may appear to guide the user to the location of the selected promotion, event, or activity (step 698).
[000181] Other specific AR applications may also be understood (step 702).
[000182] In another implementation, the user may search products, events, or activities in the mall or shopping center (step 676).
[000183] The flowchart 1500 of Fig. 29 illustrates an implementation of using AR shopping in physical stores. A mobile user enters a physical store (step 704). If a product is found online, then the user may select a particular store/location (step 706). A 3-D model of the store may appear on the user's mobile device screen (step 708), and the user may be directed to the product section within the virtual store (step 710). As above, product variations may be displayed (step 712), similar products may be displayed (step 714), online coupons and promotions may be displayed and activated by the user (step 716), and so on.
[000184] In the case where the user finds a product in a store, the product code or the like may be scanned (step 718) and the product or an indicator thereof displayed on the screen (step 719).
[000185] In any case, the product may be physically or virtually tried on in a fitting room (step 720), and if necessary or desired a checkout procedure may be performed (step 722).
[000186] The flowchart 1550 of Fig. 30 illustrates another shopping implementation, but where the products involve furniture, home decor, or appliances, and where the same are being fitted to a space in 3-D, VR, AR, or the like. In a first step, users enter a store or game in VR or 3-D (step 724). Alternatively, the user may be in a physical store (step 730), and a mobile device may be employed to capture the store or a product with, e.g., an AR app (step 732).
[000187] The app may inquire of the user if a customized space or room is available (step 726). If not, the user may be prompted to create a new space or room with a desired dimension and/or shape (step 728). If the user has a customized space/room already, the same may be loaded or imported, or downloaded from the cloud, or the like. Products may be moved into or out of the room or space (step 734). Other steps may be employed, including moving, rotating, or otherwise adjusting products to fit the space or room, and to fit with other items in the space or room. As noted above, products may have an appropriate scale that may be set as a common scale for both the viewer and the creator, e.g., a common scale for a target mesh and a subject mesh, so that the appropriate scale is set and products are properly sized in the 3-D environment, the VR environment, and/or the AR environment.
[000188] Products may be clicked on for additional information (step 738). The space itself, with the products, may be captured as an image or 3-D model (step 740), shared with friends or family (step 742), or added to various lists including shopping carts and wish lists (step 744). As before, the user may then check out (step 746).
[000189] In various applications, including shopping but also others, systems and methods according to present principles may implement an AR ID and location system, employable to identify places, objects, or products to display in AR. In particular, current AR applications often identify locations primarily through GPS. However, GPS is often not accurate enough to distinguish places that are very close together, and there is no current solution for identifying objects or products; thus it is difficult to implement AR in shopping, events, activities, or other applications.
[000190] However, systems and methods according to present principles include a system that uses wireless technology (e.g., Bluetooth, Wi-Fi, infrared, or a mobile network) to locate the mobile device in question. Scanner technology may also be employed, such as that associated with barcodes, QR codes, RFIDs, or specially designed images, patterns, or codes. If a wireless technology is employed, the locations or objects may have a device that emits wireless signals. A user with a mobile device may automatically detect the wireless signal so that identification of the locations or objects can be retrieved. If scanner technology is employed, the locations or objects may be fitted with a barcode, QR code, RFID, or other specially designed image, pattern, or code, for a user's mobile device to scan. Once the information is scanned into the AR system, the locations and the objects may be recognized.
[000191] Fig. 31 is a flowchart 1600 illustrating an AR ID and location system, usable for identifying places, objects, or products, to display in AR. In a first step, a user uses their mobile device while around objects in combination with an AR device or system (step 748). The user's mobile device may employ wireless signals for location acquisition (step 754), or may employ scanning of various visual images to obtain location data (step 756). The AR app may identify various locations or objects (step 750), and the same may access one or more databases or files to extract location information (step 752).
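The identification step may be illustrated with a minimal sketch (illustrative only; the registry contents and the `resolve_ar_id` name are hypothetical), in which whatever identifier reaches the device, be it a decoded QR or barcode payload, an RFID tag value, or a beacon ID, is mapped to the place, object, or product it labels:

```python
from dataclasses import dataclass

@dataclass
class ArTarget:
    kind: str        # "location", "object", or "product"
    name: str
    overlay: str     # what the AR app should display for this target

# Hypothetical registry keyed by whatever identifier reaches the device.
AR_REGISTRY = {
    "qr:store-042": ArTarget("location", "North entrance", "Mall map"),
    "beacon:a1b2c3": ArTarget("location", "Food court", "Today's deals"),
    "ean13:4006381333931": ArTarget("product", "Ballpoint pen",
                                    "3-D model + price"),
}

def resolve_ar_id(identifier):
    """Map a scanned or received identifier to the thing it labels.
    Unknown IDs return None, letting the app fall back to GPS/Wi-Fi."""
    return AR_REGISTRY.get(identifier)

print(resolve_ar_id("beacon:a1b2c3"))
```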
[000192] Real Estate
[000193] In a real estate or hospitality implementation, a property owner or agent may load images/3-D models and dimensions, as well as various requirements. An interior designer or 3-D artist may be employed if necessary, who may then receive the real estate information and make 3-D models or interior designs based on the property owner's requirements. The interior designer/3-D artist may then send the finished designs to the property owners or agents. If the designs are approved, payment may be made from the property owner to the designers or 3-D artists. Properties may then be displayed in VR or 3-D for the user to view or purchase, e.g., online or otherwise.
[000194] In particular, referring to the flowchart 850 of Fig. 32, in a first step, a property owner/agent logs into an appropriate user interface (step 394). 3-D models or images may then be uploaded or constructed (step 396). A decision may be made as to whether an interior designer or other 3-D artist needs to become involved (step 318). If so, a UI is provided for the interior designer/3-D artist (step 402). The same receives or downloads images/3-D models, as well as dimensions and requirements (step 404). The interior designer or 3-D artist then constructs 3-D models of the property (step 406).
[000195] For furniture or home decor, existing product models may be employed (step 408), or new ones made or uploaded (step 412). The property may then be displayed with furniture and home decor (step 414). A payment engine may be employed (step 416), such that the interior designer or 3-D artist is enabled to receive payment (step 418).
[000196] In one implementation, when a property owner sells a house, he or she can upload the image of the house with appropriate dimensions so that an interior designer or 3-D artist can make a model of the house or room and then decorate the same with furniture and home decor products. Potential buyers can compare the property in an original condition versus a newly decorated property as the same appears in 3-D or VR. This is especially useful if the original property is in a poor condition. Potential buyers can see the potential of the property through 3-D or VR, thus increasing the confidence of buying the property. Buyers can also purchase furniture or home decor products through online transactions as may be connected to the 3-D or VR visualization, or through the interior designer.
[000197] Property owners may also use a 3-D camera to obtain a 3-D model of the house or room, or obtain the blueprint of the house from city records so that interior designers or 3-D artists can build a 3-D model from the blueprint.
[000198] More passive experiences associated with real estate include real estate purchases and rentals, and traversing and exploring properties associated therewith.
[000199] Variations will be understood. For example, while the above describes various home purchase implementations, users may also walk through a virtual model of an apartment or other rental. Using AR, users may be enabled to visualize their own furniture or fixtures in the virtual environment of a house or apartment that they are interested in.
[000200] In a typical implementation, a subject model may be measured against a target model, where the models are generally meshes with appropriate materials, textures, and shading. In any case, such comparison allows users to perform virtual walk-throughs of properties, or to perform a physical walk-through with significantly enhanced information through an AR interface.
[000201] In social applications, friends may be invited to a conference conducted in VR, or the friends may go shopping or otherwise meet together. Chat functionality, voice functionality, and videoconferencing functionality may be employed and/or leveraged to allow communications between such friends. Connections to social networks may be had through an appropriate API, e.g., to
Facebook, Google+, Twitter, LinkedIn, and so on.
[000202] Competitions may be made accessible to VR viewers using the technology, such competitions including sporting events, videogame competitions, social events including parties, and so on.
[000203] Education / Classroom Learning
[000204] In the field of education or classroom learning, and referring to Fig. 35, in a first step, an educator or other creator may select classrooms or buildings in 3-D or VR (step 790). Alternatively, the same may upload 3-D models of a classroom or school (step 792). Similarly, classroom furniture, lab tools, equipment, or decorations may be selected or uploaded to the classroom (step 796).
[000205] In an alternative implementation, the creator or educator may upload an image of the classroom or school (step 794), and a 3-D generation engine may create a 3-D model of the same (step 798). Alternatively, 3-D artists may create a model by hand (step 802).
[000206] However the 3-D model is created, items may be moved to arrange or decorate the virtual environment (step 804). Educational materials may then be selected and/or uploaded (step 806). Such educational materials may include textbooks and other texts, videos, 3-D models, and so on. An appropriate payment system may be in place to compensate artists and other content creators (step 808).
[000207] Students may then enter the virtual classroom and purchase materials (step 810). Other steps the students may take include watching videos, interacting with educators, performing projects, interacting with each other, assembling or disassembling objects, virtually traveling to ancient times or remote places to experience historic events, watching movies, watching 3-D animations, and so on.
[000208] Figure 36 is a flowchart 1800 illustrating an educational or classroom implementation of augmented reality. In a first step, educators and students may enter a classroom with mobile devices (step 812). Appropriate images may be provided for the mobile devices to scan (step 814), so as to allow the mobile device, educators, and students to register their presence. Various educational material may then be displayed on the mobile device (step 816). It is noted here, particularly in this implementation but also in others, that the mobile device may be a laptop computer, tablet computer, or smartphone. On the same, students may provide various data, e.g., choices, during class (step 818). Such activities may also include, e.g., reviewing detailed information, taking tests and quizzes, voting on a subject, conducting a discussion on the subject, providing comments, doing experiments, interacting with 3-D objects, watching 3-D animations to learn additional details about a subject, and achieving hands-on experience with the subject.
[000209] Target meshes in this implementation may generally relate to the virtual environment, classroom, lab, and so on, and subject meshes may typically pertain to student avatars or the like. Significant educational leverage may be gained by employing social and communications engines as described, including, e.g., voice, video conferencing, and so on.
[000210] Variations will be understood. In addition to lectures, labs may be performed, experiments and other projects or constructions may be built or disassembled, and so on. Educators or creators may be enabled to import premade 3-D models or to construct virtual classrooms or labs for instruction in 3-D or VR, e.g., in a network environment, in the cloud, and so on. Course content may be constructed and delivered to users, i.e., students, and such course content may include text, voice, images, video, and so on. Private or public lessons may be configured, and the course may be managed at a system level, local network level, or in the cloud.
[000211] Travel/Entertainment
[000212] Various travel-related applications may include those pertaining to venturing, entertainment, and so on. For example, users may travel to a far-off location, or may just go to a virtual arcade to play games or to a virtual movie theater with friends or family to watch a movie. Before traveling, users can enter a resort or hotel room, e.g., to determine if the same would be appropriate for a physical trip, at least where the hotel room is configured to be an accurate representation of the hotel or resort offerings.
[000213] Such virtual travel or adventure experiences may allow viewers to virtually travel to locations and destinations which they are interested in, or which they are unable to physically travel to. Such experiences can be used to learn about destinations, and so on.
[000214] Friends may virtually travel together, and during travels, virtual theaters or game arcades may be visited, and so on. Other implementations will be understood, including those employing social networks.
[000215] In a particular implementation, a travel agent may select or load images/3-D models of a travel destination. If images are loaded, the 3-D model generation engine may generate 3-D images or models. 3-D artists may then construct 3-D models if desired, and the same may be paid
appropriately. 3-D models or images of travel destinations may also be displayed for the user to view. Users may purchase travel packages using 3-D, VR, AR, or the like.
[000216] Figure 33 illustrates a flowchart 1650 according to present principles. In the flowchart 1650, a travel application in 3-D or VR is illustrated. In a first step, a travel agent may select an existing destination 3-D model (step 758). In another implementation, or as an additional aspect of the first implementation, the travel agent or company may upload a destination 3-D model (step 760).
[000217] In either of these implementations, a user may review or purchase the travel service (step 768), invoking a payment system if necessary (step 774).
[000218] In yet another implementation, which may further be in addition to steps 758 or 760, a travel agent or company may upload a destination image (step 762). The image may be converted to a 3-D model by the 3-D model engine (step 764). Alternatively, a 3-D model may be constructed by hand from the image (step 766). The outputs of the 3-D model engine are travel destination models (step 770), which may be included in the travel destination models that the user reviews in step 768. If the 3-D model is made by hand, the 3-D model artist may employ an appropriate UI to make, download, or upload the desired model (step 772). Flow may continue with step 770.
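A minimal sketch of the branching of flowchart 1650 follows. The function names and the placeholder model representations are assumptions, as the specification does not prescribe any particular implementation of the model engine or the artist path.

    def destination_model(source, kind):
        # Steps 758/760: an existing or uploaded model passes through unchanged.
        if kind in ("existing", "uploaded"):
            return source
        if kind == "image-auto":              # steps 762/764
            return generate_3d_from_image(source)
        if kind == "image-manual":            # steps 762/766, via the artist UI (772)
            return artist_builds_model(source)
        raise ValueError(f"unknown source kind: {kind}")

    def generate_3d_from_image(image):
        # Placeholder for the 3-D model generation engine (step 764).
        return {"model": "auto-generated", "from": image}

    def artist_builds_model(image):
        # Placeholder for the hand-built path (steps 766/772).
        return {"model": "hand-built", "from": image}

    # Step 770: the outputs feed the destination models the user reviews (step 768).
    catalog = [destination_model("paris.jpg", "image-auto")]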
[000219] Figure 34 illustrates a flowchart 1700 corresponding to a travel application using augmented reality. In a first step, a user with a mobile device travels to a desired location or destination (step 776). The AR app may identify the location through GPS or other wireless location determination systems and technology (step 778). The AR app may present several options for the user to select from, including: location information, lodging information, eatery information, gas information, car information, or other promotions (step 780). The user may then select information to view (step 782). Alternatively, or in addition, the user may be directed to the location through GPS (step 784). The user may also employ the AR app to book the desired option (step 786), e.g., to make a reservation at a displayed lodge. As before, the transaction may conclude with payment if necessary (step 788).
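The option-selection and booking flow of flowchart 1700 may be sketched as follows. The location lookup, information fetch, and booking calls are illustrative stand-ins; no real GPS or payment API is invoked.

    OPTIONS = ["location", "lodging", "eatery", "gas", "car", "promotions"]  # step 780

    def resolve_location(gps_fix):
        # Step 778: identify the location through GPS or other wireless means.
        return {"name": "Example Lodge", "coords": gps_fix}

    def fetch_info(location, choice):
        # Step 782: the user selects information to view.
        assert choice in OPTIONS
        return f"{choice} information near {location['name']}"

    def place_booking(location, choice):
        # Steps 786/788: book the desired option, concluding with payment if needed.
        return {"status": "reserved", "at": location["name"], "for": choice}

    def ar_travel_session(gps_fix, choice, book=False):
        location = resolve_location(gps_fix)
        return place_booking(location, choice) if book else fetch_info(location, choice)

    print(ar_travel_session((46.2, 7.5), "lodging", book=True))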
[000220] In other implementations related to entertainment, an entertainment company or individual can build a 3-D or VR themed area such as an adventure land, arcade, stage, or theater, separately or in a 3-D VR mall or shopping center. The user of a mobile device can obtain various entertainment options and may review such in 3-D or VR for trial or purchase. The entertainment theme owner can attract users by automatically sending invitations to watch or play various content items. In the case of a VR shopping area or mall, such may be offered to users or viewers "passing by". Following the trial, if desired, users can pay for the entertainment content by an appropriate payment mechanism.
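The automatic "passing by" invitations described above may be sketched as follows; the distance threshold and the coordinate representation are assumptions made for illustration.

    def invite_passersby(theme_position, viewer_positions, radius=10.0):
        # Viewers within `radius` of the themed area are "passing by" and
        # receive an invitation to watch or play.
        invited = []
        for viewer_id, (x, y) in viewer_positions.items():
            dx, dy = x - theme_position[0], y - theme_position[1]
            if (dx * dx + dy * dy) ** 0.5 <= radius:
                invited.append(viewer_id)
        return invited

    print(invite_passersby((0.0, 0.0), {"ann": (3.0, 4.0), "bob": (50.0, 0.0)}))
    # -> ['ann']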
[000221] Variations will be understood. For example, images or 3-D models of hotel rooms or resorts may be imported into the 3-D or VR system or cloud. 2-D images may be converted to 3-D models and imported into the system or cloud using a 3-D model engine as described above, such that hotel and resort owners can build example rooms or houses more easily. 3-D images may also be employed for this purpose, as taken from a 3-D camera.
[000222] Entertainers may be enabled to import video to the 3-D or VR cloud or to a server. Video may be streamed for live performances to the VR or 3-D cloud or server. Entertainers may be enabled to import or build custom 3-D or VR theaters.
[000223] Product Or Service Integration Into 3-D Or VR Environments Or Game Applications
[000224] To integrate a product or service into a 3-D or VR environment, such as a game or other application, product or service information may be obtained from manufacturers, retailers, service providers, and so on. If there is no 3-D model representative of the desired product or service, a 3-D artist may be requested to make a model, or the creator (the product or service provider) may choose a premade model from a library. Once obtained, the product or service 3-D model may be constructed and used by designers or game developers. In the case of a game, a game player may see the product or service 3-D model and subsequently purchase products or arrange for services. The transaction may be initiated by the game owner, who may then send the order information to the product or service owner. Payment may be made to various entities, including the game owner, the game developer, and the 3-D artist.

[000225] In a particular implementation, as shown by the diagram 900 of Fig. 37, various entities 422 desiring to allow for commercial transactions within a game may access an appropriate API as an initial step in providing service information in a virtual environment. In Fig. 37, entities 422 generally provide services, and entities 426 generally provide products, although it will be understood that in a given implementation both products and services may be provided by a single entity. In some cases, the product or service provider may already have a game-ready 3-D model, and the same may be provided to the game maker 428 through what is termed interface III. Another interface, interface I or interface II, may be accessed by the service or product provider, respectively, as a portal to a designer or 3-D artist 424. The designer or 3-D artist may then create an appropriate model for the product or service and provide the same to the game maker 428 through interface III. Once the game maker 428 has obtained the necessary models, the game 432 can be created, and a game player 434 may play the game and purchase products or services within the game, with orders for products or services being routed to manufacturers 426 or service entities 422, respectively.
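The routing among interfaces I, II, and III of Fig. 37 may be sketched as follows. The class and function names are assumptions, and the designer/3-D artist step is represented by a placeholder.

    def artist_model(item):
        # Placeholder for the designer/3-D artist step (424 in Fig. 37).
        return {"model_of": item, "game_ready": True}

    class IntegrationHub:
        def __init__(self):
            self.game_ready = []   # models delivered to the game maker (interface III)

        def submit(self, provider_kind, item, has_model):
            if has_model:
                # A game-ready model goes straight to interface III.
                self.game_ready.append({"model_of": item, "game_ready": True})
            elif provider_kind == "service":     # interface I: service provider portal
                self.game_ready.append(artist_model(item))
            elif provider_kind == "product":     # interface II: product provider portal
                self.game_ready.append(artist_model(item))

    hub = IntegrationHub()
    hub.submit("product", "running shoe", has_model=False)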
[000226] The flowchart 950 of Fig. 38 illustrates additional details of interface I. In a first step, a retailer or other service provider accesses an appropriate UI to configure or access the API of a system or method according to present principles, i.e., the system providing the VR, AR, or 3-D functionality (step 436). Product or service information is loaded, including images, manually or through an API (step 438). A designer or 3-D artist UI 442 may then be configured and used to allow the designer or 3-D artist to choose the product/service and make 3-D models accordingly (step 444). The designer or 3-D artist UI may then be employed to upload 3-D models to game makers for their use (step 448). Payments may be provided to designers or 3-D artists at various points in the process, as indicated in the figure.
[000227] Fig. 39 is a flowchart 1000 of exemplary steps taken by interface II. In a first step, a manufacturer accesses or employs a UI to begin the process of providing product information (step 452). Product information, including 3-D models and images, may then be loaded, manually or through an API (step 454). A designer or 3-D artist UI 456 may then be employed to choose the product and prepare 3-D models, in a way appropriate for game engine importation, based on the product information (step 458). The 3-D models may then be uploaded to game makers (step 464). Payment may be made as indicated.
[000228] Figure 40 illustrates a flowchart 1050 indicating exemplary steps taken by interface III. In particular, in interface III, a game maker UI 468 may be accessed and game ready 3-D models displayed, e.g., on a webpage (step 466). Game makers may then download the desired 3-D models (step 472), and integrate the same into games (step 474). Such models may then be linked with a backend transaction system, e.g., a shopping cart system (step 476). Orders from users of such products or services associated with 3-D models may be transmitted to product or service providers 478. In some cases, referral or advertising payments may be sent to the game maker.
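A minimal sketch of linking a placed 3-D model with a backend shopping-cart system (step 476) and routing orders to providers follows. The class, its fields, and the referral rate are illustrative assumptions.

    class CartBackend:
        def __init__(self):
            self.links = {}        # model ID -> (provider, price)
            self.referrals = 0.0   # accrued referral/advertising payments

        def link(self, model_id, provider, price):
            # Step 476: tie a placed 3-D model to a shopping-cart entry.
            self.links[model_id] = (provider, price)

        def purchase(self, model_id, referral_rate=0.05):
            # Orders are routed to the product or service provider (478),
            # with a referral payment accruing to the game maker.
            provider, price = self.links[model_id]
            self.referrals += price * referral_rate
            return {"order_to": provider, "amount": price}

    cart = CartBackend()
    cart.link("sofa-01", "Acme Furniture", 499.0)
    print(cart.purchase("sofa-01"))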
[000229] In another variation, 3-D models may be automatically configured and programmed. In more detail, objects inside 3-D models may contain special attributes, which may be used to add features and functionalities to the 3-D models, so that users/players can interact with them. Different types of attributes may be provided, e.g., attributes to identify avatars, products, ad banners, decoration items, and so on. The program may read the 3-D models and parse the attributes for each 3-D model object, assigning functionality based on the attributes. The functionality can then communicate with transaction (e.g., commerce) APIs, interface with ad providers, and conduct other types of internal and external communications. For example, and referring to the flowchart 1100 of Fig. 41, a system may start with a 3-D model (step 482). Various attributes may be assigned to the model (step 484). The model may be uploaded to the game engine, which may automatically check for model attributes (step 486). The system may automatically apply programming to the 3-D model based on the attributes (step 488). In this way, 3-D models obtain various features and functions (step 492).
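The attribute-driven configuration of flowchart 1100 may be sketched as follows. The attribute vocabulary and the attached behaviors are assumptions, as the specification leaves these open.

    HANDLERS = {
        "product":    lambda obj: obj.update(on_click="open_cart"),
        "ad_banner":  lambda obj: obj.update(on_view="log_impression"),
        "avatar":     lambda obj: obj.update(controllable=True),
        "decoration": lambda obj: obj.update(interactive=False),
    }

    def configure_model(model_objects):
        # Step 486: parse each object's attribute; step 488: apply programming.
        for obj in model_objects:
            handler = HANDLERS.get(obj.get("attr"))
            if handler:
                handler(obj)
        return model_objects   # step 492: models now carry features and functions

    scene = [{"name": "shoe", "attr": "product"},
             {"name": "wall_ad", "attr": "ad_banner"}]
    print(configure_model(scene))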
[000230] The system and method may be fully implemented in any number of computing devices. Typically, instructions are laid out on computer-readable media, generally non-transitory, and these instructions are sufficient to allow a processor in the computing device to implement the method of the invention. The computer-readable medium may be a hard drive or solid-state storage having instructions that, when run, are loaded into random access memory. Inputs to the application, e.g., from the plurality of users or from any one user, may be by any number of appropriate computer input devices. For example, users may employ a keyboard, mouse, touchscreen, joystick, trackpad, other pointing device, or any other such computer input device to input data relevant to the calculations. Data may also be input by way of an inserted memory chip, hard drive, flash drive, flash memory, optical media, magnetic media, or any other type of file-storing medium. The outputs may be delivered to a user by way of a video graphics card or integrated graphics chipset coupled to a display that may be seen by a user. Alternatively, a printer may be employed to output hard copies of the results. Given this teaching, any number of other tangible outputs will also be understood to be contemplated by the invention. For example, outputs may be stored on a memory chip, hard drive, flash drive, flash memory, optical media, magnetic media, or any other type of output medium. It should also be noted that the invention may be implemented on any number of different types of computing devices, e.g., personal computers, laptop computers, notebook computers, netbook computers, handheld computers, personal digital assistants, mobile phones, smart phones, tablet computers, and also on devices specifically designed for these purposes. In one implementation, a user of a smart phone or Wi-Fi-connected device downloads a copy of the application to their device from a server using a wireless Internet connection. An appropriate authentication procedure and secure transaction process may provide for payment to be made to the seller. The application may download over the mobile connection, or over the Wi-Fi or other wireless network connection. The application may then be run by the user. Such a networked system may provide a suitable computing environment for an implementation in which a plurality of users provide separate inputs to the system and method. In the described system, where creation and consumption of virtual environments including target and subject meshes are contemplated, the plural inputs may allow plural users to input relevant data at the same time.

Claims

1. A method of configuring a server to provide a media experience, comprising:
a. on a first server, providing a first user interface operable to allow a creator to construct a virtual environment, the virtual environment including at least one target mesh;
b. on the first server, or on a second server in network communication with the first server, providing a second user interface operable to allow a first viewer to log on to the virtual environment and move through and interact with the virtual environment in a viewing session;
c. wherein the second user interface is further operable to allow the first viewer to cause an invitation to be sent to a second viewer to share the viewing session;
d. wherein upon acceptance of the invitation, the second viewer is enabled to share the viewing session and move through and interact with the virtual environment along with the first viewer.
2. The method of claim 1, wherein on the first server or on the second server, the second user interface is further operable to allow the first viewer to construct a subject mesh for use in the virtual environment.
3. The method of claim 2, wherein the subject mesh is an avatar.
4. The method of claim 2, wherein the subject mesh is a virtual environment.
5. The method of claim 4, wherein the subject mesh is a room or building.
6. The method of claim 1, wherein upon acceptance of the invitation, the second viewer is presented with the second user interface.
7. The method of claim 6, wherein the second user interface is further operable to allow the second viewer to construct a subject mesh for use in the virtual environment.
8. The method of claim 2, wherein the second user interface is further operable to allow the first viewer to move the subject mesh relative to the target mesh.
9. The method of claim 1, wherein the target mesh has metadata associated therewith, the metadata indicating a scale.
10. The method of claim 9, wherein the subject mesh has metadata associated therewith, the metadata indicating a scale, and wherein the target mesh and the subject mesh are configured to have the same scale, whereby a size and appearance of models in the subject mesh may be correctly displayed against the target mesh.
11. The method of claim 1, wherein the first viewer accesses the virtual environment using a virtual reality device or an augmented reality device.
12. The method of claim 11, wherein the second viewer accesses the virtual environment using a virtual reality device.
13. A non-transitory computer readable medium, comprising instructions for causing a computing environment to perform the method of claim 1.
14. A method of providing a media experience, the media experience modifiable on a server side by a creator, comprising:
a. accessing a database, the database including data corresponding to an inventory of available items;
b. exposing a user interface, the user interface operable to allow a creator to situate items in the inventory in a virtual environment;
c. wherein the user interface is operable to allow the creator to situate the items by allowing the creator to create a 3-D model of the item or allowing the creator to import a 3-D model of the item.
15. The method of claim 14, wherein allowing the creator to create a 3-D model of the item includes allowing the creator to import an image corresponding to the item to a 3-D model generation engine.
16. The method of claim 14, further comprising exposing a user interface wherein a first viewer can access the virtual environment.
17. The method of claim 16, wherein the exposed user interface for viewer access of the virtual environment further allows the first viewer to invite a second user to access the virtual environment.
PCT/US2016/034663 2015-05-28 2016-05-27 Graphical processing of data, in particular by mesh vertices comparison Ceased WO2016191685A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201562167665P 2015-05-28 2015-05-28
US62/167,665 2015-05-28
US201562237090P 2015-10-05 2015-10-05
US62/237,090 2015-10-05

Publications (1)

Publication Number Publication Date
WO2016191685A1 (en) 2016-12-01

Family

ID=57393369

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/034663 Ceased WO2016191685A1 (en) 2015-05-28 2016-05-27 Graphical processing of data, in particular by mesh vertices comparison

Country Status (2)

Country Link
TW (1) TW201710871A (en)
WO (1) WO2016191685A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767579A (en) * 2017-11-20 2018-03-06 深圳市共享维啊科技有限公司 VR racks and its shared special rent method based on Internet of Things and cloud computing
CN108958945A (en) * 2018-07-27 2018-12-07 三盟科技股份有限公司 A kind of AR teaching resource processing method and system based under cloud computing environment
CN109191266A (en) * 2018-09-20 2019-01-11 叶昆联 The shared business model of tourist translation terminating machine
US11398079B2 (en) * 2020-09-23 2022-07-26 Shopify Inc. Systems and methods for generating augmented reality content based on distorted three-dimensional models
CN115437490A (en) * 2021-06-01 2022-12-06 深圳富桂精密工业有限公司 Interaction method, device, server and computer-readable storage medium
US12361476B2 (en) * 2022-07-29 2025-07-15 Ncr Voyix Corporation Augmented reality order assistance

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110072367A1 (en) * 2009-09-24 2011-03-24 etape Partners, LLC Three dimensional digitally rendered environments
US20130016090A1 (en) * 2011-07-15 2013-01-17 Disney Enterprises, Inc. Providing a navigation mesh by which objects of varying sizes can traverse a virtual space

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107221066A (en) * 2017-05-16 2017-09-29 嘉兴市天篷农业休闲有限公司 A kind of Tourist Experience strengthens AR systems
US11379908B2 (en) * 2020-05-11 2022-07-05 Meta Platforms, Inc. Three-dimensional reconstruction of a product from content including the product provided to an online system by users
US11276103B2 (en) 2020-06-30 2022-03-15 Meta Platforms, Inc. Evaluating presentation of products offered by a publishing user based on content items provided to an online system by other users
WO2022246425A1 (en) * 2021-05-19 2022-11-24 Snap Inc. Ar-based connected portal shopping
WO2022246422A1 (en) * 2021-05-19 2022-11-24 Snap Inc. Vr- based connected portal shopping
WO2022246426A1 (en) * 2021-05-19 2022-11-24 Snap Inc. Customized virtual store
US11580592B2 (en) 2021-05-19 2023-02-14 Snap Inc. Customized virtual store
US11636654B2 (en) 2021-05-19 2023-04-25 Snap Inc. AR-based connected portal shopping
US11941767B2 (en) 2021-05-19 2024-03-26 Snap Inc. AR-based connected portal shopping
US11978112B2 (en) 2021-05-19 2024-05-07 Snap Inc. Customized virtual store
US12062084B2 (en) 2021-05-19 2024-08-13 Snap Inc. Method, system, and machine-readable storage medium for VR-based connected portal shopping
WO2024254210A1 (en) * 2023-06-05 2024-12-12 Khorsandi Marilyn R Apparatus, systems and methods for creating, activating, displaying in a computer display, and interacting with, a virtual three-dimensional digital analog to traditional physical shopping in a physical marketplace

Also Published As

Publication number Publication date
TW201710871A (en) 2017-03-16

Similar Documents

Publication Publication Date Title
US10846937B2 (en) Three-dimensional virtual environment
WO2016191685A1 (en) Graphical processing of data, in particular by mesh vertices comparison
US12393908B2 (en) Apparatus and method of conducting a transaction in a virtual environment
US11249714B2 (en) Systems and methods of shareable virtual objects and virtual objects as message objects to facilitate communications sessions in an augmented reality environment
US10967255B2 (en) Virtual reality system for facilitating participation in events
US20140214629A1 (en) Interaction in a virtual reality environment
US20130325647A1 (en) Virtual marketplace accessible to widgetized avatars
US20220245716A1 (en) Non-transitory computer readable medium storing virtual store management program and virtual store management system
Sekhavat KioskAR: an augmented reality game as a new business model to present artworks
US20160048908A1 (en) Interactive computer network system and method
KR20210017516A (en) Furniture customization service system
CN104504576A (en) Method for achieving three-dimensional display interaction of clothes and platform
Bug et al. The future of fashion films in augmented reality and virtual reality
Barbara et al. Extended store: How digitalization effects the retail space design
Bafadhal et al. Does virtual hotel shifting realities or just daydreaming? A wake-up call
KR102536983B1 (en) Method and system for providing AR-based advertising platform using GPS and barometric pressure
KR102502116B1 (en) Virtual environment providing system based on virtual reality and augmented reality
Treub Enhancing the Shopping Experience in Flooring Specialty Stores with Phygital Design
Swaminathan Phygital Experiences
Kamble et al. Metaverse Marketplace Development
Arshad et al. Apparel Arena
Salem et al. Demonstrate the perception of Metaverse fashion market in-to Gen-Z and its impact on retail stores’ design
Grancharova The Democratization of AR: Prospects for the Development Of an Augmented Reality App in the Realm of Spatial Design
MCDOUGALL Digital Tools
Jackson Virtual flagships

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16800801

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16800801

Country of ref document: EP

Kind code of ref document: A1