
CN117788764A - Method for meta-universe incubator based on WebGL - Google Patents


Info

Publication number
CN117788764A
Authority
CN
China
Prior art keywords
scene, model, user, webgl, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311827426.XA
Other languages
Chinese (zh)
Inventor
辛忠
林帅
张悦
常靓
李捷明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Software Technology Co Ltd
Original Assignee
Inspur Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2023-12-28
Publication date: 2024-03-29
Application filed by Inspur Software Technology Co Ltd
Priority to CN202311827426.XA
Publication of CN117788764A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for a WebGL-based metaverse incubator, belonging to the technical field of 3D modeling. WebGL handles the low-level graphics rendering of the system, Three.js wraps WebGL for developers to call, and the Ammo.js library handles collision detection and physical simulation of the virtual world. The user needs no professional technical skill: by opening the service page in a browser, the user can get started immediately and build a metaverse scene to their liking.

Description

Method for meta-universe incubator based on WebGL
Technical Field
The invention relates to the technical fields of WebGL, Three.js, Ammo.js, 3D modeling and the like, and in particular to a method for a WebGL-based metaverse incubator.
Background
Building a 3D metaverse is a complex and enormous task that requires mastering a variety of techniques, including 3D modeling and rendering, virtual reality (VR) and augmented reality (AR) technology, network communication, and database management. For non-technical professionals, learning and applying these techniques can take a significant amount of time and effort. Creating rich, realistic 3D scenes and objects demands a high level of 3D design and artistic ability; it requires knowledge of fine art, design, and animation, and may require hiring professional 3D artists and animators to create high-quality content. There is also a need to provide a smooth, natural interactive experience that lets users communicate and interact effectively with the environment and with other users. Developing and implementing these capabilities can require a significant investment of people and time.
Disclosure of Invention
To solve the above technical problems, the invention provides a convenient and fast WebGL-based metaverse incubator method, which aims to let users build their own metaverse platform simply and quickly, shielding them from many of the underlying technical details and saving time and development cost.
The technical scheme of the invention is as follows:
a meta-space incubator method based on WebGL is characterized in that WebGL is responsible for graphic rendering of a system bottom layer, threeJS packages WebGL for a developer to call, and an amo library processes collision detection and physical simulation of a virtual world.
The user does not need to have professional technical capability, and can operate by hand by opening the service page in the browser, and build a favorite meta-universe scene.
Further, the method comprises the following steps:
1) Package the production code: navigate to the project root directory on the command line and run the build command to generate a code package for production use;
2) Configure the server: upload the generated static-file 'dist' directory to the server; the folder may be uploaded using FTP or SCP (illustrated in the sketch below);
3) Select a Web server: choose an appropriate Web server on the host used to serve the application. The Web server may be Apache, Nginx or Caddy; an appropriate one can be selected according to the requirements and configuration.
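By way of illustration only, steps 1) and 2) might look like the following on the command line; the npm-based build, the host name and the target path are assumptions rather than part of the disclosure:

    # Assumes an npm-based project; the actual build command depends on the toolchain.
    npm run build                                   # writes the production bundle to dist/
    # Upload the generated static files; host and path are placeholders.
    scp -r dist/ user@example.com:/var/www/metaverse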
Still further,
a three.js library file is introduced and a 3D scene is created, which can be realized by creating a THREE.Scene object; the scene is a container holding all 3D objects and is used for managing and displaying the objects, lighting and camera elements in the scene.
A camera is created to define the viewing angle and field of view in the scene; different types of cameras, such as perspective or orthographic cameras, can be created, and their position and projection properties set.
A renderer is created to render the 3D scene onto the HTML page: a WebGL-based renderer object is created, its size is set to match the screen size, and it is added to an element in the document; an external scene model file is then loaded for model creation.
Still further,
after the scene is loaded, scaling, rotation and translation of the model, or appearance operations such as adding textures and modifying the model, are realized by uploading a user-defined model;
comprising the following steps:
The user is allowed to change the position of the model in the scene by dragging it or using a translation tool; this may be accomplished by capturing mouse-movement or touch events.
The user is allowed to rotate the model with a mouse drag or touch gesture; this may be accomplished by obtaining the rotational increments of the mouse or touch events and applying corresponding updates to the model's rotation properties.
To provide a more intuitive operating experience, the use of interactive controllers may be considered.
The user is allowed to alter the scale of the model with the mouse wheel, a touch gesture, or a zoom tool; during the operation, the model's scaling attribute is adjusted to achieve a zoom-in or zoom-out effect.
The user is allowed to modify the appearance of the model; by updating material properties, the modified appearance is presented in real time.
Still further,
User interaction
Several different interaction modes are provided, including picture, video, hyperlink, text, model and audio display types, meeting users' needs for diversified interactive display.
In click-trigger mode, clicking an object in the scene directly presents the display type the user configured; in area mode, no user operation is required and the display is triggered automatically when the user approaches.
Still further,
Load roaming
After scene editing is completed, the user can enter the scene to truly experience and interact with the metaverse scene just created; at this stage, the physical rules and interactions of the real world must be simulated so that the user can perceive and manipulate objects in the virtual environment; the user can control a character to perform various actions, reflecting all the actions a human character should have.
The beneficial effects of the invention are as follows:
Through online configuration, the complex technical concepts of the metaverse are turned into a simple tool that is easy to get started with, giving free rein to the imagination of the general public. It activates digital-space marketing scenarios and creates an interactive metaverse digital exhibition hall, letting users integrate their own products into the currently trending metaverse for a marketing experience with greater visual impact. Its strong 3D editing capability supports personalized configuration of materials, animations, cameras, lighting and more, enabling efficient authoring of metaverse content such as XR/3D scenes, digital humans and virtual exhibition halls. A cache-loading technique is adopted, helping brands digitally upgrade their promotion, marketing and services.
Drawings
Fig. 1 is a schematic diagram of the workflow of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
WebGL is a JavaScript-based graphics library that can run 3D graphics and rendering in a Web browser. It is mainly suited to creating interactive, browser-based graphics applications, and therefore has natural potential for the metaverse. WebGL can use the computer's graphics processing unit (GPU) for high-performance graphics rendering, which is a great advantage for creating realistic virtual worlds and handling complex 3D scenes. Moreover, WebGL is part of the Web technology stack and thus runs on a variety of operating systems and devices without installing any plug-ins, so WebGL-based metaverse applications can be accessed across platforms, whether on desktop or mobile devices. As a powerful graphics library, WebGL provides a viable way to build the metaverse.
The invention provides a method for a WebGL-based metaverse incubator, whose specific implementation process is as follows (Fig. 1):
1) Scene building
The three.js library file is introduced and a 3D scene is created by creating a THREE.Scene object. The scene is a container holding all 3D objects, used for managing and displaying the objects, lighting, cameras, etc. in the scene. A camera is created to define the viewing angle and field of view in the scene; different types of cameras, such as perspective or orthographic cameras, can be created, and their position and projection properties set. A renderer is created to render the 3D scene onto the HTML page: a WebGL-based renderer object is created, sized to match the screen, and added to an element in the document. External scene model files (such as OBJ or GLTF) are then loaded for model creation.
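A minimal Three.js sketch of this scene-building step is given below; the import paths follow the conventional three.js examples layout, and the model URL is a placeholder:

    // Minimal scene-building sketch (model URL is a placeholder).
    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();                 // container for all 3D objects

    // Perspective camera defining the viewing angle and field of view.
    const camera = new THREE.PerspectiveCamera(
      75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 2, 5);

    // WebGL-based renderer, sized to match the screen and attached to the document.
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // Load an external scene model file (GLTF here) for model creation.
    new GLTFLoader().load('models/scene.glb', (gltf) => scene.add(gltf.scene));

    function animate() {                             // render loop
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    }
    animate();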
2) Scene editing
After the scene is loaded, users can exercise their imagination, lay out the scene, and, by uploading user-defined models, scale, rotate and translate the models or add textures and modify their appearance. The user is allowed to change a model's position in the scene by dragging it or using a translation tool; this may be accomplished by capturing mouse-movement or touch events. The user is allowed to rotate the model with a mouse drag or touch gesture; this may be accomplished by obtaining the rotational increments of the mouse or touch events and applying corresponding updates to the model's rotation properties. To provide a more intuitive operating experience, interactive controllers may be used. The user is allowed to alter the model's scale with the mouse wheel, a touch gesture, or a zoom tool; during the operation, the model's scaling attribute is adjusted to achieve a zoom-in or zoom-out effect. The user is allowed to modify the model's appearance, such as changing its color, applying textures, or shader effects; by updating material properties, the modified appearance is presented in real time.
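One way to realize these editing operations, sketched here with Three.js's TransformControls as the interactive controller (the variable model stands for the user-uploaded mesh and is illustrative), is:

    // Editing sketch: translate/rotate/scale via an interactive controller.
    import { TransformControls } from 'three/examples/jsm/controls/TransformControls.js';

    const transform = new TransformControls(camera, renderer.domElement);
    transform.attach(model);             // model: the user-uploaded mesh
    scene.add(transform);                // older releases; newer ones add transform.getHelper()

    transform.setMode('translate');      // drag to change position
    // transform.setMode('rotate');      // or rotate by dragging
    // transform.setMode('scale');       // or scale by dragging

    // Mouse-wheel zoom: adjust the model's scaling attribute directly.
    renderer.domElement.addEventListener('wheel', (e) => {
      model.scale.multiplyScalar(e.deltaY < 0 ? 1.1 : 0.9);
    });

    // Appearance edits are shown in real time by updating material properties.
    model.traverse((obj) => {
      if (obj.isMesh) {
        obj.material.color.set('#8888ff');   // e.g. change the color
        obj.material.needsUpdate = true;
      }
    });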
3) User interaction
The 3D objects in a scene are diverse, and different objects can produce different interaction effects. To this end, multiple interaction modes are provided, including but not limited to picture, video, hyperlink, text, model and audio display types, meeting users' needs for diversified interactive display. For triggering, a click mode is provided in which clicking an object in the scene directly presents the display type the user configured; an area mode is also provided in which no user operation is required and the display is triggered automatically when the user approaches. These different interaction modes greatly enrich the scene and improve the immersion of the metaverse.
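The two trigger modes might be wired up as sketched below; showDisplay, the userData.display payload and the trigger radius are hypothetical names introduced for illustration:

    // Click mode: ray-cast from the cursor to the clicked object and present
    // whatever display type (picture, video, hyperlink, ...) was configured.
    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();

    window.addEventListener('click', (event) => {
      pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
      pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;
      raycaster.setFromCamera(pointer, camera);
      const hit = raycaster.intersectObjects(scene.children, true)[0];
      if (hit) showDisplay(hit.object.userData.display);  // showDisplay: hypothetical
    });

    // Area mode: trigger automatically when the user's avatar comes near.
    function checkAreaTriggers(avatar, triggers, radius = 2) {
      for (const t of triggers) {
        if (avatar.position.distanceTo(t.position) < radius) {
          showDisplay(t.userData.display);
        }
      }
    }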
4) Load roaming
After scene editing is completed, the user can enter the scene to truly experience and interact with the metaverse scene just created. At this stage, the physical rules and interactions of the real world must be simulated so that the user can perceive and manipulate objects in the virtual environment. For example, physical effects such as gravity, collision and friction are taken into account, so that the user can collide with, be blocked by, and interact with virtual objects as in the real world. The user can also control a character to perform various actions, reflecting all the actions a human character should have.
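The physics side might be set up roughly as below, assuming the standard Ammo.js (Bullet) bindings; synchronizing rigid-body transforms back onto the Three.js meshes is elided:

    // Physics sketch for load roaming: gravity, collisions, friction.
    Ammo().then((Ammo) => {
      const config     = new Ammo.btDefaultCollisionConfiguration();
      const dispatcher = new Ammo.btCollisionDispatcher(config);
      const broadphase = new Ammo.btDbvtBroadphase();
      const solver     = new Ammo.btSequentialImpulseConstraintSolver();
      const world      = new Ammo.btDiscreteDynamicsWorld(
        dispatcher, broadphase, solver, config);
      world.setGravity(new Ammo.btVector3(0, -9.8, 0));  // real-world gravity

      const clock = new THREE.Clock();
      (function simulate() {                             // advance physics each frame
        requestAnimationFrame(simulate);
        world.stepSimulation(clock.getDelta(), 10);
        // ...copy rigid-body transforms back onto the Three.js meshes here...
      })();
    });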
The foregoing description is only illustrative of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (9)

1. A WebGL-based metaverse incubator method, characterized in that
WebGL is responsible for the low-level graphics rendering, Three.js wraps WebGL for developers to call, and the Ammo.js library is used to handle collision detection and physical simulation of the virtual world.
2. The method according to claim 1, characterized in that
it comprises the following steps:
1) packaging the production code: navigating to the project root directory on the command line and running the build command to generate a code package for production use;
2) configuring the server: uploading the generated static-file 'dist' directory to the server;
3) selecting a Web server: choosing an appropriate Web server on the host used to serve the application.
3. The method according to claim 2, characterized in that
the folder may be uploaded to the server using FTP or SCP.
4. The method according to claim 2, characterized in that
the Web server is Apache, Nginx or Caddy, and an appropriate server can be selected according to the requirements and configuration.
5. The method according to claim 2, characterized in that
a three.js library file is introduced and a 3D scene is created, which can be realized by creating a THREE.Scene object; the scene is a container holding all 3D objects and is used for managing and displaying the objects, lighting and camera elements in the scene;
a camera is created to define the viewing angle and field of view in the scene;
a renderer is created to render the 3D scene onto the HTML page: a WebGL-based renderer object is created, its size is set to match the screen size, and it is added to an element in the document; an external scene model file is then loaded for model creation.
6. The method according to claim 5, characterized in that
different types of cameras, such as perspective or orthographic cameras, can be created, and their position and projection properties set.
7. The method according to claim 5 or 6, characterized in that, in
scene editing,
after the scene is loaded, scaling, rotation and translation of the model, or appearance operations such as adding textures and modifying the model, are realized by uploading a user-defined model;
comprising the following steps:
the user is allowed to change the position of the model in the scene by dragging it or using a translation tool, which may be accomplished by capturing mouse-movement or touch events;
the user is allowed to rotate the model with a mouse drag or touch gesture, and the model's rotation properties are updated accordingly by obtaining the rotational increments of the mouse or touch events;
interactive controllers may be used to provide a more intuitive operating experience;
the user is allowed to alter the scale of the model with the mouse wheel, a touch gesture, or a zoom tool, and during the operation the model's scaling attribute is adjusted to achieve a zoom-in or zoom-out effect;
the user is allowed to modify the appearance of the model, and by updating material properties the modified appearance is presented in real time.
8. The method according to claim 7, characterized in that, in
user interaction,
several different interaction modes are provided, including picture, video, hyperlink, text, model and audio display types, to meet users' needs for diversified interactive display;
in click-trigger mode, clicking an object in the scene directly presents the display type the user configured, and in area mode, no user operation is required and the display is triggered automatically when the user approaches.
9. The method according to claim 8, characterized in that, in
load roaming,
after scene editing is completed, the user can enter the scene to truly experience and interact with the metaverse scene just created; at this stage, the physical rules and interactions of the real world must be simulated so that the user can perceive and manipulate objects in the virtual environment; the user can control a character to perform various actions, reflecting all the actions a human character should have.

Priority Applications (1)

CN202311827426.XA — priority date 2023-12-28, filing date 2023-12-28 — Method for meta-universe incubator based on WebGL

Publications (1)

CN117788764A — published 2024-03-29

Family

ID=90381368

Family Applications (1)

CN202311827426.XA — priority date 2023-12-28, filing date 2023-12-28 — CN117788764A (pending)

Country Status (1)

CN — CN117788764A


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination