CN120803438B - Application development method, application development device and XR equipment - Google Patents
Application development method, application development device and XR equipment
- Publication number
- CN120803438B (application number CN202511300001.2A)
- Authority
- CN
- China
- Prior art keywords
- scene
- user
- model
- glasses
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The application discloses an application development method, an application development device, and an XR device, and relates to the technical field of extended reality. The method comprises: receiving, through a visual editor, a preset XR scene component dragged by a user into an XR scene template; and receiving a user's configuration of interaction logic, converting the current XR scene and the interaction logic into an executable format of the target XR glasses through an adaptation layer, and outputting an application package adapted to the target XR glasses. By combining visual development with AIGC automatic content generation, the threshold for XR content creation is lowered, development efficiency is improved, and diversified application scenarios are supported.
Description
Technical Field
The application relates to the technical field of extended reality, and in particular to an application development method, an application development device, and an XR device.
Background
In recent years, Extended Reality (XR) technology has been applied increasingly in fields such as education and training, industrial guidance, tourism, and entertainment, but XR glasses application development still faces numerous technical bottlenecks. The traditional development model is highly dependent on professional programming skills and three-dimensional art design capability: developers need to hand-write interaction logic using engines such as Unity and Unreal, and manually produce or purchase materials such as 3D models and animations, resulting in long development cycles and high costs. Although some visualization tools attempt to lower the development threshold, their functionality is limited to producing simple AR effects and cannot meet the development requirements of full XR applications, with significant shortcomings especially in cross-device adaptation and complex interaction implementation.
Another major drawback of existing XR development flows is the efficiency bottleneck of content authoring. High-quality 3D models, scene environments, and animation effects often require time-consuming production by professional designers, and it is difficult for ordinary developers to quickly obtain customized materials. Although Artificial Intelligence Generated Content (AIGC) technology can generate 3D content from text or images, its integration with XR development tools is insufficient: generated models require manual optimization of their topology and adjustment of material parameters, and therefore cannot be embedded directly into the development flow. Furthermore, the writing of interaction logic still relies on traditional coding and lacks an efficient means of automatically translating natural language intent into executable scripts, making it difficult for non-technical personnel to participate in development.
In terms of cross-platform adaptation, the hardware characteristics and running environments of different XR glasses differ significantly, and developers need to optimize rendering pipelines, interaction logic, and performance parameters separately for each device. Existing solutions typically require developers to manually adjust code or switch development engines, which not only increases workload but also easily introduces compatibility issues. Especially for scenarios mixing WebXR and native application deployment, the lack of a unified automatic adaptation mechanism makes one-time development with multi-terminal deployment difficult to achieve.
Disclosure of Invention
The embodiment of the application provides an application development method, an application development device, and an XR device, which are used to solve the above technical problems.
In one aspect, an embodiment of the present application provides an application development method, including:
Receiving, through a visual editor, a preset XR scene component dragged by a user into an XR scene template, wherein the XR scene component comprises a virtual object, interaction logic, and a UI component;
Responding to a user request, calling an AIGC model to generate XR content material, and inserting the XR content material into a specified position of the current XR scene, wherein the XR content material comprises at least one of a three-dimensional model, an animation sequence, or an interaction script;
And receiving a user's configuration of the interaction logic, converting the current XR scene and the interaction logic into an executable format of the target XR glasses through an adaptation layer, and outputting an application package adapted to the target XR glasses, wherein the configuration comprises a drag-configured trigger or an interaction rule described in natural language.
In one implementation of the present application, responding to a user request, calling an AIGC model to generate XR content material, and inserting the XR content material into a specified position of the current XR scene specifically includes:
receiving a user request under the condition that a preset basic object library does not contain an object required by a user;
analyzing the user request to extract key characteristic parameters corresponding to the three-dimensional model under the condition that the XR content material to be generated is a three-dimensional model, wherein the user request comprises natural language description, images or videos;
Responding to the user request, calling an AIGC model based on the key characteristic parameters, and generating a three-dimensional model file matching the natural language description;
Inserting the three-dimensional model file into a current XR scene, and rendering the three-dimensional model in real time at a designated position of the current XR scene;
and receiving an adjustment instruction of a user for a rendering result so as to perform initial position adjustment and scaling adjustment on the rendered three-dimensional model.
In one implementation of the present application, the method further includes:
Analyzing, in the case that the XR content material to be generated is an interaction script, the natural language sentences in which the user describes behavior logic, and determining the spatial position relations and event trigger conditions in the natural language sentences;
And generating corresponding executable code segments through a large language model, and binding the executable code segments to interaction event nodes of specified virtual objects in the visual editor to generate interaction scripts, wherein the executable code segments comprise Unity C# scripts or WebXR JavaScript scripts.
In one implementation of the present application, receiving a configuration of interaction logic by a user specifically includes:
receiving connection operation of a user to an event node and an action node through the visual editor so as to form an interactive logic chain;
In the case that the user describes an interaction rule in natural language, generating corresponding script code or visual logic nodes through an AI model.
In one implementation of the present application, converting the current XR scene and the interaction logic into an executable format of the target XR glasses through the adaptation layer and outputting an application package adapted to the target XR glasses specifically includes:
detecting an operating environment supported by the target XR glasses, wherein the operating environment comprises WebXR and a native application;
In the case that the target XR glasses support WebXR, converting the current XR scene into code based on the WebXR standard and packaging the code into an offline application package, wherein the WebXR-standard content comprises HTML5 code or WebGL code;
And, in the case that the target XR glasses require a native application, calling the SDK provided by the Unity engine to generate corresponding engineering resources and an application package.
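The adaptation-layer branch described above can be sketched as follows. This is a minimal illustration only: the device capability field `supportsWebXR` and the artifact names are assumptions made for the sketch, not part of the patent's actual interface.

```javascript
// Sketch of the adaptation layer's environment detection and build branch:
// WebXR-capable glasses get a packaged offline web bundle, otherwise a
// native build is produced via the engine SDK (e.g. Unity).
function buildForTarget(scene, device) {
  if (device.supportsWebXR) {
    // Export WebXR-standard HTML5/WebGL code and bundle it as an offline package.
    return { runtime: "webxr", artifacts: ["index.html", "scene.webgl.js"] };
  }
  // Fall back to generating native engineering resources and an application package.
  return { runtime: "native", artifacts: ["UnityProject/", "app.apk"] };
}
```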
In one implementation of the present application, the method further includes:
Starting a real-time preview mode in the editing process, and pushing a current XR scene to target XR glasses under the condition that a user is detected to be connected with the target XR glasses;
Simulating a field angle parameter of the target XR glasses based on a local lightweight rendering engine, and rendering the current XR scene;
Receiving interactive test operation of the user, and dynamically updating the space coordinate mapping relation under the condition that the user adjusts the position of the component;
recording the modified material characteristic description and re-triggering the AIGC model to generate optimized XR content materials under the condition that a modification instruction of the user on the XR content materials generated by the AIGC model is detected;
And dynamically updating the preview picture according to the optimized XR content material, and feeding back interaction delay parameters.
In one implementation of the present application, the real-time preview mode is initiated during editing, and specifically includes:
Connecting an actual target XR glasses device and synchronizing head pose data;
and projecting a rendering picture of the current XR scene on the equipment screen of the target XR glasses in real time, recording an interactive error log of a user in the testing process, and highlighting the marked abnormal nodes.
In one implementation of the present application, before converting the current XR scene and interaction logic into the executable format of the target XR glasses by the adaptation layer, the method further comprises:
Identifying unused model vertex data in the current XR scene, and compressing the unused model vertex data to perform grid simplification on the three-dimensional model;
And reducing the sampling rate of the high-resolution texture in the current XR scene according to a preset strategy so as to match the equipment computing power of the target XR glasses.
In another aspect, an embodiment of the present application further provides an XR device, the device comprising:
At least one processor;
And a memory communicatively coupled to the at least one processor;
Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform an application development method as described above.
In another aspect, an embodiment of the present application further provides an application development apparatus, where the apparatus includes:
a visualization module, configured to receive, through a visual editor, a preset XR scene component dragged by a user into an XR scene template;
a model calling module, configured to respond to a user request, call an AIGC model to generate XR content material, and insert the XR content material into the specified position of the current XR scene, wherein the XR content material comprises at least one of a three-dimensional model, an animation sequence, or an interaction script;
a scene conversion module, configured to receive a user's configuration of the interaction logic, convert the current XR scene and the interaction logic into an executable format of the target XR glasses through the adaptation layer, and output an application package adapted to the target XR glasses, wherein the configuration comprises a drag-configured trigger or an interaction rule described in natural language.
The embodiment of the application provides an application development method, an application development device and XR equipment, which at least comprise the following beneficial effects:
A user without professional programming skills can quickly build an XR scene and its interaction logic through visual drag-and-drop components and natural language interaction, effectively solving the problem that traditional XR development depends on three-dimensional graphics programming and art design capability. XR content materials such as three-dimensional models, animation sequences, and interaction scripts are generated automatically by the AIGC model, shortening manual modeling and coding work that originally took days or even weeks to the minute level and greatly accelerating the conversion from idea to prototype. By parsing natural language descriptions, matching 3D models and interaction logic are generated automatically, so developers can express creative requirements in a more intuitive way. Meanwhile, AI-generated materials can be embedded directly into the scene and adjusted in real time, avoiding the tedious cycles of repeated modification and format conversion in the traditional process. Finally, scene content is converted automatically by the adaptation layer into the executable format of the target XR glasses, solving the ecosystem fragmentation problem across different XR devices.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is an application scenario schematic diagram of an application development method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an application development method according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for generating XR content material by calling an AIGC model according to an embodiment of the present application;
Fig. 4 is a flow chart of a method for outputting an application package adapted to target XR glasses according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an internal structure of an XR device according to an embodiment of the application;
Fig. 6 is a schematic diagram of an internal structure of an application development device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The application development method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. As shown in fig. 1, the application environment may include a development terminal 101, an AIGC cloud service platform 102, an XR running engine 103, target XR glasses 104, and a resource management database 105.
The development terminal 101 is used as a main development tool for running a visual editor, providing a drag type XR scene construction interface and a natural language interaction function, and realizing real-time preview through a local lightweight rendering engine.
The AIGC cloud service platform 102 is connected to the development terminal 101 and integrates various generative AI models, including a 3D model generator, an animation sequence generator, and a code script generator, for automatically generating XR content materials according to requests from the development terminal.
The XR running engine 103 serves as the core component of the adaptation layer and is responsible for converting the edited XR scene into a format executable by the target device, supporting the WebXR standard and SDK interface conversion for mainstream XR engines.
The target XR glasses 104 are connected to the development terminal 101 through a wired or wireless manner, and are used for receiving the deployed application package and running the XR scene, and feeding back the device parameters and the interaction data to the development terminal.
The resource management database 105 stores and manages XR scene template libraries, prefabricated component libraries, AI-generated content, and user project data, supporting version control and team collaborative development. The components are interconnected by a high-speed network to form a complete XR application development, generation, adaptation, and deployment closed loop.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 2 is a flow chart of an application development method according to an embodiment of the present application.
The execution subject of the method according to the embodiment of the present application may be a terminal device or a server, which is not particularly limited in the present application. For ease of understanding and description, the following embodiments are described in detail taking a server as an example.
It should be noted that the server may be a single device, or may be a system formed by a plurality of devices, that is, a distributed server, which is not particularly limited in the present application.
As shown in fig. 2, an application development method provided by an embodiment of the present application includes:
Step 201, receiving, through a visual editor, a preset XR scene component dragged by a user into an XR scene template.
It should be noted that, in the embodiment of the present application, the XR scene component includes a virtual object, interaction logic, and UI component.
XR (Extended Reality) is a generic term for all immersive reality technologies, including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). XR emphasizes the fused interaction of virtual digital content with the real environment.
VR refers to a technology in which a computer generates an immersive, fully virtual digital environment, completely isolating the user from the real world and immersing them in a virtual scene. AR refers to a technology that superimposes digital virtual content on the real world, so that the user perceives the real environment and the virtual information superimposed on it at the same time. MR refers to a fusion of AR and VR: virtual content can be overlaid on reality and interact with real objects in real time, making virtual-real fusion tighter.
AIGC refers to content such as images, three-dimensional models, animations, and text scripts that is automatically generated using artificial intelligence (in particular, generative models). AIGC technology can automatically create the required materials or code from an AI model based on user-provided input, such as a text description or sketch.
The application discloses an AIGC-based XR glasses application development platform, which adopts a client/server (C/S) architecture comprising a user front end and a cloud service, and also has local operation capability. The user front end runs on a developer's computer or tablet, and its main function is to provide the user with a visual editing interface and a local preview function based on the visual editor; the cloud service part provides computing support for AI content generation, complex scene rendering, and the like, as well as cloud storage and collaboration for project resources.
In this embodiment, a graphical development interface is provided through the development terminal, including a what-you-see-is-what-you-get scene editing view and a component drag panel. The user can build an application by dragging prefabricated XR interaction components, such as UI controls, virtual objects, and interaction triggers, into the scene without hand-writing code. Meanwhile, the user is allowed to describe requirements in natural language; the system then parses the intent and recommends corresponding components or settings, realizing a humanized interactive design flow.
The visual editor is a GUI application with which the developer interacts directly, and has sub-interfaces such as a scene editing window, a component panel, a property setting panel, and an event flow editor. After the developer launches the visual editor, the target XR glasses device type is first selected, and the desired XR scene template is chosen in the visual editor. Then, virtual objects, UI controls, and interaction logic are dragged into the selected XR scene template in the scene window using a mouse and keyboard or a gesture device; the position and attributes of the dragged virtual object or UI control are adjusted, and simple path or area triggers are drawn. The visual editor captures user operations in real time and feeds back the display effect, such as showing an added three-dimensional model or simulating a simple interaction trigger.
In this embodiment, a project is first created and the scene is initialized. The user activates the visual editor, selects "New XR Project", enters the project name, and specifies the target device type, e.g., a particular model of AR glasses. The editor accordingly loads the corresponding device presets (e.g., field-of-view and resolution parameters) and a default empty scene. The user can select a suitable scene template from the built-in template library, such as an indoor exhibition scene or an outdoor street scene, and the platform loads the pre-built base scene environment into the editor for the user to modify.
Then, objects and interface components are added. The user drags the desired XR objects into the scene through the component panel. For example, in an AR teaching application, a user drags in a "machine device" object as the teaching subject, or drags several "3D character" objects into a game as interactive characters. The platform provides a base object library (basic geometry, light sources, cameras, UI panels, etc.) to choose from.
Step 202, in response to a user request, calling an AIGC model to generate XR content material, and inserting the XR content material into a specified position of the current XR scene.
It should be noted that, in the embodiment of the present application, the XR content material includes at least one of a three-dimensional model, an animation sequence, or an interaction script.
Fig. 3 is a flow chart of an XR content material generation method that calls an AIGC model according to an embodiment of the present application. As shown in fig. 3, the method specifically includes the following steps:
Step 301, receiving a user request under the condition that a preset basic object library does not contain an object required by a user;
Step 302, under the condition that XR content materials to be generated are three-dimensional models, analyzing a user request to extract key characteristic parameters corresponding to the three-dimensional models, wherein the user request comprises natural language description, images or videos;
Step 303, responding to the user request, calling an AIGC model based on the key characteristic parameters, and generating a three-dimensional model file matching the natural language description;
Step 304, inserting the three-dimensional model file into the current XR scene, and rendering the three-dimensional model in real time at the appointed position of the current XR scene;
step 305, receiving an adjustment instruction of a user for a rendering result, so as to perform initial position adjustment and scaling adjustment on the rendered three-dimensional model.
In this embodiment, the AIGC content generation system employs an intelligent multimodal request handling mechanism. It will be appreciated that as a user searches the base object library in the visual editor, the system monitors in real time how well the search keywords match the resources in the library. It should be noted that the base object library is stored in a graph database structure, and the relevance between the user input and existing resources is computed by a semantic similarity algorithm. Specifically, when the user searches for an "ancient telephone booth model", the system first looks for an exact match in the object library; if there is no result, it expands the search to near-synonyms such as "old telephone booth", and finally activates the AIGC generation flow after confirming that the base object library contains no match. For example, generation is triggered when the user's search returns no matching result three times in succession, or when the user actively clicks the AI-generate button. It can be appreciated that the request parsing module adopts a multimodal input processing architecture, specifically comprising a text semantic parser, an image feature extractor, and a video keyframe parser.
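The search-then-generate fallback described above can be sketched as follows. The library contents, the synonym map, and the threshold of three failed searches are illustrative assumptions for this sketch, not the patent's actual data structures.

```javascript
// Fallback flow: exact library match -> near-synonym expansion -> AIGC trigger
// after repeated misses. `state.misses` tracks consecutive failed searches.
const objectLibrary = new Set(["basic cube", "light source", "ui panel"]);
const synonyms = { "ancient telephone booth": ["old telephone booth", "vintage phone box"] };

function searchWithFallback(query, state) {
  // 1. Exact match against the base object library.
  if (objectLibrary.has(query)) return { source: "library", item: query };
  // 2. Expand the search to near-synonyms of the query.
  for (const alt of synonyms[query] || []) {
    if (objectLibrary.has(alt)) return { source: "library", item: alt };
  }
  // 3. No match: count the miss; trigger AIGC generation after three misses.
  state.misses = (state.misses || 0) + 1;
  if (state.misses >= 3) return { source: "aigc", item: query };
  return { source: "none", item: null };
}
```

In a real editor the miss counter would reset on a successful match or a new query; the sketch omits that bookkeeping for brevity.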
When the user needs to generate a three-dimensional model of a Paris metro entrance in the Art Nouveau style, the natural language parsing module first deconstructs the descriptive sentence, identifying "Art Nouveau" as the artistic-genre feature and "Paris metro entrance" as the main object. Next, if the user uploads a historical photo as a supplementary reference, the image parsing sub-module extracts visual features such as wrought-iron curves and stained glass. The resulting feature parameter set comprises main structure parameters (arch dimensions, stair layout), decorative feature parameters (vine pattern density, material reflectivity), and historical style parameters (a typical color scheme of the 1900s).
It should be noted that the AIGC model call employs a hierarchical generation strategy. Specifically, the system first converts abstract features into control parameters of the generative model, uses a latent diffusion model to generate the basic geometric form, refines surface details through a neural radiance field, and finally optimizes material appearance with a physically based renderer. It will be appreciated that the generation process remains interactive in real time, and the user can adjust parameter weights at any time. For example, in an industrial training scenario, the proportion of transparent components can be adjusted dynamically when generating a model of the internal structure of a CNC machine tool, to better demonstrate the internal mechanical configuration.
In this embodiment, the model insertion and adjustment process uses scene perception techniques. For example, when an anatomy AR application is developed for the education and training field, the generated "cardiovascular system" model automatically adapts to the scale of the anatomy table in the current scene and inherits the scene's preset physical interaction parameters, such as a peelable-layer setting. Specifically, the system determines the optimal placement position through spatial semantic analysis: it identifies the "teaching demonstration area" spatial marker in the scene, computes the topological relation between the model and the surrounding anatomical models, and automatically adjusts the initial orientation to facilitate teaching observation. It can be understood that the user can further adjust the model through gesture control or the parameter panel, and the system feeds back collision detection results in real time so that the adjusted model does not intersect existing objects in the scene.
In this embodiment, the intelligent generation of interaction scripts employs a bidirectional semantics-to-code mapping mechanism. It should be noted that the mechanism comprises three core components: an intent understanding module, a spatial relationship parser, and a code generator. It can be understood that when the user describes "when the handle approaches the control panel, display an operation menu", the intent understanding module determines that the interaction mode is a proximity-triggered information display, the spatial relationship parser computes the approach threshold distance, and the code generator outputs a complete script containing collision detection and UI control logic, producing platform-adapted script output.
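A WebXR-style JavaScript snippet of the kind the code generator might emit for the handle-and-panel example could look like the following. The object shapes and the 0.3 m threshold are assumptions made for this sketch.

```javascript
// Proximity trigger: show the operation menu while the handle is within the
// threshold distance of the control panel (threshold as computed by the
// spatial relationship parser).
const APPROACH_THRESHOLD_M = 0.3;

function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Called once per frame with the current poses; toggles menu visibility.
function updateProximityTrigger(handlePos, panelPos, menu) {
  menu.visible = distance(handlePos, panelPos) <= APPROACH_THRESHOLD_M;
  return menu.visible;
}
```

In a real WebXR application the poses would come from `XRFrame` input sources each frame; here they are passed in as plain coordinates to keep the sketch self-contained.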
The code generation process employs a context-aware constrained generation strategy. Specifically, when generating, for an industrial field-guidance application, a script that plays a maintenance animation after a technician gazes at a device fault point for 3 seconds, the generated C# script inherits from the project's base interaction class if the target platform is Unity; if the target platform is WebXR, a performance-oriented asynchronous loading mode is adopted instead. Illustratively, the generated code contains detailed comments explaining the original natural language description corresponding to each logical block.
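For the WebXR branch, the gaze-dwell logic from the example above might be generated along these lines; the factory function and callback names are assumptions for this sketch, not output of the actual generator.

```javascript
// "Gaze at the fault point for 3 seconds, then play the repair animation."
const DWELL_TIME_MS = 3000;

function createGazeDwellTrigger(onDwell) {
  let gazeStart = null; // timestamp when the current gaze began
  let fired = false;    // ensures one trigger per continuous gaze
  // Driven by the frame loop: `gazing` is true while the reticle stays on
  // the fault point, `now` is the frame timestamp in milliseconds.
  return function update(gazing, now) {
    if (!gazing) { gazeStart = null; fired = false; return false; }
    if (gazeStart === null) gazeStart = now;
    if (!fired && now - gazeStart >= DWELL_TIME_MS) {
      fired = true;
      onDwell(); // e.g. start the maintenance animation clip
    }
    return fired;
  };
}
```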
It will be appreciated that the code binding mechanism employs a hybrid programming mode combining visual nodes and scripts. When developing an AR guide application for the cultural tourism field, and the user describes "automatically play the history narration when a visitor walks up to an exhibit", the system first locates the "exhibit" virtual object in the scene graph, then creates a new event listener node on its interaction component, and finally associates the generated script logic with that node. It should be noted that the binding remains bidirectionally synchronized: when the user adjusts node connections in the visual editor, the underlying script is updated correspondingly; conversely, the node relations are refreshed automatically when the script is modified.
Illustratively, in a personal creative game development scenario, when a developer describes "unlock the hidden level when the player has collected three gems", the system intelligently recognizes this as a composite conditional event and automatically generates a complete script framework containing a state manager. Specifically, the generated code includes not only the basic condition detection logic but also, according to the project style, auxiliary code segments that enhance immersion, such as particle effect triggering and sound playback. It will be appreciated that this context-based code completion capability greatly reduces the implementation burden on the developer.
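A minimal sketch of such an auto-generated state manager is given below; the class and hook names are illustrative assumptions, and the immersion hooks (particles, sound) are represented by a single callback.

```javascript
// State manager for the composite condition "three gems collected at once":
// tracks collected gem ids and fires the unlock hook exactly once.
class GemStateManager {
  constructor(required, onUnlock) {
    this.required = required;   // number of distinct gems needed
    this.collected = new Set(); // ids of gems collected so far
    this.unlocked = false;
    this.onUnlock = onUnlock;   // hook for particle effects / sound cues
  }
  collect(gemId) {
    this.collected.add(gemId);
    if (!this.unlocked && this.collected.size >= this.required) {
      this.unlocked = true;
      this.onUnlock(); // open the hidden level exactly once
    }
    return this.unlocked;
  }
}
```

Using a `Set` makes duplicate pickups of the same gem idempotent, which is the natural reading of "collects three gems".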
In this embodiment, the AIGC cloud service platform is deployed on a cloud server cluster, integrates a plurality of AIGC models, and interacts with the front-end editor through a unified interface. The AIGC cloud service platform comprises an image/texture generation submodule, a three-dimensional model generation submodule, an animation generation submodule and a code/script generation submodule. It should be noted that the image/texture generation submodule generates scene maps or skybox images based on a diffusion model; the three-dimensional model generation submodule generates simple 3D models from text or a reference image using a trained 3D generation model and outputs files in common formats such as GLB/OBJ; the animation generation submodule generates character skeleton animations or object motion curves; and the code/script generation submodule generates corresponding script code segments, such as Unity C# scripts or WebXR JavaScript segments, from the user's natural language description based on a large language model.
When a user issues a content generation request in the visual editor, such as the text description "generate a tall oak model", the front end sends the request to the cloud AI service. After the AI service calls the corresponding model to generate content, it returns the result data, such as model files, images or script text, to the front-end editor. After receiving the result data, the visual editor presents the generated content in the current XR scene for user preview and confirmation. If the result is unsatisfactory, the user can adjust the description and regenerate, or manually edit the details. The entire AI invocation process is transparent to the user, who need not understand the complexity of the underlying AI models. The generated content can be applied directly to the scene in the visual editor, greatly reducing the workload of manually producing materials.
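The unified interface between editor and submodules can be sketched as a simple dispatcher. The submodule names and placeholder payloads below are assumptions for illustration, standing in for the actual cloud models:

```typescript
// Hedged sketch: route a generation request to the matching AIGC submodule
// behind one unified interface, as described above.
type AssetKind = "image" | "model" | "animation" | "script";

interface GenerationRequest { kind: AssetKind; prompt: string; }
interface GenerationResult { kind: AssetKind; payload: string; } // e.g. GLB path, code text

type Submodule = (prompt: string) => string;

// Stand-ins for the diffusion, 3D-generation, animation and LLM submodules.
const submodules: Record<AssetKind, Submodule> = {
  image: p => `texture.png for "${p}"`,
  model: p => `model.glb for "${p}"`,
  animation: p => `clip.anim for "${p}"`,
  script: p => `// generated from: ${p}`,
};

function dispatch(req: GenerationRequest): GenerationResult {
  return { kind: req.kind, payload: submodules[req.kind](req.prompt) };
}

// Usage: a "generate a tall oak model" request goes to the 3D submodule.
const res = dispatch({ kind: "model", prompt: "a tall oak" });
```

The front-end editor only sees `GenerationRequest` and `GenerationResult`, which is what makes the AI invocation transparent to the user.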
In this embodiment, when the desired object does not exist in the library, the user can describe it in natural language and have it created by the AI content generation module. For example, the user inputs "generate a red industrial water pump model with a pipe interface"; the AI model generates a 3D model of the water pump from the description, and the editor places the model in the scene after receiving it. For interface elements, the user may also drag in UI components such as buttons, progress bars and text prompt boxes, placing them in the field-of-view interface for overlay display on the XR glasses.
Step 203, receiving configuration of interaction logic by a user, converting the current XR scene and the interaction logic into an executable format of the target XR glasses through an adaptation layer, and outputting an application package adapted to the target XR glasses.
It should be noted that, the configuration in the embodiment of the present application includes a drag trigger or an interaction rule described in a natural language.
In this embodiment, the interactive logic configuration system employs a bimodal input fusion architecture. It will be appreciated that the architecture converts the different input modes into standardized interactive logic descriptions through a unified semantic understanding middle layer. It should be noted that, the event node in the visual editor adopts a color coding classification system, such as blue representing an input event, green representing processing logic, and red representing an output action, and this design significantly improves the configuration efficiency of the user in a complex scene.
Specifically, in the development of an industrial training application, when a user drags a "gesture recognition" event node to connect it with an "equipment disassembly animation" action node, the system automatically analyzes parameter compatibility between the nodes and intelligently inserts any necessary conversion logic nodes, such as adding a mapping from gesture strength to animation playback speed.
For example, for a professional field such as precision instrument maintenance training, the system automatically supplements safety operation verification logic based on the field knowledge base, such as detecting whether a user has performed a power-off operation before starting the disassembly step. It can be appreciated that this intelligent node connection assistance mechanism maintains both configuration flexibility and operational security in professional scenarios.
The conversion process from natural language to logic nodes adopts a progressive parsing strategy. It should be noted that when a developer in the medical training field inputs "when a trainee mishandles a surgical instrument, give a vibration prompt and record the error count", the system first decomposes the compound sentence into discrete interaction elements: the trigger condition is "erroneous operation", the feedback action is "vibration prompt", and the recorded data is "error count". The system then clarifies ambiguous expressions through multiple rounds of dialogue, for example confirming the intensity level of the "vibration prompt". Specifically, the generated logic nodes retain the semantic tags of the original description and support subsequent modification and adjustment through natural language.
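The decomposition step might be sketched as follows. A real system would rely on a large language model, so the keyword rules and node labels here are simplifying assumptions:

```typescript
// Hedged sketch of progressive parsing: decompose a compound rule sentence
// into discrete interaction elements (trigger / feedback action / recorded data).
interface LogicNode {
  role: "trigger" | "action" | "data";
  label: string; // semantic tag retained from the original description
}

function decompose(sentence: string): LogicNode[] {
  const nodes: LogicNode[] = [];
  // Keyword rules stand in for LLM-based semantic understanding.
  if (/mis|error|wrong/i.test(sentence)) nodes.push({ role: "trigger", label: "erroneous operation" });
  if (/vibrat/i.test(sentence)) nodes.push({ role: "action", label: "vibration prompt" });
  if (/record|count/i.test(sentence)) nodes.push({ role: "data", label: "error count" });
  return nodes;
}

// Usage: the medical-training rule from the text yields three logic nodes.
const nodes = decompose(
  "when a trainee mishandles a surgical instrument, give a vibration prompt and record the error count"
);
```

Because each node keeps a semantic label, later natural-language edits can be matched back to the node they affect.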
For example, in tour-guide interactive application development, the user may describe the basic rule through natural language, "display the introduction when the tourist gazes at the exhibit", and then supplement details through the visual editor by dragging the "gaze time" parameter onto the "fade-in speed" control node of the introduction panel. It can be appreciated that this hybrid editing mode leverages the strengths of both input modes: natural language quickly builds the framework, while visual editing precisely adjusts the details. It should be noted that the system maintains the consistency of the two representations in real time; a modification to either one is immediately synchronized to the other.
In this embodiment, the user clicks to select a virtual object in the current XR scene and adjusts its attributes in the attribute panel, such as position coordinates, scaling, and texture mapping. Furthermore, the user can let the AI assist with complex attributes, for example having the AI generate and automatically apply a wall texture from the brief "this wall needs some industrial-style graffiti posters". For behavior configuration, the platform supports adding interaction components to an object, such as the preset behaviors "collider", "grabbable object" and "timed rotation". Through simple selection or parameter setting, the user can give an object the corresponding interactive characteristics without hand-writing code.
If more complex or custom behavior logic is required, the user may have it generated automatically from a natural language description by the script generation submodule. For example, in a VR puzzle game where a door should open after a prop is placed in the correct position, the user describes the logic "when prop A is placed at position B, trigger the door-opening animation of object C"; the AI generates the corresponding script code or visual script node configuration and attaches it to the event of the associated virtual object. The user may view or fine-tune the generated logic. In this way, even users unfamiliar with programming can create complex interactive behaviors with the aid of the AI.
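A minimal sketch of attaching generated logic to a virtual object's event, for a rule of the form "when prop A is placed at position B, trigger the door-opening animation of object C". The object and event names are hypothetical:

```typescript
// Hedged sketch: a tiny event system standing in for the editor's
// interaction event nodes, with a generated binding between two objects.
type Handler = () => void;

class VirtualObject {
  private handlers = new Map<string, Handler[]>();
  constructor(public readonly name: string) {}

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit(event: string): void {
    (this.handlers.get(event) ?? []).forEach(h => h());
  }
}

const propA = new VirtualObject("propA");
const doorC = new VirtualObject("doorC");

let doorOpened = false;
doorC.on("playOpenAnimation", () => { doorOpened = true; });

// Generated binding: prop A's placement event forwards to door C's animation.
propA.on("placedAt:B", () => doorC.emit("playOpenAnimation"));

// Usage: the placement event fires and the door animation is triggered.
propA.emit("placedAt:B");
```

Keeping the generated script as an event handler on the prop, rather than inline in the door, matches the node-binding description in the text.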
Fig. 4 is a flow chart of an application package output method for adapting to target XR glasses according to an embodiment of the present application. As shown in fig. 4, the method for outputting an application package adapted to target XR glasses according to the embodiment of the present application specifically includes the following steps:
Step 401, detecting an operation environment supported by target XR glasses, wherein the operation environment comprises WebXR and a native application;
step 402, under the condition that the target XR glasses support WebXR, converting the current XR scene into WebXR-standard code and packaging the corresponding code into an offline application package, wherein the WebXR-standard content comprises HTML5 code or WebGL code;
step 403, calling the SDK provided by the Unity engine to generate corresponding engineering resources and application packages under the condition that the target XR glasses need the native application.
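Steps 401 to 403 can be sketched as a packaging-path decision. The capability flags and artifact names below are illustrative assumptions, not the actual detection API:

```typescript
// Hedged sketch of the adaptation layer's runtime-environment decision:
// probe the target glasses' supported runtime, then pick the WebXR or
// native (Unity SDK) packaging path.
interface DeviceCapabilities {
  supportsWebXR: boolean;
  requiresNative: boolean;
}

type PackagePlan =
  | { path: "webxr"; artifacts: string[] }
  | { path: "native"; artifacts: string[] };

function planPackage(caps: DeviceCapabilities): PackagePlan {
  if (caps.supportsWebXR && !caps.requiresNative) {
    // Step 402: emit WebXR-standard code and bundle an offline package.
    return { path: "webxr", artifacts: ["index.html", "scene.webgl.js", "offline.pack"] };
  }
  // Step 403: hand off to the Unity engine SDK for project resources
  // and an installable application package.
  return { path: "native", artifacts: ["UnityProject/", "app.apk"] };
}

// Usage: glasses that support WebXR and need no native app get the web path.
const plan = planPackage({ supportsWebXR: true, requiresNative: false });
```

Representing the result as a discriminated union keeps the two output paths mutually exclusive, matching the either/or structure of steps 402 and 403.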
In this embodiment, the cross-platform adaptation system employs an intelligent environment detection and adaptive conversion architecture. It can be appreciated that the architecture dynamically determines the optimal packaging strategy through a combination of a device feature library and real-time performance analysis. It should be noted that the environment detection module collects multi-dimensional device parameters of the XR glasses, such as GPU model, memory capacity, sensor configuration and supported system APIs, and these data are normalized via device fingerprinting.
Specifically, when outputting to a WebXR environment, the system first converts the scene resources into the glTF 2.0 standard format to ensure cross-browser compatibility, then automatically injects device-specific performance tuning parameters, such as optimizing WebGL shaders for the Magic Leap browser, and finally generates Service Worker scripts that enable full offline operation. Illustratively, in a tour guide application, the system can intelligently identify panoramic video assets and automatically convert them into a WebXR-compatible 360-degree video player component, while maintaining interactive consistency with the other 3D objects in the scene.
The native application packaging process employs a modular SDK integration scheme. It should be noted that different SDK plug-in packages are dynamically loaded according to the characteristics of the target device: for Windows MR devices such as HoloLens, the Mixed Reality Toolkit core module is automatically integrated, while for Android-based AR glasses, ARCore extensions are used preferentially. Specifically, the conversion process maintains bidirectional traceability; a developer can fall back to the visual editor at any time to adjust the original scene, and all modifications are automatically synchronized into the original project files.
Illustratively, in industrial remote assistance application development, when the target device is detected to be a Vuzix M400, the system automatically applies the device's waveguide display calibration parameters, adjusts the contrast settings of UI elements to suit industrial ambient lighting conditions, and maps key interaction instructions to the physical keys on the device. It will be appreciated that this deep device adaptation enables the generated application to fully exploit the hardware characteristics of various XR glasses without requiring the developer to handle complex platform differences manually.
In this embodiment, during editing the user may click the "preview" button at any time to simulate running the current scene. The editor switches to preview mode, renders the scene through the XR engine, and simulates the viewing angle and interactions of the XR glasses; if an XR glasses device is connected, the preview runs directly on the device. The user may test interactions in the preview, such as clicking a button or moving the viewpoint to check the overlay effect. For problems found in the preview, such as an unsuitable object position or insensitive interaction triggering, the user adjusts the scene content or parameter configuration after exiting the preview and immediately previews again to verify. This rapid edit-preview loop lets developers efficiently iterate on experience details.
In this embodiment, when the developer is satisfied with the preview effect, the application may be deployed to actual XR glasses for testing or release through the project management module. The platform provides a one-click release function; after the user selects a target device or distribution channel, the system automatically completes the remaining flow. For example, for WebXR deployment, the platform generates a URL for the application in the cloud, and users can wear XR glasses and access it through a browser; for deployments requiring an installation package, the platform generates an installation file adapted to the device system and pushes it to the glasses or makes it available for download. During deployment, the system may also optimize resources, such as compressing textures and stripping unused model parts, to ensure smooth operation on the limited computing power of XR glasses. At the same time, the platform records the build version for future update iterations. Thus a complete XR glasses application development flow is finished, producing an XR application that can actually be experienced.
In this embodiment, the real-time preview realizes frame-synchronized state sharing between the development environment and the XR glasses by establishing a bidirectional data channel. It should be noted that when a target XR glasses connection is detected, the system automatically loads the device-specific display configuration files, such as field-of-view distortion parameters, interpupillary-distance adaptation range, and screen color gamut characteristics, and applies them in real time to the shader programs of the local lightweight rendering engine.
Specifically, a reverse projection matrix algorithm is adopted to accurately reproduce the optical deformation of the XR glasses on the development side. Illustratively, when developing a medical AR application, the system specifically simulates the special optical path of microscope-style XR glasses, ensuring that the preview the developer sees is completely consistent with what the doctor actually uses. It will be appreciated that such accurate simulation avoids content position deviations due to optical differences, which is particularly important for surgical navigation applications that require millimetre-scale positioning accuracy.
It should be noted that when the user adjusts a virtual object's position during the preview, the system establishes a spatial anchor association that both maintains the object's original coordinates in the scene logic and records its offset relative to the user's current view. In particular, this dual coordinate system allows the same set of interaction logic to adapt automatically to XR devices of different specifications without the developer manually adjusting position parameters. For example, in an industrial maintenance guidance application, the tooltip panel automatically adjusts its hover distance according to the field of view of different XR glasses, always maintaining the most comfortable reading position.
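The dual coordinate record and field-of-view-dependent hover distance might look like the following sketch. The inverse-proportional scaling rule and the 40-degree reference value are assumptions, not a measured ergonomic model:

```typescript
// Hedged sketch of the dual coordinate system: an authoritative scene
// coordinate plus a view-relative offset, with the hover distance derived
// from the device field of view.
interface Vec3 { x: number; y: number; z: number; }

interface SpatialAnchor {
  sceneCoord: Vec3; // authoritative position in scene logic
  viewOffset: Vec3; // offset relative to the user's current view
}

// Narrower field of view -> push the panel further away so it still fits
// comfortably; 40 degrees is the assumed reference FOV.
function hoverDistance(baseMeters: number, fovDegrees: number): number {
  return baseMeters * (40 / fovDegrees);
}

// Usage: a tooltip panel anchored for glasses with a 30-degree FOV.
const anchor: SpatialAnchor = {
  sceneCoord: { x: 1.2, y: 0.9, z: -2.0 },
  viewOffset: { x: 0, y: -0.1, z: hoverDistance(0.5, 30) },
};
```

Because `sceneCoord` never changes, the same interaction logic can be replayed on a device with a different FOV by recomputing only `viewOffset`.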
It will be appreciated that when the user adjusts the AI-generated 3D model, the system records modification vectors, such as scaling changes, texture parameter adjustments, etc., and re-triggers the generation process using these feedback data as conditional inputs. It should be noted that this reinforcement learning mechanism based on human feedback enables the AI model to continuously adapt to the artistic style and technical requirements of the current project. Specifically, in educational application development, as the teacher adjusts organ transparency of the anatomical model, the system will automatically learn this preference, presetting the same transparency parameters on the other anatomical models that are subsequently generated.
Illustratively, the rendering load distribution is displayed as a heat map, with a time-series chart showing interaction delay variation. It will be appreciated that this intuitive feedback mechanism helps developers quickly locate performance bottlenecks, particularly when developing industrial operation guidance applications that require strict real-time response. Specifically, when gesture recognition delay exceeds the threshold, the system intelligently recommends reducing the bone count of the hand model or optimizing the collision detection algorithm, so as to maintain overall interaction smoothness.
In this embodiment, the device-level live preview system employs a low-latency data synchronization architecture. A high-speed USB or Wi-Fi 6 connection transmits head pose data, and millimeter-wave wireless screen casting guarantees real-time picture delivery. Pose data synchronization applies a prediction compensation algorithm; in scenarios with extremely high precision requirements such as medical training, the system dynamically calculates the full-link delay from sensor sampling to picture update and compensates through a spatio-temporal interpolation technique.
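A sketch of the full-link delay accounting and prediction compensation, assuming a simple constant-velocity extrapolation of head yaw. Real systems use richer motion models, and the stage latencies below are made-up figures:

```typescript
// Hedged sketch: sum per-stage latencies from sensor sampling to display
// update, then predict the head pose that far into the future.
interface PoseSample {
  yawDeg: number;      // head yaw in degrees
  timestampMs: number; // sample time in milliseconds
}

// Full-link delay is the sum of the pipeline stages.
function fullLinkDelayMs(stages: number[]): number {
  return stages.reduce((a, b) => a + b, 0);
}

// Constant-velocity extrapolation from the last two pose samples.
function predictYaw(prev: PoseSample, last: PoseSample, delayMs: number): number {
  const velocity = (last.yawDeg - prev.yawDeg) / (last.timestampMs - prev.timestampMs);
  return last.yawDeg + velocity * delayMs;
}

// Usage: sampling, transfer, rendering and scan-out stages (assumed values).
const delay = fullLinkDelayMs([2, 5, 8, 5]);
const yaw = predictYaw({ yawDeg: 10, timestampMs: 0 }, { yawDeg: 12, timestampMs: 10 }, delay);
```

Rendering to the predicted yaw rather than the last measured one is what lets the displayed frame line up with where the head actually is when the frame reaches the eyes.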
Specifically, when developing an industrial equipment maintenance guidance application, as an engineer wears the XR glasses to test a disassembly and assembly procedure, the system compares the virtual guidance marks with key datum points of the actual equipment in real time and automatically calibrates the spatial mapping. For example, for a large-scale mechanical maintenance scene, the system intelligently identifies equipment feature points and establishes an accurate correspondence between the world coordinate system and the glasses' SLAM system. It can be appreciated that this spatial calibration mechanism based on the actual device effectively solves the problem of virtual content misaligning with real objects.
It should be noted that the system structurally records three kinds of key data: the user behavior sequence (such as head movement trajectory and gesture operations), system response events (such as rendering frame rate and script execution state), and environment parameters (such as illumination changes and spatial anchor stability). Specifically, when an anomaly is detected, the system builds a causal relationship graph. In a tour guidance application, if a visitor repeatedly fails to trigger the narration in front of a painting, the system marks a possible problem with the interaction zone setup and automatically suggests expanding the trigger range or adding visual cues.
It will be appreciated that in testing an educational training application, when a trainee operates incorrectly, the XR glasses overlay a semi-transparent alert frame and three-dimensional arrow indicators, while the development side simultaneously displays the call stack of the erroneous node. In particular, this bidirectional feedback mechanism allows the developer to quickly understand the context in which the problem occurred rather than only seeing the final error state. It should be noted that the system can intelligently distinguish occasional errors from systematic defects and automatically generates optimization suggestions for frequently occurring interaction problems, such as adjusting the size of a collision volume or adding haptic feedback.
In this embodiment, before the current XR scene and the interaction logic are converted into the executable format of the target XR glasses by the adaptation layer, optimizable resource objects are accurately identified by building a dependency graph of the three-dimensional scene. It should be noted that a strategy combining view-frustum culling and occlusion detection is adopted, analyzing not only the visible patches in the static scene but also predicting the regions that may become visible during dynamic interaction.
Specifically, in the development of an industrial equipment training application, when processing a complex mechanical assembly model, the system recognizes the geometry of embedded parts completely occluded by the housing and automatically removes the vertex data of the invisible patches, while using a progressive simplification algorithm to retain the contour features of components that are visible but far away. For example, for standard parts such as screws and gaskets, the system can invoke a preset optimization template, greatly reducing model complexity while still supporting functional demonstration. It can be appreciated that this semantics-based optimization preserves the accuracy of teaching guidance better than simple geometric simplification.
It should be noted that the system analyzes the content features of texture images, retaining high-precision sampling for regions containing important details (e.g. a device nameplate or instrument panel scale) and applying block compression to large solid-color or gradient regions. Specifically, in medical anatomy application development, organ textures are adjusted dynamically according to viewing distance: medium-precision maps are used at normal viewing distance, and high-definition detail maps are automatically loaded when the user focuses in for a close look. Illustratively, this adaptive hierarchical loading mechanism preserves the visual effect while remarkably reducing memory occupation.
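The distance-based texture tier selection can be sketched as follows. The distance thresholds are assumed values, and a real system would combine them with the content-feature analysis described above:

```typescript
// Hedged sketch: pick a texture precision tier from viewing distance,
// always keeping flagged high-importance regions (e.g. a device nameplate)
// at full precision.
type TextureTier = "high" | "medium" | "low";

function pickTier(distanceM: number, isImportantDetail: boolean): TextureTier {
  if (isImportantDetail) return "high"; // nameplate, instrument scale: never degraded
  if (distanceM < 1.0) return "high";   // user leaning in for a close look
  if (distanceM < 4.0) return "medium"; // normal viewing distance
  return "low";                         // far away: block-compressed
}
```

The early return for important-detail regions is what implements the "protective" behavior: distance-based degradation only ever applies to content the analysis has marked as non-critical.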
When the target device is replaced with higher-performance XR glasses, the system can restore the original high-precision resources with one click, avoiding repeated importing and setup. It should be noted that all optimization operations are recorded in the version control system, so a developer can compare the before-and-after effects at any time and ensure that key visual elements are not affected. In particular, in a tour guide application, for models of important cultural relic exhibits, the system builds a "protective optimization" whitelist, ensuring that these core exhibits always retain the best visual effect.
In one embodiment of the application, in the field of educational training, a vocational training institution wishes to build a practical teaching application running on AR glasses for training students in maintaining complex equipment. Through the platform, a trainer needs no programming: they simply select an industrial equipment training scene template, adjust the position of the equipment model, and use the AIGC module to generate 3D part models and fault animations for the specific machine type. Step descriptions and gesture prompts are then configured by dragging interactive components. Finally, the generated AR teaching application can be deployed to XR glasses worn by students, realizing teaching with digital guidance overlaid on real equipment and improving training efficiency and safety.
In one embodiment of the application, in an industrial field guidance scenario, a company provides XR glasses to field engineers for equipment inspection and maintenance guidance. The platform helps the company quickly generate a customized inspection application: engineers import CAD models of the factory workshop into the visual editor, or have the platform generate the 3D workshop environment directly from an AI description, then add inspection point marks and have the AI write inspection item descriptions from existing documents. Interactions such as displaying maintenance steps when the engineer's gaze focuses on a part are set via drag-and-drop logic components. With almost no manual coding, a field guidance AR application can be generated, helping novice engineers independently complete inspections of complex equipment.
In one embodiment of the application, in a marketing and tourism interaction scenario, a scenic area plans an AR treasure hunt game to enhance visitor interaction. The planner selects a game rule template on the platform, uses AIGC to generate virtual treasure models and character animations related to the local culture, and places these digital elements at the corresponding positions on the scenic area map. Through platform settings, virtual treasures appear and puzzles are triggered when a visitor wearing the AR glasses provided by the scenic area reaches a specific spot. The content and logic of the whole game can be completed in a short time without a professional development team, so the marketing creative can be implemented quickly.
In one embodiment of the application, for a personal creative game development scenario, an independent creator wishes to develop an immersive puzzle game running on VR glasses. Through the platform they select a VR indoor scene template, have the AI generate an ancient-castle-style indoor environment and trap mechanisms, set object interactions with drag-and-drop components, such as picking up a key to unlock a mechanism, and ask the AI to generate part of the plot dialogue script. The creator completes a game prototype within a few days and directly exports an application adapted to a given model of VR glasses for testing, greatly reducing the difficulty of VR game development for individuals.
The above is a method embodiment of the present application. Based on the same inventive concept, an embodiment of the present application also provides an XR device, the structure of which is shown in fig. 5.
Fig. 5 is a schematic diagram illustrating an internal structure of an XR device according to an embodiment of the application. As shown in fig. 5, the apparatus includes:
At least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to:
receiving, through a visual editor, an XR scene component preset by a user and dragged into an XR scene template, wherein the XR scene component comprises a virtual object, interaction logic and a UI component;
responding to a user request, calling AIGC a model to generate XR content materials, and inserting the XR content materials into a specified position of a current XR scene, wherein the XR content materials comprise at least one of a three-dimensional model, an animation sequence or an interaction script;
And receiving configuration of interaction logic by a user, converting the current XR scene and the interaction logic into an executable format of the target XR glasses through an adaptation layer, and outputting an application package adapted to the target XR glasses, wherein the configuration comprises a drag trigger or an interaction rule described in natural language.
As shown in fig. 6, the embodiment of the present specification further provides an application development apparatus. As can be seen from fig. 6, in one or more embodiments of the present disclosure, an application development apparatus 600 includes:
the visualization module 601 is configured to receive, through a visualization editor, an XR scene component preset by a user to drag into the XR scene template, where the XR scene component includes a virtual object, interaction logic, and UI component;
The model calling module 602 is configured to call AIGC a model to generate an XR content material in response to a user request, and insert the XR content material into a specified position of a current XR scene;
the scene conversion module 603 is configured to receive a configuration of interaction logic by a user, convert the current XR scene and the interaction logic into an executable format of the target XR glasses through the adaptation layer, and output an application package adapted to the target XR glasses, where the configuration includes a drag trigger or an interaction rule described in natural language.
In some embodiments, in response to a user request, invoking AIGC the model to generate XR content material and inserting the XR content material into a specified location of the current XR scene, including:
receiving a user request under the condition that a preset basic object library does not contain an object required by a user;
under the condition that the XR content material to be generated is a three-dimensional model, analyzing a user request to extract key characteristic parameters corresponding to the three-dimensional model, wherein the user request comprises natural language description, images or videos;
Responding to a user request, calling AIGC a model based on key characteristic parameters, and generating a three-dimensional model file matched with natural language description;
Inserting the three-dimensional model file into the current XR scene, and rendering the three-dimensional model in real time at the appointed position of the current XR scene;
and receiving an adjustment instruction of a user for a rendering result so as to perform initial position adjustment and scaling adjustment on the rendered three-dimensional model.
In some of these embodiments, further comprising:
Analyzing, under the condition that the XR content material to be generated is an interaction script, the natural language sentences in which the user describes behavior logic, and determining the spatial position relations and event trigger conditions in those sentences;
and generating corresponding executable code segments through the large language model, and binding the executable code segments to interaction event nodes of the specified virtual objects in the visual editor to generate interaction scripts, wherein the executable code segments comprise Unity C# scripts or WebXR JavaScript scripts.
In some embodiments, receiving a user configuration of interaction logic specifically includes:
Receiving connection operation of a user to the event node and the action node through a visual editor so as to form an interactive logic chain;
Under the condition that the user describes the interaction rule through natural language, corresponding script codes or visual logic nodes are generated through an AI model.
In some embodiments, the adaptation layer converts the current XR scene and interaction logic into an executable format of the target XR glasses and outputs an application package adapted to the target XR glasses, which specifically includes:
detecting an operating environment supported by the target XR glasses, wherein the operating environment comprises WebXR and a native application;
under the condition that the target XR glasses support WebXR, converting the current XR scene into WebXR-standard code and packaging the corresponding code into an offline application package, wherein the WebXR-standard content comprises HTML5 code or WebGL code;
and under the condition that the target XR glasses need the native application, calling the SDK provided by the Unity engine to generate corresponding engineering resources and application packages.
In some of these embodiments, the method further comprises:
starting a real-time preview mode during editing, and pushing the current XR scene to the target XR glasses when it is detected that the user has connected the target XR glasses;
simulating the field-of-view parameters of the target XR glasses based on a local lightweight rendering engine, and rendering the current XR scene;
receiving the user's interactive test operations, and dynamically updating the spatial coordinate mapping when the user adjusts a component's position;
when an instruction from the user to modify the XR content materials generated by the AIGC model is detected, recording the modified material characteristic description and re-triggering the AIGC model to generate optimized XR content materials;
and dynamically updating the preview image according to the optimized XR content materials, and feeding back interaction latency parameters.
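To illustrate the field-of-view simulation step above, one common way a lightweight renderer matches a device profile is to derive a perspective focal scale from the device's FOV angle; the function names and the default angle here are illustrative assumptions, not values from the embodiments:

```python
# Sketch: derive a perspective projection's focal scale from a target
# device's field-of-view angle, so the local preview matches the glasses.
import math

def focal_scale(fov_deg: float) -> float:
    # Standard perspective relation: scale = 1 / tan(fov / 2).
    return 1.0 / math.tan(math.radians(fov_deg) / 2.0)

def preview_params(device_profile: dict) -> dict:
    # 52 degrees is a made-up fallback, not a real device parameter.
    fov = device_profile.get("fov_deg", 52.0)
    return {"fov_deg": fov, "focal_scale": focal_scale(fov)}

# Usage: a hypothetical 90-degree device profile.
params = preview_params({"fov_deg": 90.0})
```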
In some embodiments, starting the real-time preview mode during editing specifically includes:
connecting the actual target XR glasses device and synchronizing head-pose data;
and projecting the rendered image of the current XR scene onto the device screen of the target XR glasses in real time, recording the user's interaction error log during testing, and highlighting the flagged abnormal nodes.
In some of these embodiments, before converting the current XR scene and the interaction logic into the executable format of the target XR glasses through the adaptation layer, the method further comprises:
identifying unused model vertex data in the current XR scene and compressing it so as to simplify the mesh of the three-dimensional model;
and reducing the sampling rate of high-resolution textures in the current XR scene according to a preset strategy so as to match the computing power of the target XR glasses.
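A minimal sketch of the two pre-export optimizations described above, assuming a simple vertex-list/face-list mesh layout and a "halve until it fits" downsampling strategy (both the data layout and the strategy are illustrative, not the embodiments' actual scheme):

```python
# Sketch 1: drop vertices that no face references, remapping face indices.
def prune_unused_vertices(vertices, faces):
    used = sorted({i for face in faces for i in face})
    remap = {old: new for new, old in enumerate(used)}
    new_vertices = [vertices[i] for i in used]
    new_faces = [tuple(remap[i] for i in face) for face in faces]
    return new_vertices, new_faces

# Sketch 2: halve texture dimensions until the longer side fits a budget.
def downsample_texture(width, height, max_side):
    while max(width, height) > max_side:
        width, height = max(1, width // 2), max(1, height // 2)
    return width, height

# Usage: vertex 2 is unreferenced by any face and gets pruned.
verts = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (0, 1, 0)]
faces = [(0, 1, 3)]
v2, f2 = prune_unused_vertices(verts, faces)
tex = downsample_texture(4096, 4096, 1024)
```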
The embodiments of the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and medium embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding portions of the method embodiments.
The apparatuses and media provided in the embodiments of the present application correspond one-to-one with the methods, so they have beneficial technical effects similar to those of the corresponding methods. Since the beneficial technical effects of the methods have been described in detail above, they are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit it. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modifications, equivalent replacements, improvements, etc. that come within the spirit and principles of the application shall be included in the scope of the claims of the present application.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511300001.2A CN120803438B (en) | 2025-09-12 | 2025-09-12 | Application development method, application development device and XR equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120803438A CN120803438A (en) | 2025-10-17 |
| CN120803438B true CN120803438B (en) | 2026-01-13 |
Family
ID=97320798
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202511300001.2A Active CN120803438B (en) | 2025-09-12 | 2025-09-12 | Application development method, application development device and XR equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120803438B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119883229A (en) * | 2024-07-23 | 2025-04-25 | 福州市勘测院有限公司 | Construction method of meta-universe application |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| AU2020101686B4 (en) * | 2018-11-13 | 2021-05-13 | Unbnd Group Pty Ltd | Technology adapted to provide a user interface via presentation of two-dimensional content via three-dimensional display objects rendered in a navigable virtual space |
| CN109887097A (en) * | 2019-02-01 | 2019-06-14 | 河南众诚信息科技股份有限公司 | A kind of VR content development platform and method |
| CN112184917A (en) * | 2019-07-05 | 2021-01-05 | 上海璨然信息技术有限公司 | 3D model conversion method adopting RPA (robot process automation) |
| CN116302366B (en) * | 2023-05-26 | 2023-10-20 | 阿里巴巴(中国)有限公司 | Terminal development-oriented XR application development system, method, equipment and medium |
| WO2025090852A1 (en) * | 2023-10-27 | 2025-05-01 | SimX, Inc. | Scalable systems and methods for creating interactive, dynamic, and clinically realistic medical simulations using artificial intelligence-based assistance |
| CN117688632B (en) * | 2023-12-19 | 2025-07-08 | 金陵科技学院 | A method of intelligent 3D modeling in SolidWorks based on AIGC |
| US20250238991A1 (en) * | 2024-01-18 | 2025-07-24 | Purdue Research Foundation | System and method for authoring context-aware augmented reality instruction through generative artificial intelligence |
- 2025-09-12 CN CN202511300001.2A patent/CN120803438B/en active Active
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119883229A (en) * | 2024-07-23 | 2025-04-25 | 福州市勘测院有限公司 | Construction method of meta-universe application |
Non-Patent Citations (2)
| Title |
|---|
| Exploration of the Application of AIGC and XR Technology in E-commerce Visual Design; Ding Li; Toy World (玩具世界); 2025-07-25 (No. 7); pp. 158-160 * |
| Design and Implementation of a VR Virtual Science Museum System; Wei Benhong et al.; Chinese Journal of ICT in Education (中国教育信息化); 2020-05-10 (No. 10); pp. 30-33 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120803438A (en) | 2025-10-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10599405B2 (en) | Application system having an LLVM compiler | |
| CN101048210B (en) | Object oriented mixed reality and video game authoring tool system and method background of the invention | |
| CN109783659A (en) | Based on the pre- visual automation Storyboard of natural language processing and 2D/3D | |
| CN106200983A (en) | A kind of combined with virtual reality and BIM realize the system of virtual reality scenario architectural design | |
| KR20160073750A (en) | Apparatus and method for text-based performance pre-visualization | |
| CN116452786A (en) | Virtual reality content generation method, system, computer device and storage medium | |
| CN119359915A (en) | Three-dimensional scene generation method and system based on large language model multi-agent collaboration system | |
| CN115170365A (en) | Virtual simulation teaching system and method based on user configuration generation | |
| Gonzalez et al. | Introducing bidirectional programming in constructive solid geometry-based CAD | |
| AU2017310075A1 (en) | System for composing or modifying virtual reality sequences, method of composing and system for reading said sequences | |
| Gao et al. | [Retracted] Realization of Music‐Assisted Interactive Teaching System Based on Virtual Reality Technology | |
| CN120633605A (en) | Document processing method, device, equipment, medium and program product | |
| CN120803438B (en) | Application development method, application development device and XR equipment | |
| Ledermann | An authoring framework for augmented reality presentations | |
| US20150269781A1 (en) | Rapid Virtual Reality Enablement of Structured Data Assets | |
| Yang et al. | Building Information Modeling and Virtual Reality: Workflows for Design and Facility Management | |
| Dukkardt et al. | Informational system to support the design process of complex equipment based on the mechanism of manipulation and management for three-dimensional objects models | |
| Mokhov et al. | Agile forward-reverse requirements elicitation as a creative design process: A case study of Illimitable Space System v2 | |
| Hempe et al. | A semantics-based, active render framework to realize complex eRobotics applications with realistic virtual testing environments | |
| Tang et al. | A model driven serious games development approach for game-based learning | |
| Sung et al. | Build your own 2D game engine and create great web games | |
| Rossmann et al. | Virtual BIM Testbeds: The eRobotics Approach to BIM and Its Integration into Simulation, Rendering, Virtual Reality and More | |
| Freitas | Enabling Co-Creation for Augmented Reality: A User-Friendly Narrative Editor for Cultural Heritage Experiences | |
| Levante | Data Management and Virtual Reality Applications of BIM models | |
| Ma et al. | Innovative Applications of Digital Art and Augmented Reality in the Construction Industry through Building Information Modeling |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |