Disclosure of Invention
Embodiments of the present application provide a scene rendering method, a scene building method, an electronic device, a storage medium and a program product, which are used to alleviate or solve one or more technical problems in the prior art.
In a first aspect, an embodiment of the present application provides a scene rendering method, including:
obtaining a text-based first model file in response to a target instruction indicating to display a first scene model, the first scene model comprising at least one projection graphic representing a projection of an entity object in a target entity scene;
acquiring monitoring data of a first entity object according to configuration information, in the first model file, for a first projection graphic in the first scene model, wherein the first entity object corresponds to the first projection graphic in the target entity scene;
and rendering the first scene model according to the first model file and the monitoring data of the first entity object, wherein a rendering effect of the first projection graphic is determined according to the monitoring data of the first entity object.
In some embodiments, the acquiring the monitoring data of the first entity object according to the configuration information, in the first model file, for the first projection graphic in the first scene model includes:
determining a data acquisition interface for the monitoring data of the first entity object according to the configuration information of the first projection graphic;
and acquiring the monitoring data of the first entity object based on the data acquisition interface.
In some embodiments, the first projection graphic comprises a plurality of layers, and the rendering the first scene model according to the first model file and the monitoring data of the first entity object comprises:
rendering a first layer, in the first projection graphic, that is bound to the monitoring data, according to the monitoring data of the first entity object.
In some embodiments, after rendering the first scene model according to the first model file and the monitoring data of the first entity object, the method further comprises:
according to configuration information, in the first model file, for a second projection graphic in the first scene model, listening for a first interaction operation on the second projection graphic and/or listening for a target event associated with a second entity object, wherein the second projection graphic represents a projection of the second entity object in the target entity scene;
in response to the first interaction operation and/or the target event, performing at least one of the following rendering operations:
displaying or hiding a second layer in the second projection graphic;
changing an attribute value of the second projection graphic, and re-rendering the second projection graphic according to the changed attribute value;
hiding the second projection graphic;
acquiring and displaying the monitoring data of the second entity object;
popping up a preset icon, wherein the preset icon, after being triggered, expands and displays the monitoring data of the second entity object;
rendering a hidden third projection graphic in the first scene model, the third projection graphic representing a projection of a third entity object in the target entity scene, the third entity object being associated with the second entity object;
and acquiring a text-based second model file and rendering a second scene model according to the second model file.
In some embodiments, the target entity scene is a building park, and the entity objects comprise the buildings constituting the park, the floors in the buildings, and the devices on the floors; the projection graphic of a building is formed by stacking the projection graphics of the floors in the building, and the projection graphics of the devices on a floor are displayed after a second interaction operation on the projection graphic of the floor is monitored.
In some embodiments, after obtaining the text-based first model file, the method further comprises:
displaying a tree structure diagram of the first scene model according to the first model file, wherein the tree structure diagram represents the hierarchical relation of the entity objects in the target entity scene;
and in response to a third interaction operation on a fourth entity object in the tree structure diagram, rendering a fourth projection graphic representing a projection of the fourth entity object.
In a second aspect, an embodiment of the present application provides a scene building method, including:
creating, in response to a drawing operation, projection graphics representing the projections of the entity objects in a target entity scene;
generating, for a first projection graphic corresponding to a first entity object in the target entity scene, configuration information for acquiring monitoring data of the first entity object;
determining a mapping relation between a rendering effect of the first projection graphic and the monitoring data of the first entity object;
and generating a text-based first model file according to the created projection graphics, the configuration information and the mapping relation, wherein the first model file is used for rendering a first scene model representing the target entity scene.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory, where the processor implements the method of any one of the embodiments of the present application when the computer program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, the computer program, when executed by a processor, implementing a method according to any of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present application.
Based on the scene rendering method provided in the first aspect, the present application has at least the following beneficial effects or advantages:
In response to a target instruction indicating to display a first scene model, a text-based first model file is obtained, the first scene model comprising at least one projection graphic representing a projection of an entity object in a target entity scene. Monitoring data of a first entity object is acquired according to the configuration information, in the first model file, for a first projection graphic in the first scene model, the first entity object corresponding to the first projection graphic in the target entity scene. The first scene model is then rendered according to the first model file and the monitoring data of the first entity object, with the rendering effect of the first projection graphic determined by that monitoring data. The rendering effect of the first projection graphic is thus associated with the monitoring data of the first entity object, and interaction and mapping between the virtual scene and the real entity are supported, realizing a digital twin model. Because the entity objects in the entity scene are represented by simplified planar projection graphics, rendering difficulty is reduced; because the scene model is built with text, the size of the first model file is reduced, the fluency of displaying the digital twin model is improved, and the bandwidth and computing capability required to render the digital twin model are reduced.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present application may be more readily apparent, specific embodiments of the present application are set forth below.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those skilled in the pertinent art, the described embodiments may be modified in numerous different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, the following describes related technologies of the embodiments of the present application. The following related technologies may be optionally combined with the technical solutions of the embodiments of the present application, which all belong to the protection scope of the embodiments of the present application.
It should be noted that the application scenarios and examples provided in the present application are given for ease of understanding and do not specifically limit the application of the technical solutions of the embodiments of the present application.
The following describes in detail, with specific embodiments, the technical solutions of the present application and how they solve the foregoing technical problems. The specific embodiments illustrated may be combined with one another, and the same or similar concepts or processes may not be described again in some embodiments. Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In the technical field of digital twins, a digital twin model is usually built on a 3D model. However, a 3D model contains many interactive elements, is complex to operate, usually requires relatively complex programming and the support of a specific 3D engine, and is relatively difficult to implement. In a digital twin scene, interactivity lets a user explore and understand the twin data more intuitively, such as clicking a device icon to view detailed information, or showing changes in a device's running state through animation. A 3D model is relatively complex in terms of data-driven dynamic updates and requires more development effort to achieve similar effects. During development, the 3D model must be associated and bound with sensor data, and the binding and rendering process is complex and difficult. The cost of building a 3D scene model is therefore high.
3D models are usually stored in binary formats, such as the common obj and fbx 3D model file formats, which contain complex parameter information: not only the geometry of the object (such as vertex coordinates and patch information), but also texture information (such as texture maps and reflectivity), illumination information (such as light source position and illumination intensity), spatial transformation information of the model, and so on. Thus even a simple 3D model may be several hundred KB, while complex, high-precision 3D models, such as building models with fine textures and complex structures, or machine-part models with high-resolution textures, may be several MB or even tens of MB. For example, a 3D model file of an automobile with 4K textures may exceed 100 MB. Because 3D model files are large, more storage space is required: storing a large number of 3D model scenes requires large-capacity hard disks, storage servers and similar equipment, and as the data volume grows, the costs of storage management and maintenance grow accordingly.
After a 3D model is deployed on a cloud or server, the user's client needs large network bandwidth and computing capability to achieve a display effect without stuttering. Because 3D model files are large and take long to transmit, under limited network bandwidth it may take several seconds or even tens of seconds to transmit one 3D model, leading to long waits when a user accesses the digital twin scene and hurting the user experience. For digital twin applications with high real-time requirements, such as remote device monitoring, such transmission delays may be unacceptable. A 3D model therefore requires high network bandwidth to guarantee transmission speed; fast transmission requires the user to have a high-speed, stable network connection, such as a fiber network or high-speed Wi-Fi, which limits the application of 3D digital twin scenes in areas or on devices with poor network conditions and raises the cost of improving network bandwidth to ensure transmission quality. In addition, 3D model formats and rendering engines may have compatibility issues across platforms and devices, requiring additional adaptation and processing.
To address the problems of difficult modeling, large model files, and the high bandwidth and computing requirements imposed on accessing users after cloud deployment, embodiments of the present application provide a scene rendering method and a scene building method. The rendering effect of the first projection graphic is associated with the monitoring data of the first entity object, and interaction and mapping between the virtual scene and the real entity are supported, thereby realizing a digital twin model. Because the entity objects in the entity scene are represented by simplified planar projection graphics, rendering difficulty is reduced; because the scene model is built with text, the size of the first model file is reduced, the fluency of displaying the digital twin model is improved, and the bandwidth and computing capability required to render the digital twin model are reduced.
The scene rendering method provided by the embodiments of the present application may be implemented in a client or a server. In some embodiments, the client may be an application installed in a terminal or an applet in a mobile terminal, for example a browser on a personal computer or an applet inside instant-messaging software on a mobile phone; the server may be a server cluster, a cloud server, or the like. It can be understood that the client is the end that interacts directly with the user: it receives the interaction operations input by the user and displays rendering results to the user, while the server communicates with the client to provide part of the data-processing capability, for example providing the client with the data required for rendering based on the user interaction feedback received by the client.
Referring to a flowchart of a scene rendering method shown in fig. 1, the method specifically includes steps 101 to 103.
In step 101, a text-based first model file is obtained in response to a target instruction indicating to display the first scene model.
The target instruction may be generated based on an input operation of a user, for example, when the user clicks an option for displaying a building park at a client, the client sends the target instruction to the cloud server based on the clicking operation of the user, and indicates that the server needs to render a first scene model representing the building park at the client.
The first scene model comprises at least one projection graphic representing a projection of an entity object in the target entity scene. For example, the target entity scene represented by the first scene model is a building park, and the entity objects in the building park may include at least one building, the floors in each building, the devices on the floors, and so on. A projection graphic is a planar graphic of the target entity scene at one viewing angle; it can be understood that the projection graphic is not required to be strictly identical to the projection of the entity object and may be a simplified projection. For example, referring to fig. 2, a schematic diagram of a floor represented by three planes, a pseudo-3D (2.5D visual effect) floor model scene is constructed by creating three simple fixed-angle planes, each floor being a three-plane model. Further, referring to fig. 3, a building can be formed by stacking multiple floors: floors are stacked into a building, and buildings are assembled into a park. Within a floor, several rooms can be created by constructing surfaces, and the rooms together constitute the floor; within a room, the devices in the room can likewise be constructed from surfaces, or the decorations in the room can be represented by other graphic elements, and so on.
In some implementations, the text-based first model file may be in the SVG (Scalable Vector Graphics) format; an SVG file describes two-dimensional vector graphics in an XML-based markup language. An SVG file mainly stores descriptive information such as the geometric shapes, colors and paths of graphic elements, expressed by a series of simple, direct tags and attributes: the tags indicate the types of elements, such as vector graphic types (points, lines, rectangles, circles and other specific types), text, pixel images, animations and patterns, while the attributes describe the positions, colors, paths and so on of the elements. An SVG file can conveniently be combined with technologies such as JavaScript and CSS to achieve interactive effects and animations, and various sensor data, business data and the like can be combined with the graphic elements to achieve visual display of the data.
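To make the format concrete, the following is a minimal sketch of what such a text-based model file might contain, describing one floor as three fixed-angle planes for the pseudo-3D effect of fig. 2; all coordinates, identifiers and colors are illustrative assumptions, not part of the embodiments above.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of a text-based model file: one floor drawn as three
     fixed-angle planes for a pseudo-3D (2.5D) effect, as in fig. 2.
     All coordinates, ids and colors are illustrative. -->
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <g id="floor-1">
    <!-- top face of the floor slab -->
    <polygon points="100,100 300,100 340,140 140,140" fill="#cfd8dc"/>
    <!-- left side face -->
    <polygon points="100,100 140,140 140,170 100,130" fill="#90a4ae"/>
    <!-- front side face -->
    <polygon points="140,140 340,140 340,170 140,170" fill="#b0bec5"/>
  </g>
</svg>
```

A file of this kind stays a few hundred bytes of readable text, and stacking several such groups with a vertical offset yields the building of fig. 3.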
Step 102, acquiring the monitoring data of a first entity object according to the configuration information, in the first model file, for the first projection graphic in the first scene model.
The first entity object corresponds to the first projection graphic in the target entity scene. The configuration information represents the correspondence between the first projection graphic and the first entity object and the manner of acquiring the monitoring data of the first entity object.
Step 103, rendering the first scene model according to the first model file and the monitoring data of the first entity object.
After the monitoring data of the first entity object is obtained, the rendering effect of the first projection graphic is determined based on that monitoring data. The configuration information provides the association between the monitoring data of the first entity object and the rendering effect of the first projection graphic. Taking an SVG file as an example, the file contains elements such as vector graphics, paths, text and images. For some example rendering effects, refer to fig. 4: a building park 40 includes two buildings, where a building 41 includes two floors, namely a floor 401 and a floor 402; the floor 402 is always displayed, while the floor 401 is displayed when the building monitors a fire alarm or the like and hidden in the default normal state. A certain floor 403 in the building includes a layer 404 and a layer 405; the layer 405 is always displayed by default, and the layer 404 is displayed when the monitoring data of the floor falls within a preset range (for example, the temperature is above 50) and hidden in other states. Alternatively, referring to fig. 5, an alarm icon 502 is displayed beside a device projection graphic 501.
In this way, in response to the target instruction indicating to display the first scene model, the text-based first model file is obtained; the monitoring data of the first entity object is acquired according to the configuration information, in the first model file, for the first projection graphic; and the first scene model is rendered according to the first model file and that monitoring data, with the rendering effect of the first projection graphic determined by the monitoring data. The rendering effect of the first projection graphic is thereby associated with the monitoring data of the first entity object, supporting interaction and mapping between the virtual scene and the real entity and realizing a digital twin model. Representing the entity objects by simplified planar projection graphics reduces rendering difficulty, and building the scene model with text reduces the size of the first model file, improves the fluency of displaying the digital twin model, and reduces the bandwidth and computing capability required to render it.
In some embodiments, performing step 102, that is, acquiring the monitoring data of the first entity object according to the configuration information, in the first model file, for the first projection graphic in the first scene model, may include: determining a data acquisition interface for the monitoring data of the first entity object according to the configuration information of the first projection graphic, and acquiring the monitoring data of the first entity object based on that data acquisition interface. For example, the first entity object may be a floor, the monitoring data of the first entity object may be the temperature monitored by a temperature sensor installed on the floor, and the data acquisition interface is used to obtain the temperature data monitored by that sensor. Providing the data acquisition interface of the monitoring data in the configuration information improves rendering efficiency.
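As an illustration of how configuration information can name a data acquisition interface, the following sketch stores the interface address in a custom data-interface attribute and polls it from embedded script; the attribute name, the URL and the response shape are all hypothetical assumptions, not a prescribed format.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- The custom data-interface attribute carries the configured data
       acquisition interface of the bound entity object (a floor here). -->
  <g id="floor-3" data-interface="/api/monitor/floor-3/temperature">
    <polygon points="100,100 300,100 340,140 140,140" fill="#cfd8dc"/>
  </g>
  <script><![CDATA[
    const g = document.getElementById('floor-3');
    // Determine the interface from the configuration information,
    // then acquire the monitoring data through it.
    async function refresh() {
      const res = await fetch(g.getAttribute('data-interface'));
      const { temperature } = await res.json(); // assumed response shape
      // Map the monitoring data onto a rendering effect.
      g.querySelector('polygon')
       .setAttribute('fill', temperature > 50 ? '#e53935' : '#cfd8dc');
    }
    refresh();
    setInterval(refresh, 5000); // poll every 5 s
  ]]></script>
</svg>
```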
In some embodiments, the first projection graphic comprises a plurality of layers, and the first scene model is rendered according to the first model file and the monitoring data of the first entity object.
The first projection graphic may include a plurality of superimposed layers, each layer including at least one element, where the rendering effect of a first layer is rendered according to the monitoring data of the first entity object. Rendering effects may include displaying or hiding the first layer; for example, parts of a layer may be hidden in some states. Referring to fig. 4, a red layer 402 is superimposed on the projection graphic 401 of a floor when the temperature is above 50; otherwise, when the temperature is not above 50, the red layer 402 is hidden. The first layer may also include text elements associated with the monitoring data of a device; referring to fig. 5, parameter information such as the three-phase current is displayed on the projection graphic of the device, and the three-phase current parameters change in real time according to the monitoring data.
By grouping the elements of the first projection graphic into a plurality of layers, different types of information or elements can be assigned to different layers, so complex scenes are displayed more clearly and the content of each layer can be updated independently: the system can re-render only the layers bound to the monitoring data as needed, without re-rendering the whole graphic, reducing unnecessary computation and rendering resource consumption.
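A minimal sketch of such a layered graphic follows: a base layer that is always displayed, a first layer bound to the monitoring data that is shown only while the temperature exceeds 50, and a bound text element, so that an update re-renders only the bound parts. The identifiers and the update entry point are illustrative assumptions.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Base layer: always displayed. -->
  <g id="floor-base">
    <polygon points="100,100 300,100 340,140 140,140" fill="#cfd8dc"/>
  </g>
  <!-- First layer, bound to the monitoring data: hidden by default and
       shown only while the monitored temperature exceeds 50. -->
  <g id="floor-alarm" display="none">
    <polygon points="100,100 300,100 340,140 140,140"
             fill="red" fill-opacity="0.5"/>
  </g>
  <!-- Text element bound to the same monitoring data. -->
  <text id="floor-temp" x="200" y="125" font-size="12">-- °C</text>
  <script><![CDATA[
    // Only the bound layer and text are re-rendered on new data;
    // the base layer is never touched.
    function onMonitoringData(temperature) {
      document.getElementById('floor-alarm')
              .setAttribute('display', temperature > 50 ? 'inline' : 'none');
      document.getElementById('floor-temp').textContent = temperature + ' °C';
    }
    onMonitoringData(62); // example update
  ]]></script>
</svg>
```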
In some embodiments, after performing step 103 to render the first scene model based on the first model file and the monitoring data of the first entity object, the method may further include: listening, based on the configuration information in the first model file for a second projection graphic in the first scene model, for a first interaction operation on the second projection graphic and/or for a target event associated with a second entity object. The first interaction operation includes, but is not limited to, a single click, double click, hover, slide, drag, entering a preset key combination, touch, voice input, a user gesture captured by a camera, and the like. Target events include, but are not limited to, monitoring data exceeding a preset threshold, a moving object detected entering a preset area, an alarm event, and so on. The second projection graphic represents the projection of the second entity object in the target entity scene.
In response to the first interaction operation and/or the target event, at least one of the following rendering operations is performed (a sketch combining two of these operations follows the list):
The second layer in the second projection graphic is displayed or hidden. For example, when a smoke alarm event is detected on a floor, a red layer is superimposed and displayed on the projection graphic of the floor to indicate the smoke alarm.
The attribute value of the second projection graphic is changed, and the second projection graphic is re-rendered according to the changed attribute value. For example, when the mouse is detected hovering over a floor, the outline color (an attribute) of that floor is changed from white to black.
The second projection graphic is hidden. For example, a mouse gesture dragging a preset track in the display area of a building's projection graphic hides the projection graphic of the building.
The monitoring data of the second entity object is acquired and displayed. For example, referring to fig. 5, on the projection graphic 501 of a device on a floor, the monitored parameter data of the three-phase current is displayed.
A preset icon is popped up; after being triggered, the preset icon expands and displays the monitoring data of the second entity object. For example, referring to fig. 5, single-clicking the projection graphic 501 of a device pops up an alarm icon 502 beside it, and the alarm icon 502 can be triggered to expand a detail page 503 displaying the monitoring data of the second entity object.
A hidden third projection graphic in the first scene model is rendered. The third projection graphic represents a projection of a third entity object in the target entity scene, the third entity object being associated with the second entity object. It can be understood that part of the projection graphics in the first scene model may be hidden: for example, in the scene model of a building park, only the individual buildings and the floors in the buildings are displayed in the default state, as shown in fig. 3, and after the user clicks a floor of a building, an overhead view of the room structure of that floor is rendered. The third projection graphic may be rendered directly above all the layers displayed before the click operation was received, or all content displayed before the click may be hidden so that only the third projection graphic is shown.
A text-based second model file is acquired, and a second scene model is rendered according to the second model file. For example, the first model file includes an aerial view of several building parks; after one of the parks is clicked, the second model file corresponding to that park is obtained and the second scene model of the park is rendered.
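The following sketch combines two of the rendering operations above for the example of fig. 5: a single click on a device graphic pops up a preset icon, and triggering the icon stands in for expanding the detail page. The element identifiers and geometry are hypothetical.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Device projection graphic (cf. graphic 501 of fig. 5). -->
  <g id="device-501" cursor="pointer">
    <rect x="150" y="120" width="60" height="40" fill="#78909c"/>
  </g>
  <!-- Preset alarm icon, popped up beside the device on click. -->
  <g id="alarm-icon" display="none" cursor="pointer">
    <circle cx="230" cy="115" r="10" fill="orange"/>
  </g>
  <script><![CDATA[
    // First interaction operation: a single click pops up the icon.
    document.getElementById('device-501').addEventListener('click', () => {
      document.getElementById('alarm-icon').setAttribute('display', 'inline');
    });
    // Triggering the icon expands the monitoring data of the entity
    // object (a placeholder stands in for the detail page 503).
    document.getElementById('alarm-icon').addEventListener('click', () => {
      console.log('expand detail page with monitoring data of device-501');
    });
  ]]></script>
</svg>
```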
Based on the above examples, some application scenarios of the embodiments of the present application are as follows: the target entity scene is a building park, and the entity objects include the buildings constituting the park, the floors in the buildings, and the devices on the floors; the projection graphic of a building is formed by stacking the projection graphics of the floors in the building, and the projection graphics of the devices on a floor are displayed after a second interaction operation on the projection graphic of the floor is monitored.
It can be understood that the different rendering modes described above may be combined. In one specific implementation, an attribute may be added to one layer in each floor or each building and given the value behavior=alarm,hover, indicating both alarm interaction and mouse-hover selection. After these parameters are set, when the mouse moves into the area of a floor or building element, or when the space (entity object) associated with the layer group to which the layer belongs has an alarm event, the layer carrying the behavior parameter changes its attributes according to the specific attribute values set on it: after the mouse moves in, the stroke attribute changes to a yellow stroke, and when an alarm event occurs, the fill changes from the default transparent color to a red semi-transparent color.
Through these various rendering operations and their combinations, a rich operation experience can be provided for interaction, and different rendering effects can be used as prompts based on target events, improving the flexibility of the scene model.
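As one possible reading of the combined behavior=alarm,hover setting described above, the sketch below reacts to mouse hover by switching the stroke to yellow and to an alarm event by switching the fill from transparent to red semi-transparent; the behavior attribute is custom markup, and the event delivery (the onAlarm call) is an assumed placeholder.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Layer carrying the combined behavior attribute; the attribute is
       custom markup read by the rendering logic, not standard SVG. -->
  <polygon id="floor-overlay" behavior="alarm,hover" pointer-events="all"
           points="100,100 300,100 340,140 140,140"
           fill="transparent" stroke="white"/>
  <script><![CDATA[
    const layer = document.getElementById('floor-overlay');
    // "hover": the stroke turns yellow while the mouse is inside.
    layer.addEventListener('mouseenter', () => layer.setAttribute('stroke', 'yellow'));
    layer.addEventListener('mouseleave', () => layer.setAttribute('stroke', 'white'));
    // "alarm": on an alarm event for the bound space, the fill changes
    // from the default transparent to red semi-transparent.
    function onAlarm(active) {
      layer.setAttribute('fill', active ? 'rgba(229,57,53,0.5)' : 'transparent');
    }
    onAlarm(true); // example alarm event
  ]]></script>
</svg>
```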
In some embodiments, after the text-based first model file is acquired, a tree structure diagram of the first scene model may further be displayed according to the first model file. Referring to fig. 6, the tree structure diagram represents the hierarchical relation of the entity objects in the target entity scene; in response to a third interaction operation on a fourth entity object in the tree structure diagram, a fourth projection graphic representing the projection of the fourth entity object is rendered.
In a specific implementation based on SVG files, attributes may be added, and attribute values set, on the group tags corresponding to buildings or floors: name=space, spaceName=space name, spaceId=space ID. The parameter assignments in the following table correspond to the tree hierarchy derived in fig. 6.
Space ID      | Space name
1735019390207 | Park name
1735019399241 | Building
1735019414757 | Floor 1
1735019414758 | Floor 2
1735019414759 | Floor 3
Further, a mouse-behavior jump is set on the added layer-group tag; for example, clicking a certain level jumps to the scene model of the corresponding level, so that the details of different levels in the scene can be viewed quickly and conveniently.
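A sketch of such nested space groups, with the attributes from the table above and a click handler standing in for the level jump, might look as follows; the jump itself is only logged here, since the actual navigation mechanism is not specified above.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Space hierarchy carried by attributes on group tags; a tree
       structure diagram can be derived by walking these groups. -->
  <g name="space" spaceName="Park name" spaceId="1735019390207">
    <g name="space" spaceName="Building" spaceId="1735019399241">
      <g name="space" spaceName="Floor 1" spaceId="1735019414757">
        <polygon points="100,100 300,100 340,140 140,140" fill="#cfd8dc"/>
      </g>
    </g>
  </g>
  <script><![CDATA[
    // Clicking a space jumps to the scene model of that level; the
    // navigation itself is only logged here.
    for (const g of document.querySelectorAll('g[name="space"]')) {
      g.addEventListener('click', e => {
        e.stopPropagation(); // only the innermost clicked space reacts
        console.log('jump to scene of space', g.getAttribute('spaceId'));
      });
    }
  ]]></script>
</svg>
```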
In some embodiments, parameters are added to a designated layer or layer group in the SVG-format graphic of a device model, and the bound parameter value is the device ID of the device in the system, so that the graphic and the device are associated and bound with each other, enabling real-time data presentation and interaction effects. Specifically, attributes can be added to a drawn device and attribute values set, namely name=space, assetName=device name, assetId=device ID, and the association binding between the layer and the device is completed by filling in the ID of the corresponding device. In the associated SVG graphic of the device, configuration information is added to a designated layer. As shown in fig. 7, the projection graphic 70 of an air-conditioning device includes four layers, layer 701 to layer 704; an attribute and attribute value are added for the temperature text "- °C" in layer 703, associating the temperature text with the device's temperature data interface control_temp, so that when the air-conditioning device is displayed, the current temperature data is shown in real time.
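The device-side binding could then look like the following sketch, where the assetName/assetId attributes bind the group to a device and the temperature text is associated with the control_temp interface; the endpoint path, response shape and polling interval are assumptions for illustration.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <!-- Device graphic bound to a device in the system: the binding is
       completed by filling in the device ID as the assetId value. -->
  <g name="space" assetName="Air conditioner" assetId="ac-001">
    <rect x="150" y="110" width="80" height="50" fill="#90caf9"/>
    <!-- Temperature text associated with the device's temperature data
         interface control_temp. -->
    <text id="ac-temp" x="160" y="140" font-size="12"
          data-interface="control_temp">-- °C</text>
  </g>
  <script><![CDATA[
    // While the device is displayed, show current temperature data in
    // real time (the endpoint path is an assumed example).
    async function updateTemp() {
      const text = document.getElementById('ac-temp');
      const res = await fetch('/api/asset/ac-001/' + text.getAttribute('data-interface'));
      text.textContent = (await res.json()).value + ' °C'; // assumed shape
    }
    updateTemp();
    setInterval(updateTemp, 5000);
  ]]></script>
</svg>
```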
An SVG file is an XML-based vector graphics format. SVG graphics are drawn with SVG drawing software and exported in SVG format, and setting an ID or specified parameters on a designated layer or layer group associates and binds it to a data interface. Combined with JavaScript, CSS and similar technologies, dynamic changes of graphic color, transparency, size, animation, text, style and so on can be achieved, producing a real-time status display that intuitively reflects the state changes of the physical entity. Meanwhile, mouse operations on a designated layer, such as mouse hover, click events and animation effects, can be edited, and operations such as popup windows or bubble prompts can be realized, providing rich user-interaction functions.
The technical solution of the scene rendering method provided by the embodiments of the present application can achieve the following beneficial effects:
1. Costs of data storage
1.1 Simpler file format:
Unlike 3D model formats, which must store complex information, the present application uses an XML-based vector graphics format that describes graphics textually. This means the SVG files mainly store descriptive information such as the geometry, colors and paths of the graphics, represented by a series of simple, direct tags and attributes.
1.2 Smaller file size:
Unlike 3D models of several hundred MB or even several GB, the files of the present application are usually small, thanks to the text format and to storing only the vector information of the graphics. For simple scene graphics, such as basic industrial-equipment outlines or building plan views, an SVG file may be only a few KB to tens of KB. Even a relatively complex SVG scene containing many graphic and interactive elements can usually be kept within a few hundred KB.
1.3 Lower storage resource occupancy:
Unlike 3D models with their significant hardware storage requirements, the present application occupies fewer storage resources and can store more scene files on a relatively small storage device when a large number of digital twin scene graphics are stored. This effectively reduces storage costs and reduces the need for mass-storage devices in digital twin systems that must keep many different scene versions or historical scene data.
2. Costs of data transmission
2.1 Higher network transmission efficiency:
Unlike 3D models with their long transmission times, the smaller file size of the present application is a significant advantage in network transmission. In digital twin applications, when a scene graphic needs to be transmitted from the server side to a client (e.g., a browser or mobile device), the SVG file completes transmission faster. For example, in a web-based digital twin system, an SVG scene graphic may be transmitted within hundreds of milliseconds, so the user quickly sees the basic outline of the scene and can interact with it.
2.2 Lower bandwidth demand:
Unlike 3D models with their higher bandwidth requirements, the present application places relatively low demands on network bandwidth. In low-bandwidth environments, such as accessing the digital twin scene through a mobile network or old network infrastructure, SVG scene graphics can still be transmitted smoothly because their data size is small and they do not place too much burden on the network. This lets the digital twin system be used under a wider range of network conditions, expanding its range of application.
3. Interaction and dynamics
3.1 Simpler element operability:
Unlike a 3D model, which needs complex interaction and the support of a special rendering engine, every graphic element in an SVG can be independently selected, modified and operated, and developers can easily add interaction effects such as mouse hover, click events and animations to SVG graphics through JavaScript, CSS and similar technologies, realizing rich user-interaction functions.
3.2 More convenient data-driven updates, with association binding reducing development workload:
Unlike a 3D model, whose data-driven updates are more complex, the present application can conveniently be bound to data, with the attributes and styles of the graphics updated dynamically from real-time data so that the state changes of the digital twin object are reflected in real time. For example, according to the temperature data collected by a sensor, the color of the graphic representing a device in the SVG is changed dynamically, intuitively showing the temperature state of the device.
4. Cross-platform support and compatibility
4.1 Better compatibility:
Unlike 3D models with their poor compatibility, the SVG used in the present application is an open standard formulated by the W3C. Being based on the XML language, it has good cross-platform behavior and compatibility, can be displayed and used normally on various operating systems, browsers and devices, and there is no need to worry about graphic display problems caused by platform differences.
Corresponding to the scene rendering method provided by the embodiments of the present application, the embodiments of the present application further provide a scene building method. It can be understood that the scene building method is used to build the scene model used in the scene rendering method and to generate the corresponding model file, so optional implementations of the scene building method may refer to the scene rendering method described above and are not repeated here. It can be understood that the scene building method may be applied in the client or server that implements the scene rendering method; or, in some implementations, the scene rendering method and the scene building method may be implemented by different clients, for example a developer builds the scene through a development terminal, while a user opens the scene model through a client such as a browser. In some implementations, the SVG file can be created and edited to build the model with a text editor (e.g., Notepad++, Sublime Text, VSCode), specialized software, online tools, and the like.
Referring to fig. 8, the scene building method provided by the embodiment of the application includes the following steps:
Step 801, in response to a drawing operation, creating projection graphics representing the projections of the entity objects in a target entity scene;
Step 802, generating, for a first projection graphic corresponding to a first entity object in the target entity scene, configuration information for acquiring monitoring data of the first entity object;
Step 803, determining a mapping relation between the rendering effect of the first projection graphic and the monitoring data of the first entity object;
and step 804, generating a text-based first model file according to the created projection graphics, the configuration information and the mapping relation, wherein the first model file is used for rendering a first scene model representing the target entity scene.
In some embodiments, the projection graphics can be drawn by combining simple points, lines and planes, so that a 2.5D pseudo-3D scene display based on the SVG format can be produced. The configuration information may be configured on a projection graphic or on a layer/layer group within it. Referring to fig. 9, a single layer 901 is added to a certain floor 90 for the alarm and hover interaction. In its configuration page 902, the layer is given the name behavior with the added attribute value alarm,hover, indicating alarm interaction and mouse-hover selection. Referring to the configuration shown in configuration page 903, when the mouse moves into the element area, or when the space associated with the layer group to which the layer belongs has an alarm event, the layer named behavior changes its attributes: when the mouse moves in, the stroke attribute changes and a yellow stroke is displayed, and when an alarm event occurs, the fill changes from the default transparent fill color to a red semi-transparent fill color.
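Putting the pieces of steps 801 to 804 together, a generated first model file might look like the following skeleton, in which the drawn projection graphic, the configuration information for data acquisition, and the behavior mapping are all carried as plain markup; attribute names other than those given above are hypothetical.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Skeleton of a generated first model file: the drawn projection
     graphic (step 801), the configuration information for acquiring
     monitoring data (step 802) and the mapping relation expressed as a
     behavior layer (step 803) are all plain markup text (step 804). -->
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <g name="space" spaceName="Floor 1" spaceId="1735019414757">
    <!-- projection graphic created by the drawing operation -->
    <polygon points="100,100 300,100 340,140 140,140" fill="#cfd8dc"/>
    <!-- layer named behavior: transparent by default, yellow stroke on
         hover, red semi-transparent fill on an alarm event -->
    <polygon name="behavior" behavior="alarm,hover"
             data-interface="/api/monitor/floor-1/alarm"
             points="100,100 300,100 340,140 140,140"
             fill="transparent" stroke="white"/>
  </g>
</svg>
```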
Fig. 10 is a block diagram of an electronic device for implementing an embodiment of the application. As shown in fig. 10, the electronic device includes a memory 1001 and a processor 1002, and the memory 1001 stores a computer program executable on the processor 1002. The processor 1002 when executing the computer program implements the methods of the above embodiments. The number of memories 1001 and processors 1002 may be one or more. In a specific implementation, the electronic device may further include a communication interface 1003, configured to communicate with an external device, and perform data interaction transmission.
In a specific implementation, if the memory 1001, the processor 1002 and the communication interface 1003 are implemented independently, they may be connected to each other by a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on a single chip, the memory 1001, the processor 1002, and the communication interface 1003 may complete communication with each other through internal interfaces.
The embodiment of the application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the method provided in the embodiment of the application.
The embodiment of the application provides a computer program product, comprising a computer program, which when being executed by a processor, realizes the method provided in the embodiment of the application.
The embodiment of the application also provides a chip, which comprises a processor and is used for calling the instructions stored in the memory from the memory and running the instructions stored in the memory, so that the communication equipment provided with the chip executes the method provided by the embodiment of the application.
The embodiment of the application also provides a chip which comprises an input interface, an output interface, a processor and a memory, wherein the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the application embodiment.
It can be understood that the processor described above may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machines (ARM) architecture.
Further, alternatively, the memory may include a read-only memory and a random access memory. The memory may be volatile memory or non-volatile memory, or may include both. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, and so on. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are fully or partially produced. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method described in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order, depending on the functions involved.
Logic and/or steps described in the flowcharts or otherwise described herein, e.g., may be considered a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the embodiments described above may be performed by a program that, when executed, comprises one or a combination of the steps of the method embodiments, instructs the associated hardware to perform the method.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is merely an exemplary embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various changes or substitutions within the technical scope of the present application, and these should be covered in the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.