CN116016876A - Method, system and terminal based on fusion of three-dimensional geographic information and video information - Google Patents


Info

Publication number
CN116016876A
CN116016876A
Authority
CN
China
Prior art keywords
information
dimensional
video information
monitoring
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310022066.XA
Other languages
Chinese (zh)
Inventor
张辉
郭亚涛
贺洁
王海庚
杨永波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Information Planning And Design Institute Co ltd
Original Assignee
Shanxi Information Planning And Design Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Information Planning And Design Institute Co ltd filed Critical Shanxi Information Planning And Design Institute Co ltd
Priority to CN202310022066.XA priority Critical patent/CN116016876A/en
Publication of CN116016876A publication Critical patent/CN116016876A/en
Pending legal-status Critical Current

Landscapes

  • Alarm Systems (AREA)

Abstract

The application relates to the field of monitoring, and in particular to a method for fusing three-dimensional geographic information with video information, comprising the following steps: displaying a three-dimensional real-scene graph of a target position through a visual interface, wherein a plurality of monitoring points and sensor arrangement points are displayed on the three-dimensional real-scene graph; responding to a monitoring point selection instruction by displaying video information of the target monitoring point on the visual interface; and responding to a sensor arrangement point selection instruction by displaying, on the visual interface, the geographic environment information acquired at the target sensor arrangement point. Through this fusion of three-dimensional geographic information and video, a three-dimensional live-action model is matched with dynamic monitoring videos from the perspective of geographic information application, so that a 3D visual twin of the network resources is built intuitively and accurately. This improves the degree of association between the monitoring system and the sensing system, so that the whole data processing center can be managed conveniently and intuitively.

Description

Method, system and terminal based on fusion of three-dimensional geographic information and video information
Technical Field
The invention relates to the field of monitoring, in particular to a method, a system and a terminal based on fusion of three-dimensional geographic information and video information.
Background
In the daily work of enterprises, government departments and other units, a machine room serves as the data processing and data storage center, and the machine room must be monitored and managed to ensure the normal operation of the equipment in it. A common management approach is to install a monitoring system and a sensing system in the machine room to monitor and manage the equipment of the data center and its running condition.
In the prior art, equipment such as a monitoring host and a protocol gateway is usually installed in the machine room and connected to the server system of the main machine room through a network, so as to achieve real-time monitoring through video information; in addition, temperature, humidity and smoke sensors are arranged in the machine room to collect various data indexes so as to monitor the live condition of the machine-room environment.
However, some machine rooms contain many managed devices, and the machine rooms of subordinate units are scattered. Because the monitoring system and the sensing system operate independently, the degree of association between them is low, and the whole data processing center cannot be managed conveniently and intuitively.
Disclosure of Invention
In order to conveniently and intuitively manage the whole data processing center, the application provides a method, a system and a terminal based on fusion of three-dimensional geographic information and video information.
In a first aspect, the method based on three-dimensional geographic information and video information fusion provided by the application adopts the following technical scheme:
the method based on the fusion of the three-dimensional geographic information and the video information comprises the following steps:
displaying a three-dimensional real-scene graph of a target position through a visual interface, wherein a plurality of monitoring points and sensor arrangement points are displayed on the three-dimensional real-scene graph; responding to a monitoring point selection instruction, and displaying video information of a target monitoring point on the visual interface; in response to the sensor placement point selection instruction, geographic environment information acquired at the target sensor placement point is displayed on the visual interface.
By adopting the above technical scheme, the three-dimensional live-action graph of the target position is displayed on the visual interface with the monitoring points and sensor arrangement points marked on it, so the video information of a target monitoring point and the geographic environment information acquired at a target sensor arrangement point can be viewed directly on the interface. With 3D visualization as the core management means, the asset equipment of the data center and its running conditions are fused into one 3D scene, which facilitates unified monitoring, unified early warning, unified asset management and unified space planning, thereby improving the management efficiency and operation and maintenance efficiency of the machine room.
Preferably, the step displays a three-dimensional real-scene graph of the target position through a visual interface, wherein the three-dimensional real-scene graph is displayed with a plurality of monitoring points and sensor arrangement points, and the method comprises the following steps:
displaying configuration information of the target position in response to the retrieval information of the target position; responding to the selection information of the target position, and displaying the three-dimensional live-action graph of the target position on the visual interface; and identifying the monitoring point and the sensor arrangement point of the target position on the three-dimensional live-action graph through a visual interface.
By adopting the above technical scheme, a search box is displayed on the visual interface, and the user inputs search information in the search box to acquire configuration information of the target position, where the configuration information includes machine position information, the server occupation ratio of the machine room, and the like. For example, searching by park name returns all machine position information in the park and the server occupation ratio of each machine room; querying building information in the park returns the machine position information of that building; and searching by floor information returns the machine position information of the corresponding floor. This allows the user to locate a machine room conveniently and rapidly, further improving the management efficiency and operation and maintenance efficiency of the machine room.
Preferably, identifying, through the visual interface, the monitoring points and the sensor arrangement points of the target position on the three-dimensional live-action graph includes: on the three-dimensional live-action graph, identifying the monitoring points with first icons and the sensor arrangement points with second icons.
By adopting the technical scheme, the purpose of marking the monitoring points and the sensor arrangement points on the three-dimensional live-action graph is achieved, unified monitoring, unified early warning and unified asset management are convenient to build, and therefore the management efficiency and the operation and maintenance efficiency of a machine room are improved.
Preferably, the responding to the monitoring point selection instruction includes: responding to the monitoring point selection instruction, and highlighting the target monitoring point; displaying a first window on the three-dimensional live-action graph, and playing video information of the target monitoring point in the first window; marking time nodes of picture variation in video information based on the video information; and generating segment video information of the picture variation based on the time node.
By adopting the technical scheme, the real-time video, the playback video and the geographic information are unified, the purpose of real scene restoration is realized, and the user can conveniently and accurately find the time of picture change and the corresponding picture through the setting of the time node and the video information, so that the monitoring data is integrated, and the efficiency of monitoring management is improved.
Preferably, the step of generating the segment video information based on the time node includes:
and taking the node time of the time node as the list title of the segment video information.
By adopting the above technical scheme, through the list titles the user can quickly view the time points and corresponding time periods of picture changes, which solves problems such as video dispersion, fragmentation and frequent manual operation.
Preferably, the responding to the sensor arrangement point selection instruction includes, in displaying the geographical environment information acquired at the target sensor arrangement point on the visual interface: highlighting the sensor placement point in response to the sensor placement point selection instruction; and displaying a second window in the three-dimensional live-action graph, wherein the second window is internally provided with the geographical environment information at the sensor arrangement point, and the geographical environment information comprises machine room power information and environment index information.
Through adopting above-mentioned technical scheme, through the collection to computer lab power information, acquire the equipment running state in the computer lab, through the collection to computer lab environmental index information, acquire the environmental condition in the computer lab, show computer lab power information and environmental index information through visual interface, realize cross-system monitoring analysis and visual management to promote the management efficiency and the fortune dimension efficiency of computer lab.
Preferably, comparing the geographical environment information with a preset threshold range, and judging that fault information exists if the geographical environment information is not in the threshold range; and recording the fault information and generating a fault statistical report.
By adopting the technical scheme, the geographical environment information is compared with the preset threshold range, the fault is automatically checked, and the checked fault information is generated into a fault statistical report, so that a user can manage uniformly.
In a second aspect, the present application discloses a system based on three-dimensional geographic information and video information fusion, which adopts the method based on three-dimensional geographic information and video information fusion, and includes: the first display module is used for displaying the three-dimensional live-action graph of the target position through the visual interface; the second display module responds to the monitoring point selection instruction and displays the video information of the target monitoring point on the visual interface; and a third display module for displaying the geographic environment information at the target sensor arrangement point on the visual interface in response to the sensor arrangement point selection instruction.
By adopting the technical scheme, the three-dimensional live-action diagram of the target position is displayed on the visualized interface through the first display module, the monitoring points and the sensor arrangement points are displayed on the three-dimensional live-action diagram, the video information of the target monitoring points is visually seen on the visualized interface through the second display module, the geographic environment information acquired by the visual target sensor arrangement points on the visualized interface is comprehensively monitored and managed by taking the 3D visualization as an important management means, and the scattered monitoring systems and sensing systems in the data center machine room are fused in the 3D scene so as to facilitate unified monitoring, unified early warning, unified asset management and unified space planning, thereby improving the management efficiency and operation and maintenance efficiency of the machine room.
In a third aspect, the application discloses a terminal device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the method based on three-dimensional geographic information and video information fusion is adopted when the processor loads and executes the computer program.
By adopting the technical scheme, the computer program is generated by the method based on the fusion of the three-dimensional geographic information and the video information and is stored in the memory to be loaded and executed by the processor, so that the terminal equipment is manufactured according to the memory and the processor, and the use of a user is facilitated.
In a fourth aspect, the present application discloses a computer readable storage medium, which adopts the following technical scheme: the computer readable storage medium stores a computer program, and when the computer program is loaded and executed by a processor, the method based on the three-dimensional geographic information and video information fusion is adopted.
By adopting the technical scheme, the computer program is generated by the method based on the fusion of the three-dimensional geographic information and the video information and is stored in the computer readable storage medium to be loaded and executed by the processor, and the computer program is convenient to read and store by the computer readable storage medium.
Drawings
Fig. 1 is a flowchart of a method of steps S1-S5 in a method for fusing three-dimensional geographic information with video information according to an embodiment of the present application.
Fig. 2 is a flowchart of a method of steps S10-S12 in a method for fusing three-dimensional geographic information with video information according to an embodiment of the present application.
Fig. 3 is a flowchart of a method of steps S20-S23 in a method for fusing three-dimensional geographic information with video information according to an embodiment of the present application.
Fig. 4 is a flowchart of a method of steps S30-S31 in a method for fusing three-dimensional geographic information with video information according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below in conjunction with figures 1-4.
A geographic information system (Geographic Information System, GIS) can, with the support of computer hardware and software, process spatial data according to geographic coordinates or spatial positions, rapidly obtain information that meets application requirements through comprehensive multi-factor analysis, and present the processing results in the form of maps, graphs or data.
The embodiment of the application discloses a method based on three-dimensional geographic information and video information fusion, referring to fig. 1 and 2, the method based on three-dimensional geographic information and video information fusion comprises the following steps:
s1: displaying a three-dimensional real-scene graph of the target position through a visual interface, wherein a plurality of monitoring points and sensor arrangement points are displayed on the three-dimensional real-scene graph;
the method comprises the steps of establishing a network model with geographical environment information through a GIS system to establish a three-dimensional live-action graph, displaying monitoring points and sensor arrangement points on the three-dimensional live-action graph, and enabling the three-dimensional live-action graph to incline and rotate in a manual dragging mode through 360-degree multi-view real-time scene browsing of the three-dimensional live-action graph so as to facilitate user browsing.
S10: displaying configuration information of the target location in response to the retrieval information of the target location;
the method comprises the steps that a search frame is displayed on a visual interface, a user inputs search information in the search frame to obtain configuration information of a target position, the configuration information comprises machine position information, in-machine-room server occupation ratio and the like, and specifically, if all the machine position information in a park and the in-machine-room server occupation ratio and the like are obtained through searching the name of the park, all the machine position information of a building and the in-machine-room server occupation ratio and the like can be obtained through searching the building information in the park, and all the machine position information of corresponding floors and the in-machine-room server occupation ratio and the like can be obtained through searching the floor information of the park.
S11: responding to the selection information of the target position, and displaying a three-dimensional live-action diagram of the target position by a visual interface;
the user selects the target position according to the management requirement, for example, the three-dimensional real image of the corresponding target park is obtained through park name retrieval, the three-dimensional real image of the corresponding building can be obtained through inquiring building information in the park, and the three-dimensional real image of the corresponding floor can be obtained through retrieving floor information of the park.
S12: marking monitoring points and sensor arrangement points of the target position on the three-dimensional live-action graph through a visual interface;
on the three-dimensional live-action graph, the monitoring points are marked through a first icon so that a user can conveniently identify the positions of the monitoring points in the three-dimensional live-action graph, and the sensor arrangement points are marked through a second icon so that the user can conveniently identify the positions of the sensor arrangement points in the three-dimensional live-action graph;
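One way to represent the two icon types for points on the live-action graph is a small point model in which the icon follows from the point's kind. This is a hypothetical data model for illustration, not the patent's implementation; the icon file names are invented.

```python
from dataclasses import dataclass

@dataclass
class ScenePoint:
    """A point marked on the three-dimensional live-action graph."""
    name: str
    kind: str            # "monitoring" or "sensor"
    position: tuple      # hypothetical (x, y, z) coordinates in the scene

    @property
    def icon(self) -> str:
        # First icon marks monitoring points, second marks sensor arrangement points.
        return "icon_first.png" if self.kind == "monitoring" else "icon_second.png"

points = [
    ScenePoint("camera-01", "monitoring", (12.0, 3.5, 2.8)),
    ScenePoint("temp-sensor-01", "sensor", (10.2, 4.1, 1.0)),
]
```

Keeping the icon choice derived from the point's kind guarantees the two categories are always visually distinguishable on the graph.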
after identifying the monitoring points and the sensor arrangement points of the target position on the three-dimensional live-action graph through the visualized interface in the step, referring to fig. 1 and 3, the following steps are performed,
s2: responding to the monitoring point selection instruction, and displaying video information of a target monitoring point on a visual interface;
s20: responding to the monitoring point selection instruction, and highlighting a target monitoring point;
and after receiving a monitoring point selection instruction sent by a user, highlighting the selected target monitoring point. The highlighting mode of the method can be that the first icon of the target monitoring point is enlarged and highlighted, so that the first icon of the selected target monitoring point is larger than the first icons of other monitoring points, and highlighting is carried out; the first icon of the target monitoring point can be displayed in a flashing mode, and the first icons of other monitoring points can be kept in a static state so as to be highlighted, so that the position of the target monitoring point selected by a user is prompted.
S21: displaying a first window in the three-dimensional live-action graph, and playing video information of a target monitoring point in the first window;
through setting the first window and displaying the real-time monitoring video and the historical monitoring video of the target monitoring point in the first window, a user can browse the picture information of the historical monitoring video by dragging a progress bar of the historical monitoring video, so that the user can conveniently view the video information.
S22: marking time nodes of picture variation in video information based on the video information;
based on the motion detection technology, a time period in which the picture changes in the video information are identified, and a time point at which the picture starts to change is recorded as a time node for the picture change.
S23: generating segment video information of picture variation based on the time node;
Specifically, the node time of each time node is used as the list title of the corresponding segment video information, so as to identify each segment with picture variation. For example, if picture variation occurs during 9:00-9:10 and 9:15-9:42, two pieces of segment video information are recorded: 9:00 is used as the list title of the 9:00-9:10 segment, and 9:15 as the list title of the 9:15-9:42 segment, for easy viewing and management by the user.
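The titling rule in S23, in which each segment's start time becomes its list title, can be sketched as a small mapping; the function name and period format are illustrative assumptions.

```python
def segment_titles(periods):
    """Map each change period to a list entry whose title is the
    period's start time (the node time), as described in step S23."""
    return {start: f"{start}-{end}" for start, end in periods}

# The example periods from the description: 9:00-9:10 and 9:15-9:42.
titles = segment_titles([("9:00", "9:10"), ("9:15", "9:42")])
# -> {'9:00': '9:00-9:10', '9:15': '9:15-9:42'}
```

The user then scans the list titles (9:00, 9:15, ...) to jump straight to the time periods that contain picture variation.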
After the step of generating the segment video information of the picture variation, referring to fig. 1 and 4, the following steps are performed,
s3: responding to a sensor arrangement point selection instruction, and displaying geographic environment information acquired at a target sensor arrangement point on a visual interface;
s30: highlighting a sensor placement point in response to a sensor placement point selection instruction;
After a sensor arrangement point selection instruction sent by the user is received, the selected sensor arrangement point is highlighted. The highlighting may enlarge the second icon of the target sensor arrangement point so that it is larger than the second icons of the other sensor arrangement points; alternatively, the second icon of the target sensor arrangement point may flash while the second icons of the other sensor arrangement points remain static, thereby indicating the position of the target sensor arrangement point selected by the user.
S31: displaying a second window in the three-dimensional live-action graph, wherein geographic environment information at the sensor arrangement point is displayed in the second window, and the geographic environment information comprises machine room power information and environment index information;
the running state of equipment in the machine room is recorded by collecting and recording the power information of the machine room, such as the voltage, the current and the like of the equipment. The state of the environment in the machine room is recorded by collecting and recording environmental index information such as temperature, humidity, smoke concentration and the like in the machine room.
And combining the power information of the machine room and the environmental index information in the machine room, and displaying the data in a second window to synchronize the environmental live condition in the machine room, so that a user can intuitively see the running state and the environmental state of equipment in the machine room, and the management is facilitated.
S4: comparing the geographical environment information with a preset threshold range, and judging that fault information exists if the geographical environment information is not in the threshold range;
by setting a threshold range, comparing geographical environment information with a preset threshold range, such as a current threshold range, a voltage threshold range, a temperature threshold range, a humidity threshold range, a smoke concentration threshold range and the like, judging whether the collected current data is within the current threshold range, whether the collected voltage data is within the voltage threshold range, whether the collected temperature threshold range is within the temperature threshold range, whether the collected humidity data is within the humidity threshold range and whether the collected smoke concentration data is within the smoke concentration threshold range or not, if one item of data is not within the preset threshold range, judging that fault information exists, and displaying alarm information on a visual interface.
S5: recording fault information and generating a fault statistical report;
and recording the fault information, and generating a fault statistical report for recording the fault data, the time data and the position information, so that the user can check the fault information conveniently, and further, the user can analyze and troubleshoot the fault conveniently.
The implementation principle of the method based on the fusion of three-dimensional geographic information and video information disclosed by the embodiment of the application is as follows: the three-dimensional live-action graph of the target position is displayed on the visual interface, with the monitoring points and sensor arrangement points marked on it; the video information of a target monitoring point and the geographic environment information acquired at a target sensor arrangement point can be viewed directly on the interface; and with 3D visualization as the core management means, the data center asset equipment and its running conditions are comprehensively monitored and managed, so that the scattered monitoring systems and sensing systems in the data center machine rooms are fused into one 3D scene, facilitating unified monitoring, unified early warning, unified asset management and unified space planning, thereby improving the management efficiency and operation and maintenance efficiency of the machine room.
By means of a three-dimensional geographic information video fusion technology, from the perspective of geographic information application, a three-dimensional live-action model is used for matching dynamic monitoring videos, a 3D visual twin network of network resources is intuitively and accurately built, a user is better served, information islands between sensing systems and monitoring systems are broken, data application value is improved, meanwhile, accurate matching fusion is carried out with three-dimensional important scenes according to picture content of the videos, and the effect of synchronous display of the videos and the scenes in the three-dimensional important scenes is achieved.
The embodiment of the application also discloses a system based on the fusion of the three-dimensional geographic information and the video information, which comprises: the first display module displays a three-dimensional live-action diagram of the target position through a visual interface;
the second display module responds to the monitoring point selection instruction and displays video information of the target monitoring point on the visual interface;
and the third display module is used for responding to the sensor arrangement point selection instruction and displaying geographic environment information at the target sensor arrangement point on the visual interface.
The implementation principle of the system disclosed by the embodiment of the application is as follows: the first display module displays the three-dimensional live-action graph of the target position on the visual interface, with the monitoring points and sensor arrangement points marked on it; the second display module shows the video information of the target monitoring point on the visual interface; and the third display module shows the geographic environment information acquired at the target sensor arrangement point. With 3D visualization as the core management means, the data center asset equipment and its running conditions are comprehensively monitored and managed, and the scattered monitoring systems and sensing systems in the data center machine rooms are fused into one 3D scene, facilitating unified monitoring, unified early warning, unified asset management and unified space planning, thereby improving the management efficiency and operation and maintenance efficiency of the machine room.
The embodiment of the application also discloses a terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the method based on the three-dimensional geographic information and the video information fusion of the embodiment is adopted when the processor executes the computer program.
The terminal device may be a computer device such as a desktop computer, a notebook computer, or a cloud server, and the terminal device includes, but is not limited to, a processor and a memory, for example, the terminal device may further include an input/output device, a network access device, a bus, and the like.
The processor may be a central processing unit (CPU), or, depending on actual use, another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; a general-purpose processor may be a microprocessor or any conventional processor, which is not limited in this application.
The memory may be an internal storage unit of the terminal device, such as its hard disk or memory; it may also be an external storage device, such as a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card or a flash memory card equipped on the terminal device; or it may be a combination of the internal storage unit and an external storage device. The memory is used to store the computer program and other programs and data required by the terminal device, and may also temporarily store data that has been output or is to be output, which is not limited in this application.
Through the terminal device, the method based on the fusion of three-dimensional geographic information and video information of the above embodiment is stored in the memory of the terminal device and is loaded and executed on the processor of the terminal device, which is convenient for users.
The embodiment of the application also discloses a computer-readable storage medium storing a computer program, wherein, when the computer program is executed by a processor, the method based on the fusion of three-dimensional geographic information and video information of the above embodiment is adopted.
The computer program may be stored in a computer-readable medium. The computer program includes computer program code, which may be in source code form, object code form, executable file form, some intermediate form, or the like. The computer-readable medium includes any entity or device capable of carrying the computer program code: a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. The computer-readable medium includes, but is not limited to, the above components.
Through the computer-readable storage medium, the method based on the fusion of three-dimensional geographic information and video information of the above embodiment is stored in the computer-readable storage medium and is loaded and executed on a processor, which facilitates the storage and application of the method.
The foregoing description of the preferred embodiments of the present application is not intended to limit the scope of the application. Any feature disclosed in this specification (including the abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose, unless expressly stated otherwise; that is, unless expressly stated otherwise, each feature is only one example of a generic series of equivalent or similar features.

Claims (10)

1. A method based on the fusion of three-dimensional geographic information and video information, characterized by comprising the following steps:
displaying a three-dimensional live-action map of a target position through a visual interface, wherein a plurality of monitoring points and sensor arrangement points are displayed on the three-dimensional live-action map;
responding to a monitoring point selection instruction, and displaying video information of a target monitoring point on the visual interface;
in response to a sensor arrangement point selection instruction, displaying, on the visual interface, the geographic environment information collected at a target sensor arrangement point.
2. The method of claim 1, wherein the step of displaying a three-dimensional live-action map of the target location through a visual interface, the three-dimensional live-action map having a plurality of monitoring points and sensor arrangement points displayed thereon comprises:
displaying configuration information of the target position in response to the retrieval information of the target position;
responding to the selection information of the target position, and displaying the three-dimensional live-action graph of the target position on the visual interface;
and identifying the monitoring point and the sensor arrangement point of the target position on the three-dimensional live-action graph through a visual interface.
3. The method of claim 2, wherein the identifying the monitoring point and the sensor arrangement point of the target position on the three-dimensional live-action map through the visualized interface comprises:
and on the three-dimensional live-action graph, identifying the monitoring point through a first icon, and identifying the sensor arrangement point through a second icon.
4. The method based on the fusion of three-dimensional geographic information and video information according to any one of claims 1-3, wherein the displaying video information of a target monitoring point on the visual interface in response to a monitoring point selection instruction comprises:
responding to the monitoring point selection instruction, and highlighting the target monitoring point;
displaying a first window on the three-dimensional live-action graph, and playing video information of the target monitoring point in the first window;
marking, based on the video information, the time nodes at which the picture in the video information varies;
and generating segment video information of the picture variation based on the time node.
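The time-node marking and segment generation steps of claim 4 can be sketched as follows. This is a hedged, illustrative example only: a simple frame-differencing scheme with an assumed threshold and frame format, not the implementation disclosed by the application.

```python
# Illustrative sketch: mark time nodes of picture variation by frame
# differencing, then cut the recording into segments at those nodes.
# Threshold, frame rate and frame representation are assumptions.
def mark_change_nodes(frames, fps=25, threshold=30.0):
    """Return timestamps (seconds) where consecutive frames differ strongly.

    `frames` is a list of equal-length sequences of pixel intensities.
    """
    nodes = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Mean absolute difference between consecutive frames.
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            nodes.append(i / fps)
    return nodes


def segment_videos(nodes, duration):
    """Cut the recording into segment videos bounded by the change nodes."""
    bounds = [0.0] + nodes + [duration]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]


still = [0] * 4       # a dark frame
bright = [255] * 4    # a bright frame: a clear picture variation
nodes = mark_change_nodes([still, still, bright, bright], fps=1)
assert nodes == [2.0]
assert segment_videos(nodes, 4.0) == [(0.0, 2.0), (2.0, 4.0)]
```

In line with claim 5, each node time (here 2.0 s) could then serve directly as the list title of the corresponding segment video.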
5. The method of claim 4, wherein the generating the segment of video information based on the time node comprises:
and taking the node time of the time node as the list title of the segment video information.
6. The method based on the fusion of three-dimensional geographic information and video information according to any one of claims 1-3, wherein the displaying, on the visual interface, the geographic environment information collected at the target sensor arrangement point in response to the sensor arrangement point selection instruction comprises:
highlighting the target sensor arrangement point in response to the sensor arrangement point selection instruction;
and displaying a second window on the three-dimensional live-action map, wherein the geographic environment information at the sensor arrangement point is displayed in the second window, the geographic environment information comprising machine room power information and environment index information.
7. The method of three-dimensional geographic information based fusion with video information of claim 6, further comprising:
comparing the geographic environment information with a preset threshold range, and determining that fault information exists if the geographic environment information is not within the threshold range;
and recording the fault information and generating a fault statistical report.
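The fault-judgment and statistics steps of claim 7 can be sketched as follows. This is an illustrative example only: the metric names and threshold ranges are assumptions for the sketch, not values taken from the disclosure.

```python
# Illustrative sketch of claim 7: compare readings against preset threshold
# ranges, record faults, and aggregate them into a statistical report.
# Metric names and ranges below are assumed for the example only.
THRESHOLDS = {"temp_c": (18.0, 27.0), "humidity_pct": (40.0, 60.0)}


def check_faults(reading):
    """Compare each reading against its preset range; collect out-of-range fields."""
    faults = []
    for key, value in reading.items():
        low, high = THRESHOLDS.get(key, (float("-inf"), float("inf")))
        if not low <= value <= high:
            faults.append({"metric": key, "value": value, "range": (low, high)})
    return faults


def fault_report(fault_log):
    """Aggregate recorded faults into a simple per-metric fault count report."""
    report = {}
    for fault in fault_log:
        report[fault["metric"]] = report.get(fault["metric"], 0) + 1
    return report


log = check_faults({"temp_c": 30.5, "humidity_pct": 55.0})
assert [f["metric"] for f in log] == ["temp_c"]   # only temperature is out of range
assert fault_report(log) == {"temp_c": 1}
```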
8. A system based on the fusion of three-dimensional geographic information and video information, characterized in that it uses the method based on the fusion of three-dimensional geographic information and video information according to any one of claims 1-7, the system comprising:
the first display module is used for displaying the three-dimensional live-action graph of the target position through the visual interface;
a second display module for displaying, in response to a monitoring point selection instruction, video information of the target monitoring point on the visual interface;
and a third display module for displaying the geographic environment information at the target sensor arrangement point on the visual interface in response to the sensor arrangement point selection instruction.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and capable of running on the processor, characterized in that the method based on the fusion of three-dimensional geographic information and video information according to any one of claims 1-7 is adopted when the computer program is loaded and executed by the processor.
10. A computer readable storage medium having a computer program stored therein, wherein the computer program, when loaded and executed by a processor, employs the method of fusing three-dimensional geographic information with video information according to any one of claims 1-7.
CN202310022066.XA 2023-01-07 2023-01-07 Method, system and terminal based on fusion of three-dimensional geographic information and video information Pending CN116016876A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310022066.XA CN116016876A (en) 2023-01-07 2023-01-07 Method, system and terminal based on fusion of three-dimensional geographic information and video information

Publications (1)

Publication Number Publication Date
CN116016876A true CN116016876A (en) 2023-04-25

Family

ID=86020880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310022066.XA Pending CN116016876A (en) 2023-01-07 2023-01-07 Method, system and terminal based on fusion of three-dimensional geographic information and video information

Country Status (1)

Country Link
CN (1) CN116016876A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779677A (en) * 2021-09-10 2021-12-10 贵州华泰智远大数据服务有限公司 Digital twin data rapid generation platform based on geographic position information service

Similar Documents

Publication Publication Date Title
US8982130B2 (en) Cluster mapping to highlight areas of electrical congestion
US9251608B2 (en) Data visualization of a datacenter
US7660883B2 (en) Network monitoring device
CN103532780B (en) O&M for IT field monitors integral system and integrated monitoring method
CN109388791B (en) Dynamic diagram display method and device, computer equipment and storage medium
CN111158983A (en) Integrated operation and maintenance management system
CN105373460B (en) The alarm method and system of monitoring message
CN106095651A (en) A kind of 3D virtual computer room method for managing and monitoring and system
CN113407764B (en) Audio and video equipment state graphical display equipment and method based on physical position
WO2015099669A1 (en) Smart shift selection in a cloud video service
CN112688806A (en) Method and system for presenting network assets
CN112035556A (en) Data center cabinet management method and device and electronic equipment
CN112699447A (en) Three-dimensional visual management method for intelligent water affair equipment
CN116016876A (en) Method, system and terminal based on fusion of three-dimensional geographic information and video information
CN113872681A (en) Optical cable supervision method and system of mobile terminal and storage medium
CN110609864B (en) Chemical supply chain-oriented data visualization management method and device
CN119697218A (en) Unified coding method and system for collecting park equipment assets using the Internet of Things
CN114063546A (en) Method, device and medium for checking working state of equipment
CN112989150A (en) Operation and maintenance diagram acquisition method, device, equipment and readable storage medium
CN117349493B (en) Method and device for visual display of power system data based on cim model
CN115061872B (en) Alarm record generation method, device, alarm equipment and storage medium
CN117931564A (en) Operation and maintenance monitoring method and device, electronic equipment and storage medium
CN118113908A (en) 3D twin intelligent campus visualization system and method
CN117076543A (en) Performance measurement method and device, cloud native platform and computer equipment
CN116880840A (en) Service interface generation method, service interface generation device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination