
CN114565819B - A remote control method, apparatus, device and storage medium - Google Patents

A remote control method, apparatus, device and storage medium

Info

Publication number
CN114565819B
CN114565819B (granted publication of application CN202210190753.8A)
Authority
CN
China
Prior art keywords
controlled
target
identifier
information
scene image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210190753.8A
Other languages
Chinese (zh)
Other versions
CN114565819A (en)
Inventor
尹越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210190753.8A
Publication of CN114565819A
Application granted
Publication of CN114565819B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 40/00 IoT characterised by the purpose of the information processing
    • G16Y 40/10 Detection; Monitoring
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 40/00 IoT characterised by the purpose of the information processing
    • G16Y 40/30 Control
    • G16Y 40/35 Management of things, i.e. controlling in accordance with a policy or in order to achieve specified objectives
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Selective Calling Equipment (AREA)

Abstract

This disclosure provides a remote control method, apparatus, device, and storage medium. The method includes: acquiring a scene image corresponding to a preset area and a recognition result corresponding to the scene image, where the recognition result includes device information of each of at least one device to be controlled in the preset area; sending the scene image to a terminal, where the terminal is used to display the scene image and receive an input operation instruction; receiving the operation instruction sent by the terminal and, based on the operation instruction and the device information of each device to be controlled, determining a target device to be controlled and a control instruction corresponding to the target device to be controlled; and sending the control instruction to the target device to be controlled, where the control instruction instructs the target device to be controlled to execute the action corresponding to the control instruction.

Description

Remote control method, apparatus, device and storage medium
Technical Field
The embodiments of the present disclosure relate to the field of data processing, and in particular to a remote control method, apparatus, device, and storage medium.
Background
Currently, intelligent monitoring is mostly unidirectional: video streams are transmitted from a camera to a display screen, or stored on a hard disk for human viewing or program analysis. In practical application scenes such as high-temperature environments, cold storage, laboratories with strict environmental requirements, and hazardous environments, management personnel often need to be fully equipped with protective gear before entering the site to perform the relevant operations. This generally consumes considerable manpower and material resources and carries a certain risk; a remote control scheme is therefore needed to solve the above technical problems.
Disclosure of Invention
The embodiment of the disclosure provides a remote control method, a remote control device, remote control equipment and a storage medium.
In a first aspect, a remote control method is provided, including:
acquiring a scene image corresponding to a preset area and an identification result corresponding to the scene image, wherein the identification result comprises device information of each device to be controlled in at least one device to be controlled in the preset area;
sending the scene image to a terminal, wherein the terminal is used for displaying the scene image and receiving an input operation instruction;
receiving the operation instruction sent by the terminal, and determining a target device to be controlled and a control instruction corresponding to the target device to be controlled based on the operation instruction and device information of each device to be controlled;
and sending the control instruction to the target device to be controlled, wherein the control instruction is used for indicating the target device to be controlled to execute the action corresponding to the control instruction.
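The four steps above can be sketched end to end. This is a hypothetical illustration only: the five function parameters stand in for real network I/O and recognition logic, and none of the names come from the disclosure itself.

```python
# Hypothetical end-to-end sketch of steps S101-S104. The five function
# parameters are placeholders for real network I/O and the recognition
# logic; none of these names come from the disclosure itself.

def remote_control_round(receive_from_camera, send_to_terminal,
                         receive_from_terminal, send_to_device,
                         determine_target):
    # S101: scene image plus recognition result (device info per device)
    scene_image, recognition_result = receive_from_camera()
    # S102: the terminal displays the image and collects the user's input
    send_to_terminal(scene_image)
    # S103: map the operation instruction to a target device and a
    # control instruction using the per-device information
    operation_instruction = receive_from_terminal()
    target, control = determine_target(operation_instruction,
                                       recognition_result)
    # S104: the target device executes the action named by the instruction
    send_to_device(target, control)
    return target, control
```

Keeping the transports injectable like this also makes the flow easy to exercise with stubs before any hardware is attached.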
In some embodiments, the operation instruction includes a selection sub-instruction, the receiving the operation instruction sent by the terminal, and determining a target device to be controlled based on the operation instruction and device information of each device to be controlled, including:
Receiving a selection sub-instruction sent by the terminal;
analyzing the selection sub-instruction to obtain selection information;
And determining the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each device to be controlled.
In some embodiments, the selection information includes input selection coordinates of the target device to be controlled, the device information includes relative coordinates of the devices to be controlled, and determining the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each of the devices to be controlled includes:
And determining the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each device to be controlled.
In some embodiments, the selection coordinates include first selection coordinates of the target device to be controlled based on the image coordinate system, the relative coordinates include first relative coordinates of each device to be controlled based on the image coordinate system, and the determining the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each device to be controlled includes:
The target device to be controlled is determined in the at least one device to be controlled based on a first selected coordinate of the target device to be controlled based on the image coordinate system and a first relative coordinate of each of the devices to be controlled based on the image coordinate system.
In some embodiments, the selection coordinates include a second selection coordinate of the target device to be controlled based on the region coordinate system, the determining the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each of the devices to be controlled includes:
The target device to be controlled is determined in the at least one device to be controlled based on a second selected coordinate of the target device to be controlled based on the region coordinate system and a second relative coordinate of each of the devices to be controlled based on the region coordinate system.
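One plausible realization of the coordinate-based selection in the embodiments above is a nearest-neighbor match between the user's selection coordinates and each device's relative coordinates, assuming both are already expressed in the same coordinate system (image or region). The function and field names below are illustrative, not the patent's.

```python
# Illustrative nearest-neighbor selection of the target device to be
# controlled (one possible reading of S2031); names are assumptions.
import math

def select_target(selection, device_coords):
    """selection: (x, y) chosen by the user on the displayed scene image.
    device_coords: {device_id: (x, y)} relative coordinates from the
    recognition result, in the same coordinate system as `selection`.
    Returns the id of the closest device to be controlled."""
    return min(device_coords,
               key=lambda device_id: math.dist(selection,
                                               device_coords[device_id]))
```

For example, a click near a device's detected position resolves to that device even if the click is a few pixels off.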
In some embodiments, the method further includes converting a first relative coordinate of each device to be controlled based on the image coordinate system to a second relative coordinate of each device to be controlled based on the region coordinate system based on a coordinate conversion relationship between the image coordinate system and a region coordinate system corresponding to the preset region;
The sending the scene image to a terminal includes: sending the scene image and the second relative coordinates of each device to be controlled based on the region coordinate system to the terminal, where the second relative coordinates are used to indicate the real position of each device to be controlled in the preset area while the terminal displays the scene image.
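For a planar preset area, the coordinate conversion relationship between the image coordinate system and the region coordinate system can be modeled as a 3x3 homography obtained offline by calibration. The sketch below assumes such a matrix `H` is available; the disclosure does not prescribe this particular model.

```python
# Sketch of converting a first relative coordinate (image coordinate
# system) into a second relative coordinate (region coordinate system)
# via an assumed 3x3 homography H, represented as a nested list.

def image_to_region(point, H):
    """Apply homography H to an (x, y) image point and return the
    corresponding (x, y) point in the region coordinate system."""
    x, y = point
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)  # perspective divide
```

With the identity matrix the point is unchanged; a calibrated `H` maps pixel positions to real positions in the preset area.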
In some embodiments, the selection information includes an input target identification of the target device to be controlled, the relative coordinates include a first relative coordinate of the device to be controlled based on the image coordinate system, and the determining the target device to be controlled from the at least one device to be controlled based on the selection information and device information of each device to be controlled includes:
Determining a first visual identifier corresponding to each device to be controlled based on a first relative coordinate corresponding to each device to be controlled and a first identifier coordinate corresponding to at least one first visual identifier obtained by performing identifier detection on the scene image;
and determining the target device to be controlled in the at least one device to be controlled based on the target identifier and the first visual identifier corresponding to each device to be controlled.
In some embodiments, the determining the first visual identifier corresponding to each device to be controlled based on the first relative coordinates corresponding to each device to be controlled and the first identifier coordinates corresponding to at least one first visual identifier obtained by performing identifier detection on the scene image includes any one of the following:
Performing identifier detection on the scene image based on a preset identifier recognition model to obtain a first identifier coordinate corresponding to each visual identifier in at least one visual identifier corresponding to the scene image, and determining the first visual identifier corresponding to each device to be controlled based on the first identifier coordinate corresponding to each first visual identifier and the first relative coordinate corresponding to each device to be controlled;
Dividing the scene image based on the first relative coordinates corresponding to each device to be controlled to obtain a device sub-image corresponding to each device to be controlled, and performing identifier detection on the device sub-image corresponding to each device to be controlled based on a preset identifier recognition model to determine the first visual identifier corresponding to each device to be controlled.
In some embodiments, the determining, in the at least one device to be controlled, the target device to be controlled based on the target identifier and the first visual identifier corresponding to each device to be controlled includes:
Under the condition that the target identifier is a target visual identifier, determining a device to be controlled corresponding to a first visual identifier matched with the target visual identifier as the target device to be controlled;
And under the condition that the target identifier is a target device identifier, determining a device to be controlled corresponding to a first visual identifier matched with the target device identifier as the target device to be controlled based on a mapping relation between the device identifier and the visual identifier.
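The two cases above amount to a small lookup: either the target identifier already is a visual identifier, or it is a device identifier that the mapping relation first translates into one. A hedged sketch, with all names assumed:

```python
# Illustrative resolution of the target identifier into the target
# device to be controlled; dictionary shapes are assumptions.

def resolve_target(target_id, visual_by_device, device_to_visual):
    """visual_by_device: {device_id: first visual identifier detected
    for that device in the scene image}.
    device_to_visual: the preset mapping relation between device
    identifiers and visual identifiers.
    Returns the matching device_id, or None if nothing matches."""
    # Case 2: a device identifier is translated via the mapping relation;
    # case 1 (already a visual identifier) falls through unchanged.
    wanted = device_to_visual.get(target_id, target_id)
    for device_id, visual_id in visual_by_device.items():
        if visual_id == wanted:
            return device_id
    return None
```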
In some embodiments, the operation instruction includes an action sub-instruction, the receiving and analyzing the operation instruction sent by the terminal, and determining a control instruction corresponding to the target device to be controlled includes:
Receiving an action sub-instruction sent by the terminal;
Analyzing the action sub-instruction to obtain action information;
And generating a control instruction corresponding to the target device to be controlled based on the action information.
In some embodiments, the generating, based on the action information, a control instruction corresponding to the target device to be controlled includes:
Converting the motion information into target motion information based on a conversion relationship between an image coordinate system and a region coordinate system in the case that the motion information is based on the image coordinate system;
and generating a control instruction corresponding to the target device to be controlled based on the target action information.
In some embodiments, the action sub-instruction includes at least one of a move sub-instruction, a steer sub-instruction, and a job sub-instruction, wherein,
The motion information carried by the moving sub-instruction is moving information, and the moving information comprises at least one of a target moving position, a moving speed and a moving track;
The action information carried by the steering sub-instruction is steering information, and the steering information comprises at least one of a target steering angle, a steering speed and a steering track;
the action information carried by the job sub-instruction is job information, and the job information includes at least one of a job state, a job target, a job time, and job content.
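As an illustration only, the three action sub-instructions and their optional fields might be carried as payloads like the following; the field names and the validator are assumptions, not the patent's wire format.

```python
# Hypothetical payload shapes for the move, steer, and job
# sub-instructions described above.

MOVE = {"type": "move",
        "target_position": (12.0, 3.5),               # target moving position
        "speed": 0.8,                                 # moving speed
        "trajectory": [(0, 0), (6, 2), (12.0, 3.5)]}  # moving track

STEER = {"type": "steer",
         "target_angle": 90.0,             # target steering angle (degrees)
         "speed": 15.0,                    # steering speed
         "trajectory": [0.0, 45.0, 90.0]}  # steering track

JOB = {"type": "job",
       "state": "start",    # job state
       "target": "shelf-3", # job target
       "time": 120,         # job time
       "content": "pick"}   # job content

def validate_action(sub):
    """Each sub-instruction must carry at least one of its allowed
    fields, per the 'at least one of' wording above."""
    allowed = {"move": {"target_position", "speed", "trajectory"},
               "steer": {"target_angle", "speed", "trajectory"},
               "job": {"state", "target", "time", "content"}}[sub["type"]]
    return any(key in sub for key in allowed)
```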
In some embodiments, the acquiring the scene image corresponding to the preset area and the recognition result corresponding to the scene image includes any one of the following:
Receiving the scene image sent by a camera device and the recognition result corresponding to the scene image, wherein the recognition result is generated by the camera device based on the scene image;
Receiving the scene image sent by the camera device, and generating the recognition result based on the scene image.
In some embodiments, the method further comprises:
Acquiring a communication address corresponding to the target device to be controlled based on a mapping relation between the device identifier and the communication address;
The sending the control instruction to the target device to be controlled comprises the following steps:
And sending the control instruction to the target device to be controlled based on the communication address corresponding to the target device to be controlled.
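The address lookup and dispatch described above can be sketched as follows; the address table contents and the `send` transport are hypothetical.

```python
# Illustrative mapping relation between device identifiers and
# communication addresses; the addresses are made-up examples.
ADDRESS_TABLE = {"dev1": "10.0.0.21:9000",
                 "dev2": "10.0.0.22:9000"}

def dispatch(target_device_id, control_instruction, send):
    """Look up the target's communication address from the mapping
    relation, then send the control instruction over the given
    transport function."""
    address = ADDRESS_TABLE[target_device_id]
    send(address, control_instruction)
    return address
```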
In a second aspect, there is provided a remote control apparatus comprising:
the device comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring a scene image corresponding to a preset area and an identification result corresponding to the scene image, and the identification result comprises device information of each device to be controlled in at least one device to be controlled in the preset area;
the first sending module is used for sending the scene image to a terminal, wherein the terminal is used for displaying the scene image and receiving an input operation instruction;
the receiving module is used for receiving the operation instruction sent by the terminal and determining a target device to be controlled and a control instruction corresponding to the target device to be controlled based on the operation instruction and the device information of each device to be controlled;
the second sending module is used for sending the control instruction to the target device to be controlled, and the control instruction is used for indicating the target device to be controlled to execute the action corresponding to the control instruction.
In a third aspect, there is provided a remote control device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect, a computer storage medium is provided, the computer storage medium storing one or more programs executable by one or more processors to implement the steps in the above method.
In the embodiments of the present disclosure, the acquired scene image corresponding to the preset area is sent to the terminal and displayed to the user, the user's real-time operation instruction based on the scene image is received, and the corresponding control instruction is sent to the device to be controlled in the preset area. Compared with the existing approach, in which a manager must be fully equipped to enter the site and perform the relevant operations, this realizes remote control of the device to be controlled, reduces material costs, saves labor costs, and avoids the risk of the manager entering the site. Meanwhile, since the control instruction is generated by determining the target device to be controlled from the plurality of devices to be controlled in the preset area based on the recognition result corresponding to the scene image and the operation instruction received by the terminal, the control accuracy of the remote control method is improved. In addition, because the control instruction is generated from the operation instruction, the target device to be controlled can execute the action corresponding to the user's real-time operation, realizing a friendlier interaction mode.
Drawings
Fig. 1 is a schematic flow chart of a remote control method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart of a remote control method according to an embodiment of the disclosure;
fig. 3 is a schematic flow chart of a remote control method according to an embodiment of the disclosure;
Fig. 4 is a schematic flow chart of a remote control method according to an embodiment of the disclosure;
Fig. 5 is a schematic flow chart of a remote control method according to an embodiment of the disclosure;
Fig. 6 is a schematic flow chart of a remote control method according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of an alternative implementation scenario provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an alternative information flow provided by an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of a composition structure of a remote control device according to an embodiment of the disclosure;
fig. 10 is a schematic hardware entity diagram of a remote control device according to an embodiment of the present disclosure.
Detailed Description
The technical scheme of the present disclosure will be specifically described below by way of examples and with reference to the accompanying drawings. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
It should be noted that, in the examples of the present disclosure, "first", "second", and the like are used to distinguish similar objects, and are not necessarily used to describe the order or precedence of the objects. In addition, the embodiments of the present disclosure may be arbitrarily combined without any collision.
Fig. 1 is a schematic flow chart of a remote control method according to an embodiment of the disclosure, where the method includes:
S101, acquiring a scene image corresponding to a preset area and an identification result corresponding to the scene image, wherein the identification result comprises device information of each device to be controlled in at least one device to be controlled in the preset area.
The preset area is an area in a real scene; for example, it may be a warehouse, a laboratory, or a square. In this embodiment, a real-time scene image corresponding to the preset area may be acquired by a camera device disposed in the preset area. The camera device may include at least one camera assembly, and each camera assembly may be arranged in a different sub-area of the preset area so that the camera assemblies together cover every corner of the preset area. For example, when the preset area is a warehouse, a single camera assembly may be unable to cover the whole area due to obstacles such as goods and shelves, so a plurality of camera assemblies may be disposed to cover it; when the preset area is a square with a wide field of view, a single camera assembly may suffice.
In some embodiments, at least one device to be controlled exists in the preset area, and accordingly, the acquired scene image also includes a real-time image of each device to be controlled.
In some embodiments, the step S101 may be implemented by receiving the scene image and the corresponding recognition result sent by the camera device, where the recognition result is generated by the camera device based on the scene image.
In the process of acquiring the scene image corresponding to the preset area, the camera device recognizes the scene image based on a recognition algorithm stored locally on the camera device to obtain the device information corresponding to each device to be controlled, and then packs the scene image together with the device information of each device to be controlled in it and sends them to a server.
In some embodiments, the step S101 may be further implemented by receiving the scene image sent by the camera device, and generating the identification result based on the scene image.
After the camera device acquires the scene image corresponding to the preset area, it sends the scene image directly to the server as a video stream; after receiving the scene image, the server may recognize it based on a recognition algorithm stored locally on the server to obtain the device information corresponding to each device to be controlled.
S102, sending the scene image to a terminal, wherein the terminal is used for displaying the scene image and receiving an input operation instruction.
In some embodiments, the terminal is a device used on the user side, and the server may directly send the scene image to the terminal after receiving the scene image corresponding to the preset area. In the actual deployment process, the terminal can display the scene image to a user through a preset display device. Meanwhile, the terminal is also provided with an input device, and in the process of displaying the scene image, the terminal can receive an operation instruction input by a user based on the scene image.
In some embodiments, the input device may be a touch display screen. In this embodiment, the input device and the display device are the same device; while the scene image is displayed on the touch display screen, a touch operation of the user may be received through the touch display screen and used as the operation instruction input by the user. For example, the touch operation may be a click, a drag, or a box-selection operation.
In some embodiments, the input device may be a mouse. In this embodiment, while the display device displays the scene image, a mouse operation of the user may be received through the mouse and used as the operation instruction input by the user. For example, the mouse operation may be a click, a drag, or a box-selection operation.
In some embodiments, the input device may be a voice input device. While the display device displays the scene image, the user's control voice may be received through the voice input device and converted into control text based on speech recognition technology, and a corresponding operation instruction may then be generated based on the control text. For example, the user may say "Device A, start working"; after converting the control voice into control text, the terminal may determine, based on a preset device library, that the target device to be controlled in the control text is "Device A" and that the corresponding control instruction is "start working".
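The voice path above, after speech recognition has produced the control text, reduces to matching that text against a preset device library. A minimal sketch follows, with an assumed phrasing convention (device name first, action after); the library contents and ids are illustrative.

```python
# Illustrative split of control text into a target device and a control
# instruction against a preset device library; all values are made up.
DEVICE_LIBRARY = {"device a": "dev-a", "device b": "dev-b"}

def parse_control_text(text):
    """Return (device_id, action) if the text begins with a known
    device name, else (None, None)."""
    lowered = text.lower()
    for name, device_id in DEVICE_LIBRARY.items():
        if lowered.startswith(name):
            action = lowered[len(name):].strip(" ,")  # remainder = action
            return device_id, action
    return None, None
```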
S103, receiving the operation instruction sent by the terminal, and determining a target device to be controlled and a control instruction corresponding to the target device to be controlled based on the operation instruction and device information of each device to be controlled.
In some embodiments, after receiving an operation instruction input by a user based on the scene image, the terminal sends the operation instruction to the server, and correspondingly, after receiving the operation instruction, the server analyzes the operation instruction and determines a target device to be controlled and a control instruction corresponding to the target device to be controlled based on device information of each device to be controlled.
In some embodiments, the operation instruction may include at least one of a selection sub-instruction and an action sub-instruction. The selection sub-instruction may be used to determine the target device to be controlled from the at least one device to be controlled, and the action sub-instruction is used to determine the action to be executed by the target device to be controlled.
And S104, sending the control instruction to the target device to be controlled, wherein the control instruction is used for indicating the target device to be controlled to execute the action corresponding to the control instruction.
In the embodiments of the present disclosure, the acquired scene image corresponding to the preset area is sent to the terminal and displayed to the user, the user's real-time operation instruction based on the scene image is received, and the corresponding control instruction is sent to the device to be controlled in the preset area. Compared with the existing approach, in which a manager must be fully equipped to enter the site and perform the relevant operations, this realizes remote control of the device to be controlled, reduces material costs, saves labor costs, and avoids the risk of the manager entering the site. Meanwhile, since the control instruction is generated by determining the target device to be controlled from the plurality of devices to be controlled in the preset area based on the recognition result corresponding to the scene image and the operation instruction received by the terminal, the control accuracy of the remote control method is improved. In addition, because the control instruction is generated from the operation instruction, the target device to be controlled can execute the action corresponding to the user's real-time operation, realizing a friendlier interaction mode.
Fig. 2 is a schematic flow chart of a remote control method according to an embodiment of the disclosure, as shown in fig. 2, S103 may include S201 to S203 based on fig. 1, and will be described with reference to the steps shown in fig. 2.
S201, receiving a selection sub-instruction sent by the terminal.
S202, analyzing the selection sub-instruction to obtain selection information.
S203, determining the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each device to be controlled.
In some embodiments, the selection information includes user-entered selection coordinates of the target device to be controlled, and the device information includes relative coordinates of the device to be controlled. The above determination of the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each device to be controlled may be performed by S2031.
S2031, determining the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each device to be controlled.
In some embodiments, the selection coordinates include first selection coordinates of the target device to be controlled based on the image coordinate system, the relative coordinates include first relative coordinates of each device to be controlled based on the image coordinate system, and the determining the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each device to be controlled includes:
The target device to be controlled is determined in the at least one device to be controlled based on a first selected coordinate of the target device to be controlled based on the image coordinate system and a first relative coordinate of each of the devices to be controlled based on the image coordinate system.
In some embodiments, the selection coordinates include a second selection coordinate of the target device to be controlled based on the region coordinate system, the determining the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each of the devices to be controlled includes:
The target device to be controlled is determined in the at least one device to be controlled based on a second selected coordinate of the target device to be controlled based on the region coordinate system and a second relative coordinate of each of the devices to be controlled based on the region coordinate system.
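The coordinate-based matching of S2031 can be sketched as a nearest-neighbor search over the relative coordinates of the devices to be controlled. The function name, the dictionary layout, and the tolerance radius below are hypothetical; the disclosure itself does not prescribe a particular matching rule:

```python
import math

def match_target_device(selection, device_coords, tolerance=50.0):
    """Return the identifier of the device whose relative coordinates
    lie closest to the user's selection coordinates, or None when no
    device falls within the tolerance radius. All coordinates must be
    in the same coordinate system (image or region)."""
    best_id, best_dist = None, tolerance
    for device_id, (x, y) in device_coords.items():
        dist = math.hypot(selection[0] - x, selection[1] - y)
        if dist < best_dist:
            best_id, best_dist = device_id, dist
    return best_id
```

The same routine works for both branches above: pass first selection/relative coordinates for the image coordinate system, or second selection/relative coordinates for the region coordinate system.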
In some embodiments, the selection information includes a target identifier of the target device to be controlled entered by a user, and the relative coordinates include a first relative coordinate of the device to be controlled based on the image coordinate system. The above determination of the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each device to be controlled may be achieved through S2032 to S2033.
S2032, determining a first visual identifier corresponding to each device to be controlled based on the first relative coordinates corresponding to each device to be controlled and the first identifier coordinates corresponding to at least one first visual identifier obtained by performing identifier detection on the scene image.
Step S2032 may be implemented by performing identifier detection on the scene image based on a preset identification model to obtain a first identifier coordinate corresponding to each visual identifier in at least one visual identifier corresponding to the scene image, and then determining the first visual identifier corresponding to each device to be controlled based on the first identifier coordinate corresponding to each first visual identifier and the first relative coordinate corresponding to each device to be controlled.
Step S2032 may alternatively be implemented by dividing the scene image based on the first relative coordinates corresponding to each device to be controlled to obtain a device sub-image corresponding to each device to be controlled, and then performing identifier detection on the device sub-image corresponding to each device to be controlled based on a preset identification model to determine the first visual identifier corresponding to each device to be controlled.
S2033, determining the target device to be controlled in the at least one device to be controlled based on the target identifier and the first visual identifier corresponding to each device to be controlled.
In a case where the target identifier is a target visual identifier, the device to be controlled corresponding to the first visual identifier matched with the target visual identifier is determined as the target device to be controlled.
In a case where the target identifier is a target device identifier, the device to be controlled corresponding to the first visual identifier matched with the target device identifier is determined as the target device to be controlled based on a mapping relationship between device identifiers and visual identifiers.
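A hedged sketch of S2033 — resolving a user-entered target identifier, which may be either a visual identifier or a device identifier, against the detected first visual identifiers. All names and the dictionary shapes are assumptions for illustration:

```python
def resolve_target(target_id, device_visual_ids, device_to_visual):
    """device_visual_ids maps each device to be controlled to the
    first visual identifier detected for it in the scene image;
    device_to_visual is the preset mapping between device identifiers
    and visual identifiers. When target_id is a device identifier it
    is first mapped to its visual identifier; otherwise it is treated
    as a visual identifier directly."""
    visual_id = device_to_visual.get(target_id, target_id)
    for device, vid in device_visual_ids.items():
        if vid == visual_id:
            return device
    return None  # no device carries a matching visual identifier
```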
In the embodiment of the disclosure, the target device to be controlled is determined from a plurality of devices to be controlled in the scene image through the selection coordinates of the target device to be controlled input by the user and the relative coordinates of each device to be controlled obtained by recognizing the scene image, thereby improving the remote control precision. Moreover, the target device to be controlled can be determined from the plurality of devices to be controlled based on various kinds of information (the first selection coordinates, the second selection coordinates, the visual identifiers, and the device identifiers) of the target device to be controlled input by the user, which provides rich interaction modes for the remote control while ensuring an accurate control effect.
Referring to fig. 3, fig. 3 is a schematic flowchart of an alternative remote control method according to an embodiment of the present disclosure, taking fig. 1 as an example based on any of the foregoing embodiments, where the method further includes S301, S102 may be updated to S302, and the steps shown in fig. 3 will be described.
S301, converting a first relative coordinate of each device to be controlled based on the image coordinate system into a second relative coordinate of each device to be controlled based on the region coordinate system based on the coordinate conversion relation between the image coordinate system and the region coordinate system corresponding to the preset region.
In some embodiments, the image coordinate system is a coordinate system established based on the scene image, the first relative coordinate characterizes a relative position of the device to be controlled in the scene image, the region coordinate system is a coordinate system established based on the preset region, and the second relative coordinate characterizes a relative position of the device to be controlled in the preset region.
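Assuming a roughly top-down camera, the coordinate conversion relationship of S301 reduces to a per-axis scale and offset; a tilted camera would require a full homography instead. A minimal sketch, with calibration values that are purely illustrative:

```python
def image_to_region(coord, scale=(0.01, 0.01), offset=(0.0, 0.0)):
    """Map a first relative coordinate (image pixels) to a second
    relative coordinate (e.g. metres in the preset region). The scale
    and offset would come from calibrating the camera against known
    points in the region; they are placeholders here."""
    x, y = coord
    (sx, sy), (ox, oy) = scale, offset
    return (x * sx + ox, y * sy + oy)
```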
And S302, sending the scene image and second relative coordinates of each device to be controlled based on the region coordinate system to the terminal, wherein the second relative coordinates are used for prompting the real position of each device to be controlled in the preset region in the process of displaying the scene image by the terminal.
In some embodiments, the server sends the scene image to the terminal and simultaneously sends the second relative coordinates corresponding to each device to be controlled in the scene image to the terminal. The terminal can indicate the real position of each device to be controlled in the preset area while displaying the scene image, so that the user can conveniently learn the real positions of all the devices to be controlled and thus give accurate control instructions.
In the embodiment of the disclosure, the first relative coordinate corresponding to each device to be controlled is converted into the second relative coordinate representing the real position of the device to be controlled based on the coordinate conversion relationship between the established image coordinate system and the region coordinate system, and the second relative coordinate and the scene image are sent to the terminal together, so that the user can grasp the real positions of all the devices to be controlled, and the remote control efficiency is improved.
Referring to fig. 4, fig. 4 is a schematic flowchart of an alternative remote control method according to an embodiment of the present disclosure, based on any of the foregoing embodiments, taking fig. 1 as an example, S103 may include S401 to S403, and the steps shown in fig. 4 will be described.
S401, receiving an action sub-instruction sent by the terminal.
In some embodiments, the action sub-instruction includes at least one of a movement sub-instruction, a steering sub-instruction, and a job sub-instruction.
The action information carried by the movement sub-instruction is movement information, and the movement information includes at least one of a target movement position, a movement speed, and a movement track. The action information carried by the steering sub-instruction is steering information, and the steering information includes at least one of a target steering angle, a steering speed, and a steering track. The action information carried by the job sub-instruction is job information, and the job information may include at least one of a job state, a job target, a job time, and job content.
S402, analyzing the action sub-instruction to obtain action information.
S403, generating a control instruction corresponding to the target device to be controlled based on the action information.
In some embodiments, the generating the control instruction corresponding to the target device to be controlled based on the action information may be implemented through S4031 to S4032.
S4031, in a case where the action information is based on the image coordinate system, converting the action information into target action information based on the conversion relationship between the image coordinate system and the region coordinate system; and in a case where the action information is based on the region coordinate system, taking the action information as the target action information.
In some embodiments, in a case where a user inputs an action sub-instruction based on the scene image, the action information carried in the action sub-instruction is based on the image coordinate system. For example, while a touch display screen displays the scene image, after the user performs a touch operation on a device to be controlled in the displayed scene image, the terminal can receive an action sub-instruction based on the scene image, that is, the carried action information is based on the image coordinate system. For another example, while a display screen displays the scene image, after the user operates a mouse to control a corresponding mouse cursor over the scene image, the terminal can likewise receive an action sub-instruction based on the scene image. At this time, the action information may be converted into target action information based on the region coordinate system according to the conversion relationship between the image coordinate system and the region coordinate system, and a control instruction generated based on the target action information can be recognized by the device to be controlled in the real scene, so that the corresponding action is performed.
In some embodiments, in a case where the user inputs an action sub-instruction based on the preset area, the action information carried in the action sub-instruction is based on the region coordinate system. For example, a control voice based on the preset area may be received through a voice input device, and the control voice may be, for example, "device A moves to the middle of the area" or "device A turns to face due south". At this time, the action information may be directly used as the target action information, and a control instruction generated based on the target action information can be recognized by the device to be controlled in the real scene, so that the corresponding action is performed.
S4032, generating a control instruction corresponding to the target device to be controlled based on the target action information.
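S4031 to S4032 can be sketched as follows. The dictionary layout of the action information and all field names are assumptions for illustration, not part of the disclosure:

```python
def build_control_instruction(action_info, image_to_region):
    """Normalise action information into target action information in
    the region coordinate system (S4031), then wrap it as a control
    instruction (S4032). image_to_region converts a single image
    coordinate into a region coordinate."""
    target = dict(action_info)
    if target.get("frame") == "image":
        for key in ("target_position", "track"):
            if key in target:
                value = target[key]
                if isinstance(value, list):   # a track: list of points
                    target[key] = [image_to_region(p) for p in value]
                else:                         # a single coordinate
                    target[key] = image_to_region(value)
        target["frame"] = "region"
    return {"type": target.pop("type", "move"), "params": target}
```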
In the embodiment of the present disclosure, since the operation instruction further includes an action sub-instruction, after receiving the action sub-instruction, the server may generate a corresponding control instruction based on the action information carried in the action sub-instruction, so as to complete accurate control on the target device to be controlled.
Referring to fig. 5, fig. 5 is a schematic flowchart of an alternative remote control method according to an embodiment of the present disclosure, based on any of the foregoing embodiments, taking fig. 1 as an example, where the method further includes S501, S104 may be updated to S502, and the description will be made with reference to the steps shown in fig. 5.
S501, based on the mapping relation between the device identification and the communication address, the communication address corresponding to the target device to be controlled is obtained.
In some embodiments, a device identifier corresponding to each device to be controlled and a communication address corresponding to each device identifier are preset in the server. After the target device to be controlled is determined, the communication address of the target device to be controlled may be determined based on the device identifier corresponding to the target device to be controlled. The communication address may be an IP address, a MAC address, or the like of the device to be controlled.
S502, sending the control instruction to the target device to be controlled based on the communication address corresponding to the target device to be controlled.
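S501 to S502 amount to a table lookup followed by unicast sending. A minimal sketch, with identifiers and addresses that are placeholders:

```python
# Preset mapping between device identifiers and communication
# addresses (values are hypothetical).
ADDRESS_TABLE = {
    "dev-001": "192.168.1.11",
    "dev-002": "192.168.1.12",
}

def address_for(device_id, table=ADDRESS_TABLE):
    """Look up the unicast communication address of the target device
    to be controlled; failing loudly avoids any silent fallback to
    broadcast sending."""
    if device_id not in table:
        raise KeyError(f"no communication address preset for {device_id}")
    return table[device_id]
```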
In the embodiment of the disclosure, the control instruction can be accurately sent to the target device to be controlled through the communication address corresponding to the target device to be controlled. Compared with a broadcast sending mode, this reduces channel occupation, avoids the situation in which other devices to be controlled erroneously receive the instruction, and improves system stability.
Referring to fig. 6, fig. 6 is a schematic flow chart of an alternative remote control method according to an embodiment of the present disclosure, and the steps shown in fig. 6 will be described.
S601, the camera equipment sends the collected scene image to a server.
S602, the server receives the scene image and identifies the scene image to obtain device information of each device to be controlled in the scene image.
After receiving the scene image, the server may detect the devices to be controlled existing in the scene image by using a preset object recognition model, so as to obtain detection frames corresponding to each device to be controlled, and determine the first relative coordinates of each device to be controlled in the scene image based on the detection frames corresponding to each device to be controlled.
In another embodiment, the real-time sub-image corresponding to each device to be controlled may be determined based on the detection frame corresponding to each device to be controlled, the real-time sub-image corresponding to each device to be controlled is identified by using a preset identification model, the visual identification corresponding to each device to be controlled may be obtained, and the device identification corresponding to each device to be controlled may be obtained based on the mapping relationship corresponding to the preset visual identification and the device identification.
In another embodiment, the scene image can be directly identified based on a preset identification model to obtain at least one visual identification and an identification position corresponding to each visual identification, the visual identification corresponding to each device to be controlled is determined based on the identification position corresponding to each visual identification and a first relative coordinate of each device to be controlled in the scene image, and the device identification corresponding to each device to be controlled can be obtained based on a mapping relation between the preset visual identification and the device identification.
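The recognition of S602 — detection boxes yielding first relative coordinates, combined with the preset mapping between visual identifiers and device identifiers — might be assembled as below. The detection output format is an assumption, not something the disclosure specifies:

```python
def build_device_info(detections, visual_to_device):
    """detections: list of (visual_identifier, bounding_box) pairs as
    produced by hypothetical detection and identifier-recognition
    models, with bounding_box = (x1, y1, x2, y2) in image pixels.
    The first relative coordinate of each device is taken as the
    centre of its detection box."""
    info = {}
    for visual_id, (x1, y1, x2, y2) in detections:
        device_id = visual_to_device.get(visual_id)
        if device_id is None:
            continue  # identifier without a registered device
        info[device_id] = {
            "visual_id": visual_id,
            "first_relative_coord": ((x1 + x2) / 2, (y1 + y2) / 2),
        }
    return info
```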
And S603, the server sends the scene image and the first relative coordinates of each device to be controlled in the scene image to the terminal.
S604, the terminal receives the scene image and the first relative coordinates of each device to be controlled in the scene image, and displays them.
S605, the terminal receives an operation instruction input for a target device to be controlled in the process of displaying the scene image.
In some embodiments, the operation instruction includes a selection sub-instruction for the target device to be controlled, where the selection sub-instruction carries selection information of the target device to be controlled, where the selection information may be a device identifier/visual identifier of the target device to be controlled, a first selection coordinate of the target device to be controlled in the scene image, or a second selection coordinate of the target device to be controlled in a preset area.
In some embodiments, the operation instruction further includes an action sub-instruction for the target device to be controlled, the action sub-instruction carrying action information of the target device to be controlled. The action sub-instruction may be a movement instruction, and the action information corresponding to the movement instruction may be movement information corresponding to the target device to be controlled, where the movement information may include at least one of a target movement position, a movement speed, and a movement track. The action sub-instruction can be a steering instruction, and the action information corresponding to the steering instruction can be steering information corresponding to the target device to be controlled, wherein the steering information comprises at least one of a target steering angle, a steering speed and a steering track. The action sub-instruction may also be a job instruction, and the action information corresponding to the job instruction may be job information corresponding to the target device to be controlled, where the job information may include at least one of a job status, a job target, a job time, a job content, and the like.
S606, the terminal sends the operation instruction to the server.
In some embodiments, the terminal may send the selection sub-instruction and the action sub-instruction to the server. The selection sub-instruction is used for determining a target device to be controlled, and the action sub-instruction is used for determining an execution action corresponding to the target device to be controlled.
In some embodiments, the terminal may send the action sub-instruction to the server. And under the condition that the target device to be controlled is not determined, taking all the devices to be controlled in the preset area as the target device to be controlled. In an actual application scenario, the present embodiment may be used to control the working states of all the devices to be controlled, for example, to switch all the devices to be controlled from an energy-saving state to an efficient state, or to switch all the devices to be controlled from the working state to a sleep state, etc.
S607, the server receives the operation instruction and determines the target device to be controlled and the control instruction corresponding to the target device to be controlled.
In some embodiments, in a case where the operation instruction includes a selection sub-instruction, the server may determine, based on the selection information of the target device to be controlled carried by the selection sub-instruction, the device identifier of the target device to be controlled, and then determine, based on a mapping relationship between preset device identifiers and communication addresses, the communication address of the target device to be controlled. When the selection information is a first selection coordinate of the target device to be controlled in the scene image, the device to be controlled corresponding to the first relative coordinate matched with the first selection coordinate is taken as the target device to be controlled based on the first relative coordinate of each device to be controlled in the scene image. When the selection information is a second selection coordinate of the target device to be controlled in the preset area, the device to be controlled corresponding to the second relative coordinate matched with the second selection coordinate is taken as the target device to be controlled based on the second relative coordinate of each device to be controlled in the preset area.
In some embodiments, in a case where the operation instruction does not include a selection sub-instruction, all devices to be controlled in the preset area may be determined as target devices to be controlled.
In some embodiments, where an operation instruction includes an action sub-instruction, the action sub-instruction is converted to the control instruction. For example, in the case where the action sub-instruction is a movement instruction, a corresponding control instruction may be generated based on the information type of the movement information and the action sub-instruction. When the type of the movement information is based on the image coordinate system, the movement information may be converted based on a conversion relationship between the image coordinate system and the region coordinate system, and the converted movement information may be used as the control command.
S608, the server sends a control instruction generated based on the action information to the target device to be controlled.
S609, the target device to be controlled executes the action corresponding to the control instruction.
In the embodiment of the disclosure, the acquired scene image corresponding to the preset area is sent to the terminal and displayed to the user, and at the same time a real-time operation instruction input by the user based on the scene image is received, so that a corresponding control instruction is sent to the device to be controlled in the preset area, achieving the effect of remotely controlling the device to be controlled. Meanwhile, because the target device to be controlled is determined from the plurality of devices to be controlled in the preset area based on the recognition result corresponding to the scene image and the operation instruction received by the terminal, the control accuracy of the remote control method in the embodiment of the disclosure is improved. In addition, through the control instruction generated based on the operation instruction, the target device to be controlled can execute the action corresponding to the real-time operation of the user, thereby achieving a more friendly interaction mode.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Currently, intelligent monitoring is mostly unidirectional: a video stream is transmitted from a camera to a display screen, or stored in a hard disk for viewing by people or for program analysis. In practical application scenarios, there are areas that people cannot conveniently or directly enter, such as high-temperature environments, cold storage rooms, laboratories with strict environmental requirements, dangerous environments, and the like. In order to manage these areas, it is necessary not only to observe the conditions in these areas, but also to perform operations such as moving or taking out some articles located in the areas without a person entering.
In order to achieve the above effects, the embodiments of the present disclosure may combine the object detection function of a monitoring device and, at the same time, preset devices whose movement can be remotely controlled in the area, so as to achieve a "touch-to-control" remote control experience in which the device touched on the screen is the device controlled. The device whose movement can be remotely controlled may be, for example, an identifiable carrier vehicle.
In the related art, a single target is often controlled by a control button or a joystick, and the monitoring video only provides a picture display function for an operator to observe the target. If there are multiple targets in the area, multiple joysticks or more complex manipulation manners are required. To solve this problem, according to the embodiment of the present disclosure, each device to be controlled can be recognized from the monitoring picture by utilizing intelligent monitoring, the position of each device to be controlled is obtained, and while the monitoring picture is transmitted to a touch screen interface for display, a person can select the target device to be controlled in the touch screen interface and perform corresponding control. In this way, a "what you select is what you control" effect can be realized, and the interaction mode is more friendly.
Referring to fig. 7, fig. 7 shows a schematic diagram of an optional implementation scenario provided by the embodiment of the present disclosure. In a preset area A10 shown in fig. 7, at least one device A11 to be controlled is distributed, and the device A11 to be controlled may be connected to a server in a wireless/wired manner. At least one camera assembly A12 is further provided in the preset area A10. A scene image corresponding to the preset area A10 may be obtained by the at least one camera assembly A12, the relative position of each device A11 to be controlled in the scene image may be determined by analyzing the scene image, and the relative position of each device A11 to be controlled in the preset area may be determined based on a coordinate mapping relationship between the scene image and the preset area. In this embodiment, when the devices to be controlled included in the scene image are recognized, each device to be controlled may be distinguished by recognizing a unique visual identifier set on each device A11 to be controlled; for example, the visual identifier set on the device to be controlled may be a two-dimensional code, a text, a graphic, or the like.
In some scenes, the device to be controlled can be used for bearing target objects, can be used for dividing the preset area, and can be used for combining with other devices to be controlled to obtain target devices in any form.
Referring to fig. 8, fig. 8 illustrates an alternative information flow diagram provided by an embodiment of the present disclosure. The information transmission process in the embodiment of the present disclosure will be described with reference to the implementation scenario diagram in fig. 7.
In some embodiments, the at least one camera assembly A12 may transmit continuously acquired multi-frame scene images to the server A13 in the form of a video stream. The server may then transmit the video stream carrying the multi-frame scene images to the terminal A14, and the terminal A14 may display the video stream to the user and receive a control instruction of the user for any device to be controlled. The control instruction at least includes a movement instruction for controlling the device to be controlled to change its position, a steering instruction for controlling the device to be controlled to change its relative direction, a job instruction for controlling the device to be controlled to change its operation target, and so on.
In some embodiments, after collecting a scene image including the device A11 to be controlled, the camera assembly A12 directly analyzes the scene image, obtains the relative position of the device A11 to be controlled in the preset area, and sends the original scene image and the recognized relative position of the device to be controlled to the server in the form of structured data, which the server forwards to the terminal. In another embodiment, the camera assembly A12 may directly send the original scene image to the server, and the server analyzes the scene image, obtains the relative position of the device A11 to be controlled in the preset area, and sends the original scene image and the recognized relative position of the device to be controlled to the terminal.
In the process of analyzing the scene image, the visual identifier set on each device to be controlled can be recognized, so that each device to be controlled can be distinguished. For example, in a case where a first device to be controlled and a second device to be controlled exist in the preset area, it may be determined based on the scene image that a first visual identifier is provided on the first device to be controlled and a second visual identifier is provided on the second device to be controlled; the ID of the first device to be controlled and the ID of the second device to be controlled may be determined based on a preset correspondence between visual identifiers and devices to be controlled; the communication address corresponding to each device to be controlled may be obtained based on a preset correspondence between IDs and communication addresses; and communication with the target device to be controlled may be achieved based on the communication address.
In some embodiments, after receiving the video stream carrying the multi-frame scene images, the terminal may display the video stream through a display component. Then, the terminal can receive a selection instruction of the user for a target device to be controlled among the plurality of devices to be controlled contained in the scene image, and the target device to be controlled is determined as the device that the user currently needs to control.
The terminal can also comprise an input component, such as a mouse, a keyboard, a voice control device and the like, and can receive a selection instruction of the user through the input component, and determine the target device to be controlled in a plurality of devices to be controlled in the scene image based on the selection instruction.
In some embodiments, the selection instruction received by the terminal carries a selection position for the display interface, the terminal may send the selection position to the server, and the server determines, based on the selection position and the relative position of each device to be controlled, a target device to be controlled selected by the user, so as to obtain a communication address corresponding to the target device to be controlled.
In some implementation scenarios, after the user selects the target device to be controlled, the terminal may receive a control operation of the user on the target device to be controlled to obtain corresponding control information. The terminal sends the control information to the server, the server forwards the control information to the target device to be controlled based on the communication address corresponding to the target device to be controlled, and the target device to be controlled may perform a corresponding operation based on the control information.
Fig. 9 is a schematic diagram of a composition structure of a remote control device according to an embodiment of the disclosure, and as shown in fig. 9, a remote control device 900 includes:
the acquisition module 901 is used for acquiring a scene image corresponding to a preset area and an identification result corresponding to the scene image, wherein the identification result comprises device information of each device to be controlled in at least one device to be controlled in the preset area;
the first sending module 902 is configured to send the scene image to a terminal, where the terminal is configured to display the scene image and receive an input operation instruction;
The receiving module 903 is configured to receive the operation instruction sent by the terminal, and determine a target device to be controlled and a control instruction corresponding to the target device to be controlled based on the operation instruction and device information of each device to be controlled;
The second sending module 904 is configured to send the control instruction to the target device to be controlled, where the control instruction is configured to instruct the target device to be controlled to execute an action corresponding to the control instruction.
In some embodiments, the operation instruction includes a selection sub-instruction, and the receiving module 903 is further configured to receive the selection sub-instruction sent by the terminal, parse the selection sub-instruction, obtain selection information, and determine the target device to be controlled from the at least one device to be controlled based on the selection information and device information of each device to be controlled.
In some embodiments, the selection information includes selection coordinates, input by a user, of the target device to be controlled, the device information includes relative coordinates of each device to be controlled, and the receiving module 903 is further configured to determine the target device to be controlled in the at least one device to be controlled based on the selection coordinates of the target device to be controlled and the relative coordinates of each of the devices to be controlled.
In some embodiments, the selection coordinates include first selection coordinates of the target device to be controlled based on the image coordinate system, the relative coordinates include first relative coordinates of the target device to be controlled based on the image coordinate system, and the receiving module 903 is further configured to determine the target device to be controlled in the at least one device to be controlled based on the first selection coordinates of the target device to be controlled based on the image coordinate system, and the first relative coordinates of each of the devices to be controlled based on the image coordinate system.
In some embodiments, the selection coordinates include a second selection coordinate of the target device to be controlled based on the area coordinate system, and the receiving module 903 is further configured to determine the target device to be controlled in the at least one device to be controlled based on the second selection coordinate of the target device to be controlled based on the area coordinate system, and a second relative coordinate of each of the devices to be controlled based on the area coordinate system.
In some embodiments, the apparatus further includes a conversion module, where the conversion module is configured to convert, based on a coordinate conversion relationship between the image coordinate system and a region coordinate system corresponding to the preset area, the first relative coordinates of each device to be controlled based on the image coordinate system into second relative coordinates of each device to be controlled based on the region coordinate system. The second sending module 904 is further configured to send the scene image and the second relative coordinates of each device to be controlled based on the region coordinate system to the terminal, where the second relative coordinates are used to indicate the real position of each device to be controlled in the preset area while the terminal displays the scene image.
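One common way to realize the coordinate conversion relationship mentioned above is a planar homography between the image plane and the ground plane of the preset area; the sketch below assumes such a mapping, and the example matrix (uniform 0.01 scale, i.e. one pixel per centimetre) is purely hypothetical.

```python
def apply_homography(H, point):
    """Map an image-coordinate point (u, v) to region coordinates
    (x, y) using a 3x3 homography H given as row-major nested lists."""
    u, v = point
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    x = (H[0][0] * u + H[0][1] * v + H[0][2]) / w
    y = (H[1][0] * u + H[1][1] * v + H[1][2]) / w
    return x, y


# Hypothetical example: 1 image pixel corresponds to 0.01 m in the region plane.
H_EXAMPLE = [[0.01, 0.0, 0.0],
             [0.0, 0.01, 0.0],
             [0.0, 0.0, 1.0]]


def to_region_coords(H, first_relative_coords):
    """Convert each device's first relative coordinate (image frame)
    into its second relative coordinate (region frame)."""
    return {dev: apply_homography(H, p)
            for dev, p in first_relative_coords.items()}
```

In practice the matrix would be calibrated once from known reference points in the preset area rather than hard-coded.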
In some embodiments, the selection information includes a target identifier of the target device to be controlled, which is input by a user, the relative coordinates include a first relative coordinate of the device to be controlled based on the image coordinate system, and the receiving module 903 is further configured to determine a first visual identifier corresponding to each of the devices to be controlled based on the first relative coordinate corresponding to each of the devices to be controlled and a first identifier coordinate corresponding to at least one first visual identifier obtained by performing identifier detection on the scene image, and determine the target device to be controlled in the at least one device to be controlled based on the target identifier and the first visual identifier corresponding to each of the devices to be controlled.
In some embodiments, the receiving module 903 is further configured to: perform identifier detection on the scene image based on a preset identifier recognition model to obtain a first identifier coordinate corresponding to each visual identifier in at least one visual identifier corresponding to the scene image, and determine the first visual identifier corresponding to each device to be controlled based on the first identifier coordinate corresponding to each first visual identifier and the first relative coordinate corresponding to each device to be controlled; or segment the scene image based on the first identifier coordinate corresponding to each device to be controlled to obtain a device sub-image corresponding to each device to be controlled, and perform identifier detection on the device sub-image corresponding to each device to be controlled based on the preset identifier recognition model to determine the first visual identifier corresponding to each device to be controlled.
In some embodiments, the receiving module 903 is further configured to determine, when the target identifier is a target visual identifier, a device to be controlled corresponding to a first visual identifier that matches the target visual identifier as the target device to be controlled, and determine, when the target identifier is a target device identifier, a device to be controlled corresponding to the first visual identifier that matches the target device identifier as the target device to be controlled based on a mapping relationship between device identifiers and visual identifiers.
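A minimal sketch of the two matching branches described above follows; the marker detector is stubbed out as a list of detections, and all identifiers (`QR-…`, `dev-…`) are invented for illustration rather than taken from the disclosure.

```python
import math


def attach_visual_ids(device_coords, detections):
    """Assign to each device the detected visual identifier whose
    identifier coordinate is nearest to the device's relative coordinate.

    device_coords: device id -> (x, y) relative coordinate.
    detections: list of (visual_id, (x, y)) pairs, e.g. the output of
    running a marker detector on the scene image.
    """
    out = {}
    for dev, pos in device_coords.items():
        out[dev] = min(detections, key=lambda d: math.dist(pos, d[1]))[0]
    return out


def pick_target(target_id, visual_by_device, device_to_visual=None):
    """Branch 1: target_id is itself a visual identifier.
    Branch 2: target_id is a device identifier; it is first mapped to
    its visual identifier via the device-id -> visual-id mapping."""
    wanted = (target_id if device_to_visual is None
              else device_to_visual[target_id])
    for dev, vis in visual_by_device.items():
        if vis == wanted:
            return dev
    return None
```

The nearest-coordinate assignment is only one plausible association rule; a real system might instead require the identifier coordinate to fall inside the device's bounding box.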
In some embodiments, the operation instruction includes an action sub-instruction, and the receiving module 903 is further configured to receive the action sub-instruction sent by the terminal, parse the action sub-instruction, obtain action information, and generate a control instruction corresponding to the target device to be controlled based on the action information.
In some embodiments, the receiving module 903 is further configured to convert the motion information into target motion information based on a conversion relationship between an image coordinate system and a region coordinate system when the motion information is based on the image coordinate system, use the motion information as the target motion information when the motion information is based on the region coordinate system, and generate a control instruction corresponding to the target device to be controlled based on the target motion information.
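The branch described here (convert the action information only when it is expressed in the image frame, pass it through when it is already in the region frame) can be sketched as below; the frame labels and the converter callable are illustrative assumptions.

```python
def to_target_action(point, frame, image_to_region):
    """Return the action position expressed in region coordinates.

    frame: "image" or "region"; image_to_region: a callable that maps
    an image-frame point to a region-frame point (e.g. a homography).
    """
    if frame == "image":
        return image_to_region(point)
    if frame == "region":
        return point
    raise ValueError(f"unknown coordinate frame: {frame}")
```

Keeping the conversion behind a callable means the same dispatch works whether the calibration is a homography, an affine map, or a lookup table.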
In some embodiments, the action sub-instruction comprises at least one of a moving sub-instruction, a steering sub-instruction, and a working sub-instruction; the action information carried by the moving sub-instruction is moving information, which includes at least one of a target moving position, a moving speed, and a moving trajectory; the action information carried by the steering sub-instruction is steering information, which includes at least one of a target steering angle, a steering speed, and a steering trajectory; and the action information carried by the working sub-instruction is working information, which may include at least one of a working state, a working target, a working time, and working content.
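The three kinds of action information above can be modelled as simple records; the field names and units here are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]


@dataclass
class MoveInfo:  # carried by a moving sub-instruction
    target_position: Optional[Point] = None
    speed: Optional[float] = None            # e.g. m/s
    trajectory: Optional[List[Point]] = None


@dataclass
class SteerInfo:  # carried by a steering sub-instruction
    target_angle: Optional[float] = None     # degrees
    speed: Optional[float] = None            # deg/s
    trajectory: Optional[List[float]] = None


@dataclass
class WorkInfo:  # carried by a working sub-instruction
    state: Optional[str] = None              # e.g. "start" / "stop"
    target: Optional[str] = None
    duration_s: Optional[float] = None
    content: Optional[str] = None


def build_control_instruction(device_id, action):
    """Wrap parsed action information into a control instruction
    addressed to the target device to be controlled (sketch only)."""
    return {"device": device_id, "action": action}
```

Every field is optional because each sub-instruction carries "at least one" of its items, so a parser can populate only what the terminal actually sent.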
In some embodiments, the obtaining module 901 is further configured to receive a recognition result sent by the camera device and corresponding to the scene image, where the recognition result is generated by the camera device based on the scene image, or receive the scene image sent by the camera device, and generate the recognition result based on the scene image.
In some embodiments, the conversion module is further configured to obtain a communication address corresponding to the target device to be controlled based on a mapping relationship between a device identifier and the communication address, and the second sending module 904 is further configured to send the control instruction to the target device to be controlled based on the communication address corresponding to the target device to be controlled.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the description of the embodiments of the method of the present disclosure for understanding.
It should be noted that, in the embodiments of the present disclosure, if the remote control method is implemented in the form of a software function module and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a device to perform all or part of the methods described in the embodiments of the present disclosure. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present disclosure are not limited to any specific combination of hardware and software.
Fig. 10 is a schematic diagram of a hardware entity of a remote control device according to an embodiment of the present disclosure, as shown in fig. 10, where the hardware entity of the remote control device 1000 includes a processor 1001 and a memory 1002, where the memory 1002 stores a computer program that can be run on the processor 1001, and the processor 1001 implements steps in the method of any of the foregoing embodiments when executing the program.
The memory 1002 is configured to store instructions and applications executable by the processor 1001, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 1001 and the modules of the remote control device 1000; it may be implemented by a flash memory (FLASH) or a Random Access Memory (RAM).
The processor 1001 performs the steps of the remote control method of any one of the above when executing a program. The processor 1001 generally controls the overall operation of the remote control device 1000.
The disclosed embodiments provide a computer storage medium storing one or more programs executable by one or more processors to implement the steps of the remote control method of any of the embodiments above.
It should be noted here that the description of the storage medium and the device embodiments above is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present disclosure, please refer to the description of the embodiments of the method of the present disclosure for understanding.
The processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device implementing the above processor functions may be another device, and the embodiments of the present disclosure are not particularly limited.
The computer storage medium/memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a flash memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM), or any combination thereof; it may also be any terminal including one or more of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "embodiments of the present disclosure" or "the foregoing embodiments" or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "an embodiment of the present disclosure" or "the foregoing embodiments" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the described features, structures, or characteristics of the objects may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure. The foregoing embodiment numbers of the present disclosure are merely for description and do not represent advantages or disadvantages of the embodiments.
Unless otherwise specified, any step of the embodiments of the present disclosure may be performed by a remote control device, and in particular by a processor of the remote control device. Unless specifically stated, the embodiments of the present disclosure do not limit the order in which the remote control device performs the steps. In addition, the manner in which data is processed in different embodiments may be the same method or different methods. It should also be noted that any step in the embodiments of the present disclosure may be performed by the remote control device independently, that is, the remote control device may perform any step in the above embodiments without depending on the execution of other steps.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place or distributed on a plurality of network units, and may select some or all of the units according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the disclosure may be integrated in one processing unit, or each unit may be separately used as a unit, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of hardware plus a form of software functional unit.
The methods disclosed in the several method embodiments provided in the present disclosure may be arbitrarily combined without collision to obtain a new method embodiment.
The features disclosed in the several product embodiments provided in the present disclosure may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present disclosure may be arbitrarily combined without any conflict to obtain new method embodiments or apparatus embodiments.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above method embodiments may be implemented by program instructions controlling the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Or the integrated units of the present disclosure may be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in essence or a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a remote control device, or a network device, etc.) to perform all or part of the methods described in the embodiments of the present disclosure. The storage medium includes various media capable of storing program codes such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
In embodiments of the present disclosure, descriptions of the same steps and the same content in different embodiments may be referred to each other. In the presently disclosed embodiments, the term "and" does not affect the order of steps.
The foregoing is merely an embodiment of the present disclosure, but the protection scope of the present disclosure is not limited thereto, and any person skilled in the art can easily think about the changes or substitutions within the technical scope of the present disclosure, and should be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1.一种远程控制方法,其特征在于,所述方法包括:1. A remote control method, characterized in that the method comprises: 获取预设区域对应的场景图像和所述场景图像对应的识别结果;所述识别结果包括所述预设区域中至少一个待控制装置中每一所述待控制装置的装置信息;Acquire a scene image corresponding to a preset region and a recognition result corresponding to the scene image; the recognition result includes device information of each of the at least one device to be controlled in the preset region. 向终端发送所述场景图像;所述终端用于展示所述场景图像,并接收输入的操作指令;所述操作指令包括选择子指令;The scene image is sent to the terminal; the terminal is used to display the scene image and receive input operation instructions; the operation instructions include selection sub-instructions; 接收所述终端发送的所述操作指令,并基于所述操作指令和每一所述待控制装置的装置信息,确定目标待控制装置和所述目标待控制装置对应的控制指令;所述装置信息包括所述待控制装置的相对坐标;The system receives the operation command sent by the terminal, and determines the target device to be controlled and the corresponding control command based on the operation command and the device information of each device to be controlled; the device information includes the relative coordinates of the device to be controlled. 向所述目标待控制装置发送所述控制指令,所述控制指令用于指示所述目标待控制装置执行所述控制指令对应的动作;The control command is sent to the target device to be controlled, and the control command is used to instruct the target device to perform the action corresponding to the control command. 
所述接收所述终端发送的所述操作指令,并基于所述操作指令和每一所述待控制装置的装置信息,确定目标待控制装置,包括:The step of receiving the operation command sent by the terminal and determining the target device to be controlled based on the operation command and the device information of each device to be controlled includes: 接收所述终端发送的选择子指令;解析所述选择子指令,获取选择信息;所述选择信息包括输入的所述目标待控制装置的目标标识;基于所述选择信息和每一所述待控制装置的装置信息,从所述至少一个待控制装置中确定所述目标待控制装置;Receive a selection sub-instruction sent by the terminal; parse the selection sub-instruction to obtain selection information; the selection information includes the target identifier of the target device to be controlled; based on the selection information and the device information of each device to be controlled, determine the target device to be controlled from the at least one device to be controlled; 所述相对坐标包括所述待控制装置基于图像坐标系的第一相对坐标,所述基于所述选择信息和每一所述待控制装置的装置信息,从所述至少一个待控制装置中确定所述目标待控制装置,包括:The relative coordinates include the first relative coordinates of the device to be controlled based on the image coordinate system. The step of determining the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each device to be controlled includes: 基于每一所述待控制装置对应的第一相对坐标,和对所述场景图像进行标识检测得到的至少一个第一视觉标识对应的第一标识坐标,确定每一所述待控制装置对应的第一视觉标识;基于所述目标标识和每一所述待控制装置对应的第一视觉标识,在所述至少一个待控制装置中确定所述目标待控制装置。Based on the first relative coordinates corresponding to each of the devices to be controlled and the first identifier coordinates corresponding to at least one first visual identifier obtained by identifier detection of the scene image, a first visual identifier corresponding to each of the devices to be controlled is determined; based on the target identifier and the first visual identifier corresponding to each device to be controlled, the target device to be controlled is determined among the at least one device to be controlled. 2.根据权利要求1所述的方法,其特征在于,所述基于所述选择信息和每一所述待控制装置的装置信息,从所述至少一个待控制装置中确定所述目标待控制装置,还包括:2. 
The method according to claim 1, characterized in that, determining the target controllable device from the at least one controllable device based on the selection information and the device information of each controllable device further includes: 基于所述目标待控制装置的选择坐标和每一所述待控制装置相对坐标,在所述至少一个待控制装置中确定所述目标待控制装置。Based on the selected coordinates of the target device to be controlled and the relative coordinates of each of the devices to be controlled, the target device to be controlled is determined among the at least one device to be controlled. 3.根据权利要求2所述的方法,其特征在于,所述选择坐标包括所述目标待控制装置基于图像坐标系的第一选择坐标,所述相对坐标包括所述待控制装置基于所述图像坐标系的第一相对坐标;所述基于所述目标待控制装置的选择坐标和每一所述待控制装置相对坐标,在所述至少一个待控制装置中确定所述目标待控制装置,包括:3. The method according to claim 2, wherein the selected coordinates include first selected coordinates of the target controllable device based on an image coordinate system, and the relative coordinates include first relative coordinates of the controllable device based on the image coordinate system; determining the target controllable device among the at least one controllable device based on the selected coordinates of the target controllable device and the relative coordinates of each controllable device includes: 基于所述目标待控制装置基于所述图像坐标系的第一选择坐标、和每一所述待控制装置基于所述图像坐标系的第一相对坐标,在所述至少一个待控制装置中确定所述目标待控制装置。The target device to be controlled is determined among the at least one device to be controlled based on the first selected coordinates of the target device to be controlled based on the image coordinate system and the first relative coordinates of each device to be controlled based on the image coordinate system. 4.根据权利要求3所述的方法,其特征在于,所述选择坐标包括所述目标待控制装置基于区域坐标系的第二选择坐标;所述基于所述目标待控制装置的选择坐标和每一所述待控制装置相对坐标,在所述至少一个待控制装置中确定所述目标待控制装置,包括:4. 
The method according to claim 3, wherein the selected coordinates include second selected coordinates of the target controllable device based on a regional coordinate system; and the step of determining the target controllable device among the at least one controllable device based on the selected coordinates of the target controllable device and the relative coordinates of each controllable device includes: 基于所述目标待控制装置基于所述区域坐标系的第二选择坐标、和每一所述待控制装置基于所述区域坐标系的第二相对坐标,在所述至少一个待控制装置中确定所述目标待控制装置。The target device to be controlled is determined among the at least one device to be controlled based on the second selected coordinates of the target device to be controlled based on the regional coordinate system and the second relative coordinates of each device to be controlled based on the regional coordinate system. 5.根据权利要求4所述的方法,其特征在于,所述方法还包括:基于所述图像坐标系和所述预设区域对应的区域坐标系之间的坐标转换关系,将每一所述待控制装置基于所述图像坐标系的第一相对坐标转换为每一所述待控制装置基于所述区域坐标系的第二相对坐标;5. The method according to claim 4, wherein the method further comprises: based on the coordinate transformation relationship between the image coordinate system and the region coordinate system corresponding to the preset region, converting the first relative coordinate of each device to be controlled based on the image coordinate system into the second relative coordinate of each device to be controlled based on the region coordinate system; 所述向终端发送所述场景图像,包括:向所述终端发送所述场景图像和每一所述待控制装置基于所述区域坐标系的第二相对坐标;所述第二相对坐标用于在所述终端展示所述场景图像的过程中提示每一所述待控制装置在所述预设区域中的真实位置。Sending the scene image to the terminal includes: sending the scene image and a second relative coordinate of each of the devices to be controlled based on the regional coordinate system to the terminal; the second relative coordinate is used to indicate the actual position of each device to be controlled in the preset area during the display of the scene image on the terminal. 6.根据权利要求1所述的方法,其特征在于,所述基于每一所述待控制装置对应的第一相对坐标,和对所述场景图像进行标识检测得到的至少一个第一视觉标识对应的第一标识坐标,确定每一所述待控制装置对应的第一视觉标识,包括如下任意一项:6. 
The method according to claim 1, wherein determining the first visual identifier corresponding to each of the controlled devices based on the first relative coordinates corresponding to each of the controlled devices and the first identifier coordinates corresponding to at least one first visual identifier obtained by identifier detection of the scene image includes any one of the following: 基于预设的标识识别模型对所述场景图像进行标识检测,得到所述场景图像对应的至少一个视觉标识中每一所述视觉标识对应的第一标识坐标;基于每一所述第一视觉标识对应的第一标识坐标,和每一所述待控制装置对应的第一相对坐标,确定每一所述待控制装置对应的第一视觉标识;Based on a preset identifier recognition model, the scene image is detected to obtain the first identifier coordinates of each visual identifier in at least one visual identifier corresponding to the scene image; based on the first identifier coordinates of each first visual identifier and the first relative coordinates of each device to be controlled, the first visual identifier corresponding to each device to be controlled is determined. 基于每一所述待控制装置对应的第一标识坐标,对所述场景图像进行分割,得到每一所述待控制装置对应的装置子图像;基于预设的标识识别模型对每一所述待控制装置对应的待控制装置进行标识检测,确定每一所述待控制装置对应的第一视觉标识。Based on the first identifier coordinates corresponding to each of the devices to be controlled, the scene image is segmented to obtain a device sub-image corresponding to each device to be controlled; based on a preset identifier recognition model, the identifier of each device to be controlled is detected to determine the first visual identifier corresponding to each device to be controlled. 7.根据权利要求6所述的方法,其特征在于,所述基于所述目标标识和每一所述待控制装置对应的第一视觉标识,在所述至少一个待控制装置中确定所述目标待控制装置,包括:7. 
The method according to claim 6, wherein determining the target device to be controlled among the at least one device to be controlled based on the target identifier and the first visual identifier corresponding to each device to be controlled comprises: 在所述目标标识为目标视觉标识的情况下,将与所述目标视觉标识匹配的第一视觉标识对应的待控制装置确定为所述目标待控制装置;When the target identifier is a target visual identifier, the device to be controlled corresponding to the first visual identifier that matches the target visual identifier is determined as the target device to be controlled; 在所述目标标识为目标装置标识的情况下,基于装置标识与视觉标识之间的映射关系,将与所述目标装置标识匹配的第一视觉标识对应的待控制装置确定为所述目标待控制装置。When the target identifier is a target device identifier, based on the mapping relationship between device identifiers and visual identifiers, the device to be controlled corresponding to the first visual identifier that matches the target device identifier is determined as the target device to be controlled. 8.根据权利要求1至7任一项所述的方法,其特征在于,所述操作指令包括动作子指令;所述接收并解析所述终端发送的所述操作指令,确定所述目标待控制装置对应的控制指令,包括:8. The method according to any one of claims 1 to 7, characterized in that the operation instruction includes an action sub-instruction; the step of receiving and parsing the operation instruction sent by the terminal to determine the control instruction corresponding to the target controllable device includes: 接收所述终端发送的动作子指令;Receive the action sub-instruction sent by the terminal; 解析所述动作子指令,获取动作信息;Parse the action sub-instruction to obtain action information; 基于所述动作信息,生成所述目标待控制装置对应的控制指令。Based on the action information, control commands corresponding to the target controllable device are generated. 9.根据权利要求8所述的方法,其特征在于,所述基于所述动作信息,生成所述目标待控制装置对应的控制指令,包括:9. 
The method according to claim 8, wherein generating the control instruction corresponding to the target device to be controlled based on the action information comprises: in a case where the action information is based on the image coordinate system, converting the action information into target action information based on a conversion relationship between the image coordinate system and a region coordinate system; in a case where the action information is based on the region coordinate system, taking the action information as the target action information; and generating the control instruction corresponding to the target device to be controlled based on the target action information.

10. The method according to claim 8, wherein the action sub-instruction comprises at least one of the following: a movement sub-instruction, a steering sub-instruction, and an operation sub-instruction; wherein the action information carried by the movement sub-instruction is movement information, the movement information comprising at least one of the following: a target movement position, a movement speed, and a movement trajectory; the action information carried by the steering sub-instruction is steering information, the steering information comprising at least one of the following: a target steering angle, a steering speed, and a steering trajectory; and the action information carried by the operation sub-instruction is operation information, the operation information comprising at least one of the following: an operation state, an operation target, an operation time, and operation content.

11. The method according to any one of claims 1 to 7, wherein acquiring the scene image corresponding to the preset region and the recognition result corresponding to the scene image comprises any one of the following: receiving the scene image and the recognition result corresponding to the scene image sent by a camera device, the recognition result being generated by the camera device based on the scene image; or receiving the scene image sent by the camera device, and generating the recognition result based on the scene image.

12. The method according to any one of claims 1 to 7, further comprising: acquiring the communication address corresponding to the target device to be controlled based on a mapping relationship between device identifiers and communication addresses; wherein sending the control instruction to the target device to be controlled comprises: sending the control instruction to the target device to be controlled based on the communication address corresponding to the target device to be controlled.

13. A remote control apparatus, comprising: an acquisition module configured to acquire a scene image corresponding to a preset region and a recognition result corresponding to the scene image, the recognition result comprising device information of each of at least one device to be controlled in the preset region; a first sending module configured to send the scene image to a terminal, the terminal being configured to display the scene image and receive an input operation instruction, the operation instruction comprising a selection sub-instruction; a receiving module configured to receive the operation instruction sent by the terminal and determine, based on the operation instruction and the device information of each device to be controlled, a target device to be controlled and a control instruction corresponding to the target device to be controlled, the device information comprising relative coordinates of the device to be controlled; and a second sending module configured to send the control instruction to the target device to be controlled, the control instruction instructing the target device to be controlled to perform an action corresponding to the control instruction; wherein the receiving module is further configured to receive the selection sub-instruction sent by the terminal, parse the selection sub-instruction to obtain selection information, the selection information comprising an input target identifier of the target device to be controlled, and determine the target device to be controlled from the at least one device to be controlled based on the selection information and the device information of each device to be controlled; and wherein the relative coordinates comprise first relative coordinates of the device to be controlled based on the image coordinate system, and the receiving module is further configured to determine the first visual identifier corresponding to each device to be controlled based on the first relative coordinates corresponding to each device to be controlled and first identifier coordinates corresponding to at least one first visual identifier obtained by performing identifier detection on the scene image, and to determine the target device to be controlled from the at least one device to be controlled based on the target identifier and the first visual identifier corresponding to each device to be controlled.

14. A remote control device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 12.

15. A computer storage medium, wherein the computer storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement the steps of the method according to any one of claims 1 to 12.
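The coordinate branch in the method claim above (convert image-based action information into the region coordinate system, pass region-based action information through unchanged) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the affine form of the conversion relationship and all function and field names (`resolve_target_motion`, `frame`, `points`) are assumptions.

```python
# Sketch of the claimed branch: action information expressed in the image
# coordinate system is mapped into the region coordinate system; action
# information already in the region coordinate system is used as-is.
# The 2x3 affine matrix stands in for the "conversion relationship".

def to_region_coords(point_img, affine):
    """Map an (x, y) image-coordinate point into region coordinates.

    affine is a 2x3 matrix [[a, b, tx], [c, d, ty]].
    """
    x, y = point_img
    (a, b, tx), (c, d, ty) = affine
    return (a * x + b * y + tx, c * x + d * y + ty)

def resolve_target_motion(motion_info, affine):
    """Convert only when the action information is image-based."""
    if motion_info["frame"] == "image":
        converted = [to_region_coords(p, affine) for p in motion_info["points"]]
        return {"frame": "region", "points": converted}
    return motion_info  # already region-based: taken as the target info

# Identity transform shifted by (10, 20): image (5, 5) -> region (15, 25)
AFFINE = [[1.0, 0.0, 10.0], [0.0, 1.0, 20.0]]
print(resolve_target_motion({"frame": "image", "points": [(5, 5)]}, AFFINE))
# -> {'frame': 'region', 'points': [(15.0, 25.0)]}
```

In practice the conversion relationship could equally be a homography or a calibrated camera model; the affine matrix is only the simplest stand-in.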
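The matching step in the apparatus claim (associate each device to be controlled with a detected visual identifier via coordinates, then pick the device whose identifier equals the input target identifier) can likewise be sketched. The nearest-neighbour heuristic and every name below (`select_target_device`, the device and label strings) are illustrative assumptions, not the patent's method.

```python
# Sketch: each device is paired with the detected visual identifier whose
# first identifier coordinates lie closest to the device's first relative
# coordinates; the target device is then selected by its target identifier.

def nearest_identifier(device_xy, identifiers):
    """Return the label of the detected identifier closest to device_xy.

    identifiers: list of (label, (x, y)) pairs in the image coordinate system.
    """
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(identifiers, key=lambda item: sq_dist(device_xy, item[1]))[0]

def select_target_device(devices, identifiers, target_label):
    """devices: dict device_name -> first relative coordinates (x, y)."""
    for name, xy in devices.items():
        if nearest_identifier(xy, identifiers) == target_label:
            return name
    return None  # no device carries the requested identifier

devices = {"excavator": (12, 30), "crane": (80, 44)}
identifiers = [("A1", (10, 28)), ("B2", (78, 40))]
print(select_target_device(devices, identifiers, "B2"))  # -> crane
```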
CN202210190753.8A 2022-02-28 2022-02-28 A remote control method, apparatus, device and storage medium Active CN114565819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210190753.8A CN114565819B (en) 2022-02-28 2022-02-28 A remote control method, apparatus, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210190753.8A CN114565819B (en) 2022-02-28 2022-02-28 A remote control method, apparatus, device and storage medium

Publications (2)

Publication Number Publication Date
CN114565819A (en) 2022-05-31
CN114565819B (en) 2025-11-14

Family

ID=81715256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210190753.8A Active CN114565819B (en) 2022-02-28 2022-02-28 A remote control method, apparatus, device and storage medium

Country Status (1)

Country Link
CN (1) CN114565819B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562606A (en) * 2022-10-21 2023-01-03 联想(北京)有限公司 Control method and device and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407774A (en) * 2013-07-29 2016-03-16 三星电子株式会社 Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
JP2019162257A (en) * 2018-03-19 2019-09-26 株式会社バンダイナムコアミューズメント Remote operation system and program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103221976A (en) * 2010-08-04 2013-07-24 P治疗有限公司 Teletherapy control system and method
CN102339062A (en) * 2011-07-11 2012-02-01 西北农林科技大学 Navigation and remote monitoring system for miniature agricultural machine based on DSP (Digital Signal Processor) and binocular vision


Also Published As

Publication number Publication date
CN114565819A (en) 2022-05-31

Similar Documents

Publication Publication Date Title
US10999635B2 (en) Image processing system, image processing method, and program
JP6684883B2 (en) Method and system for providing camera effects
US9639988B2 (en) Information processing apparatus and computer program product for processing a virtual object
JP5807686B2 (en) Image processing apparatus, image processing method, and program
CN107336243B (en) Robot control system and control method based on intelligent mobile terminal
CN112738402B (en) Shooting method, shooting device, electronic equipment and medium
US20130195367A1 (en) Appliance control apparatus, method thereof and program therefor
CN112492201B (en) Photographing method and device and electronic equipment
JP6575845B2 (en) Image processing system, image processing method, and program
CN111736709A (en) AR glasses control method, device, storage medium and apparatus
CN112486444A (en) Screen projection method, device, equipment and readable storage medium
CN114565819B (en) A remote control method, apparatus, device and storage medium
JP5266416B1 (en) Test system and test program
CN112437231B (en) Image shooting method and device, electronic equipment and storage medium
CN115767078A (en) Screen projection time delay testing method and device and storage medium
CN106155533B (en) A kind of information processing method and projection device
CN113556606A (en) Gesture-based TV control method, device, device and storage medium
JP6304305B2 (en) Image processing apparatus, image processing method, and program
CN119718061A (en) Interaction method, device, equipment and medium between smart wearable device and mobile terminal
CN116756099A (en) Display method and device
CN114745587A (en) Multimedia resource playing method and device, electronic equipment and medium
CN110622080B (en) UAV tracking processing method and control terminal
CN113726953A (en) Display content acquisition method and device
CN113190162A (en) Display method, display device, electronic equipment and readable storage medium
CN115237307B (en) Device control method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant