
CN108089154B - Distributed sound source detection method and sound detection robot based on same - Google Patents


Info

Publication number
CN108089154B
Authority
CN
China
Prior art keywords
robot
sound source
sound
detection
module
Prior art date
Legal status
Active
Application number
CN201711221413.2A
Other languages
Chinese (zh)
Other versions
CN108089154A (en)
Inventor
陈建峰
祁文涛
戚茜
李晓强
闫青丽
周荣艳
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201711221413.2A priority Critical patent/CN108089154B/en
Publication of CN108089154A publication Critical patent/CN108089154A/en
Application granted granted Critical
Publication of CN108089154B publication Critical patent/CN108089154B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/22 Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/026 Acoustical sensing devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Manipulator (AREA)

Abstract


The invention relates to a distributed sound source detection method and a sound detection robot based on the method. Compared with the traditional single-robot sound source detection system, the multi-robot cooperative sound source detection system based on distributed sound source localization offers a larger detection range, higher detection accuracy, and stronger environmental adaptability and fault tolerance. As a new means of environmental perception for swarm robots, it greatly improves the robots' environmental perception capability and lays a good foundation for cooperative formations of intelligent robots.


Description

Distributed sound source detection method and sound detection robot based on same
Technical Field
The invention relates to a distributed sound source detection method and design and implementation of a sound detection robot system based on the method.
Background
In recent years, intelligent mobile robot technology has developed rapidly. Images and sound, the two main channels through which a robot perceives its environment, each have shortcomings. Image-based environment sensing suffers from large data volumes, sensitivity to lighting, and limited viewing angle and range, while existing sound-based sensing suffers from a limited detection range, poor interference resistance, and poor reliability. Research into robust and accurate robot sound perception is therefore important for improving robot performance.
The main task of robot sound perception is to let the robot judge sound information in the environment accurately and in real time, obtaining the spatial position of a target relative to the robot, i.e., sound source localization. Most existing robot sound source localization systems are built on a single robot platform with a microphone-array localization technique. Such systems achieve sound source orientation to a certain extent, but are constrained by the structure and size of the single platform: the array aperture cannot be large, so the detection range is small, the precision is low, and the reliability is poor. Under far-field conditions only the direction of the sound source can be obtained, and accurate localization is impossible.
Distributed sound source localization is a newer method that uses a wireless sensor network to fuse data from several spatially separated microphone arrays and localize cooperatively. Compared with a single-array system, it effectively enlarges the detection range and improves localization precision; damage to a single array does not paralyze the whole system, so reliability is high and the shortcomings of a single microphone array are overcome.
Multi-robot formation cooperation is a current research hotspot in robotics, and it shares much common ground with distributed sound source localization in architecture and hardware resources. A review of the existing literature shows no related research on, or application of, multi-robot cooperative sound source detection based on the distributed localization method.
Disclosure of Invention
The technical problem solved by the invention is as follows: aiming at the defects of the prior art and the demand of intelligent robots for sound perception, a novel distributed sound source detection method based on a multi-robot system is proposed by combining distributed sound source localization with robot formation cooperation, which share common ground in communication architecture and hardware resources.
The technical scheme of the invention is as follows: a new distributed sound source detection method is characterized in that:
the method is based on a multi-robot formation cooperative sound source detection system consisting of 1 monitoring computer and a plurality of sound detection robots, and comprises the following steps.
Step one, system layout: and (d is less than or equal to 500), dispersedly placing the M (more than or equal to 3) sound detection robots to an area needing to be monitored according to an optimal layout method, and placing a monitoring computer in the area or within d meters outside the area. The optimal layout method of the robot is defined as follows:
according to the characteristics of the area to be monitored and the number of robots in the formation, the optimal layout of the system refers to that when the layout is in a two-dimensional plane, a geometric figure formed by connecting lines between adjacent robots tends to be a regular polygon, for example, when the system comprises 3 robots, the layout is distributed as far as possible by adopting an equilateral triangle array under the condition that the field condition allows and the signal-to-noise ratio requirement of the system is met.
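As an illustration of the layout rule above, here is a minimal sketch that places M robots on the vertices of a regular M-gon centred on the monitored area. The function name, centre, and circumradius are illustrative assumptions, not from the patent.

```python
import math

# Hypothetical helper: place m robots on a regular m-gon, following the
# "optimal layout" rule (adjacent-robot connecting lines form a regular
# polygon; for m = 3 this is an equilateral triangle).
def optimal_layout(m, center=(0.0, 0.0), radius=25.0):
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / m),
             cy + radius * math.sin(2 * math.pi * k / m))
            for k in range(m)]
```

For m = 3 the three pairwise distances come out equal, matching the equilateral-triangle example in the text.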
Step two, role allocation: m (not less than 3) sound detection robots are divided into 1 queue-length robot and M-1 slave robots according to the role division rule to form a robot formation. The captain robot is used as a communication and data processing center of formation, and the captain robot interacts with each slave robot through wireless communication. The role division rule is defined as follows:
after each robot is powered on, carrying out system initialization, sensing the position and the heading information of the robot through a sensor system in the system initialization stage, and collecting the environmental noise of the area where the robot is located; each robot uploads the information to a monitoring computer after initialization is finished;
the monitoring computer sorts the robots by their ambient noise; the robot with the lowest ambient noise is designated the captain robot, with formation serial number 1, and the remaining robots are slave robots with serial numbers 2 … M in turn. The ambient noise is measured as follows: during initialization, every sound detection robot collects T seconds of environmental sound and computes the average power of the signal to obtain the current ambient noise level;
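The noise measurement and role assignment described above can be sketched as follows. This is a non-authoritative illustration; the function names and the dictionary-based interface are invented.

```python
def ambient_noise_level(samples):
    # Average power of the T-second initialization recording.
    return sum(s * s for s in samples) / len(samples)

def assign_roles(noise_by_robot):
    # Lowest ambient noise -> captain (serial number 1);
    # the rest become slave robots with serial numbers 2..M.
    order = sorted(noise_by_robot, key=noise_by_robot.get)
    return {robot_id: serial for serial, robot_id in enumerate(order, start=1)}
```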
Step three, information perception and signal detection: in the static state, all sound detection robots sense their own attitude and position information in real time, collect sound signals, and compare the time-domain short-time energy, short-time zero-crossing rate, and frequency-domain sub-band energy of the received signal with the initially set thresholds for these three features. When all three features reach or exceed their respective thresholds, a valid signal is considered detected and its starting point is obtained, and step four is executed; otherwise this step is repeated in a loop.
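The three detection features of step three can be sketched as below. This assumes the features must reach their thresholds to trigger a detection (the translated text is ambiguous about the comparison direction), and the sub-band energy is passed in precomputed because the patent does not specify the band edges; all names are illustrative.

```python
def short_time_energy(frame):
    # Sum of squared samples over one analysis frame.
    return sum(s * s for s in frame)

def short_time_zcr(frame):
    # Fraction of adjacent sample pairs whose signs differ.
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def is_valid_signal(frame, subband_energy, thresholds):
    # Joint threshold test on the three features named in step three.
    e_th, z_th, b_th = thresholds
    return (short_time_energy(frame) >= e_th and
            short_time_zcr(frame) >= z_th and
            subband_energy >= b_th)
```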
Step four, sound source orientation and information interaction: from the starting point of the detected valid signal, each sound detection robot selects the same number of data points and applies a generalized cross-correlation time-delay-estimation orientation algorithm to obtain the azimuth of the sound source target relative to its own microphone array. Each slave robot then uploads its sound source detection result and its own position and attitude information to the captain robot.
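A minimal GCC-PHAT delay estimator in the spirit of the generalized cross-correlation algorithm named above. This is a sketch of the standard technique, not the patent's implementation; the function name and interface are assumptions.

```python
import numpy as np

def gcc_phat_delay(x, y, fs):
    # Estimate the delay (in seconds) of y relative to x from the
    # phase-transform-weighted cross-power spectrum.
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = np.conj(X) * Y
    R /= np.abs(R) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs
```

The delay between a microphone pair, together with the array geometry, then yields the azimuth of the source.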
Step five, sound source localization through data fusion: after obtaining the detection results and the position and attitude information of every slave robot, the captain robot combines them with its own data, fuses the position, heading, and sound source orientation results of all robots in the system to obtain each robot's sound source azimuth relative to the ground coordinate system, applies the distributed sound source localization algorithm to obtain the position of the sound source target, and uploads the detection result to the monitoring computer.
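The fusion in step five can be illustrated as a least-squares intersection of the ground-frame bearing lines, one per robot. This is a sketch of one common bearings-only localization scheme, not necessarily the patent's exact algorithm; bearings here are measured counterclockwise from the x-axis, an assumed convention.

```python
import numpy as np

def cross_fix(positions, bearings_deg):
    # Each robot contributes one line: point p_i, direction (cos t, sin t).
    # Its normal n_i = (-sin t, cos t) gives the constraint n_i . s = n_i . p_i;
    # stacking all constraints and solving least-squares yields the source s.
    A, b = [], []
    for (x, y), t in zip(positions, np.radians(bearings_deg)):
        n = (-np.sin(t), np.cos(t))
        A.append(n)
        b.append(n[0] * x + n[1] * y)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```

With three or more robots the system is overdetermined, which is what lets the formation tolerate noisy single-robot bearings.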
Step six, cooperative localization optimization by the formation: the captain robot judges the type of the sound source from the detection history. If the same sound source target is detected m times (m ≥ 5) within a time Tcontinue (Tcontinue ≥ 10 s), it is judged a suspected continuous sound source target; otherwise it is judged a suspected burst sound source.
If the position of the sound source target remains unchanged over successive detections, the sounding object is judged a suspected static target; otherwise it is judged a suspected dynamic target. The captain robot plans the system task: for suspected static targets and suspected dynamic burst sources, it sends control commands ordering all robots to hold the current formation and detect the sound source target in place.
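The timing rule above can be sketched as a simple classifier. The thresholds follow the text; the function name and interface are illustrative.

```python
def classify_source(detection_times, t_continue=10.0, m=5):
    # "Continuous" if some window of t_continue seconds contains at least
    # m detections of the same target; otherwise "burst".
    times = sorted(detection_times)
    for i in range(len(times) - m + 1):
        if times[i + m - 1] - times[i] <= t_continue:
            return 'continuous'
    return 'burst'
```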
Aiming at a suspected dynamic continuous sound source target, the captain robot analyzes the relative relation between the position of a continuous sound source and the space geometric connecting line topological structure of the current system sound detection robot according to an initial detection result, sequentially sends a control instruction containing the position coordinates of the appointed optimization point and task type information to each robot, commands each slave robot in the formation to autonomously move to different positions, optimizes the distributed detection formation of the system, enables the target to tend to be located at the central position of the space geometric connecting line topological structure of the current robot, and improves the precision of system sound source positioning and track tracking.
The further technical scheme of the invention is as follows: a sound detection robot based on the method comprises a robot platform, a robot control system and a robot sensor system. The robot platform includes a mechanical structure and an electrical drive mechanism. The control system comprises an ARM microcontroller, a power supply system and a motor driving module. The sensor system comprises a sound source orientation module, a positioning module, a wireless communication module, an obstacle avoidance sensor module and an attitude sensor module.
The sound source orientation module of the sensor system of the sound detection robot comprises a microphone array, an analog preprocessing circuit and a data acquisition and signal processing unit.
The microphone array of the sound source orientation module comprises two connecting rods and four microphones. The two rods lie in the same horizontal plane and are joined to form a cross; the four microphones sit at the rod ends, equidistant from the center of the cross. The analog preprocessing circuit uses operational amplifiers to build amplification and filtering stages that precondition the analog signals output by the microphone array; the conditioned signals are passed to the data acquisition and processing unit for sampling, and the sound source target direction is calculated. The data acquisition and signal processing unit is built from an AD chip and a DSP chip, where the AD chip is a multichannel synchronous-sampling device with a maximum sampling frequency of at least 20 kHz.
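For the quaternary cross array described above, a far-field azimuth can be recovered from the delay estimates of the two opposite-microphone pairs. This is a simplified plane-wave model under the assumption that the source lies in the array plane, not the patent's stated formula.

```python
import math

def azimuth_from_pair_delays(tau_x, tau_y):
    # tau_x: delay between the two mics on the x-arm of the cross;
    # tau_y: the same for the y-arm. In the far field, tau_x is
    # proportional to cos(azimuth) and tau_y to sin(azimuth), so the
    # azimuth follows directly from atan2.
    return math.degrees(math.atan2(tau_y, tau_x))
```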
The positioning module of the sensor system of the sound detection robot can adopt a GPS positioning module, a Beidou positioning module or a Glonass positioning module and is used for carrying out real-time positioning and time service on the sound detection robot.
The wireless communication module of the robot sensor system can be of various types, such as a wireless communication module supporting a 4G mobile network, ZigBee or WIFI protocol, and is used for information interaction of a plurality of robots in the system.
The attitude sensor module of the robot sensor system of the sound detection robot can adopt different types of sensors, such as an attitude sensor based on MEMS technology, and is used for sensing real-time course attitude information of the robot.
The obstacle avoidance sensor module of the robot sensor system of the sound detection robot can adopt an infrared or ultrasonic sensor module, so that the obstacle in the advancing direction of the robot can be detected in real time.
The signal flow of the single sound detection robot is as follows:
The sound source orientation module in the robot sensor system senses environmental sounds in real time, calculates the direction of a sound source when a target appears in the environment, and transmits the result to the control system. The attitude sensor and the ultrasonic sensor in the sensor system output the robot's current three-axis attitude data and obstacle detection data to the robot control system in real time. For a slave robot, the sensor data gathered by its control system must be uploaded to the captain robot through the wireless module. For the captain robot, the control system receives the data of its own sensor system, simultaneously receives the data of the slave robots wirelessly, fuses the data to obtain the sound source target orientation result, and uploads the result to the monitoring computer through the wireless communication module.
Effects of the invention
The invention has the following technical effects: compared with a traditional single-robot sound source detection system, the multi-robot cooperative detection system based on distributed sound source localization offers a large detection range, high detection precision, good environmental adaptability, and strong fault tolerance and survivability. As a new means of environment perception for robot groups, it greatly improves robot environmental perception capability and lays a good foundation for intelligent robot environment perception and formation cooperation.
Drawings
FIG. 1 is a flowchart illustrating steps of a distributed sound source localization method based on a plurality of sound detection robots according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system architecture comprised of a plurality of sound detection robots in accordance with one embodiment of the present invention;
FIG. 3 is a system initialization communication protocol of one embodiment of the present invention;
FIG. 4 is a communication protocol between a system monitoring computer and a captain robot, in accordance with one embodiment of the present invention;
FIG. 5 is a communication protocol between a system captain robot and a slave robot in accordance with one embodiment of the present invention;
FIG. 6 is a schematic diagram of a distributed sound source localization algorithm for a robotic coordinated sound source detection system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the hardware architecture of a single sound detection robot in accordance with one embodiment of the present invention;
FIG. 8 is a functional block diagram of the electrical characteristics of a single sound detection robot in accordance with one embodiment of the present invention;
fig. 9 is a schematic structural diagram of a quaternary cross microphone array of a sound source orientation module of a robot sensor system of a single sound detection robot according to an embodiment of the present invention;
FIG. 10 is a schematic block circuit diagram of the sound source orientation module of the robotic sensor system of a single sound detecting robot in accordance with one embodiment of the present invention;
description of reference numerals: 1-positioning module; 2-wireless communication module; 3-attitude sensor module; 4-obstacle avoidance sensor module; 5-electric transmission mechanism; 6-robot control system; 7-sound source orientation module; 8-motor drive module/power supply system.
Detailed Description
The core idea of the invention is to utilize a plurality of mobile robots with sound source detection capability, and to adopt a cross direction finding algorithm to fuse sound source orientation information of a plurality of robots through a wireless sensor network, so as to realize real-time positioning and track tracking of a sound source target.
The distributed sound source detection method based on the embodiment is as follows:
the distributed sound source detection method of the real-time example is based on a multi-robot formation cooperative sound source detection system consisting of 1 monitoring computer and 3 sound detection robots, and comprises the following steps, as shown in the attached figure 1 in the specification.
Step one, system layout: the 3 sound detection robots are arranged in a 50 m × 50 m square area, forming as close to an equilateral-triangle topology as the site allows, per the optimal layout method; adjacent robots are 30 meters apart, and the monitoring computer is placed within 50 meters outside the area. The optimal layout method is as follows:
according to the characteristics of the area to be monitored and the number of the robots in the formation, the optimal layout of the system means that when the layout is in a two-dimensional plane, the geometric figure formed by connecting lines between adjacent robots tends to be a regular polygon as much as possible, and if the system comprises 3 robots, the layout is in an equilateral triangle array. When the sound source intensity is 80dB, the distance between adjacent robots in the layout is within 50 meters.
Step two, role allocation: the system comprises 3 sound detection robots which are divided into 1 captain robot and 2 subordinate robots according to a role division method to form a robot formation. The captain robot is used as a communication and data processing center of formation, and the captain robot interacts with each slave robot through wireless communication. The system architecture is shown in figure 2. The role division rule is defined as follows:
after each robot is powered on, carrying out system initialization, sensing the position and the heading information of the robot through a sensor system in the system initialization stage, and collecting the environmental noise of the area where the robot is located; each robot uploads the information to a monitoring computer after initialization is finished;
and sequencing the robots in sequence according to the environmental noise of the robots, wherein the designated robot with low environmental noise is the captain robot, the formation serial number is 1, and the other robots are the slave robots, and the formation serial numbers are 2 and 3 in sequence. The environmental noise measuring method comprises the following steps: in the robot initialization stage, all sound detection robots collect environmental sound signals of T (T is more than or equal to 5s and less than or equal to 10s) seconds at a certain sampling rate, absolute values of the signal sequences are obtained, then the absolute values are accumulated and summed, and the average is obtained to obtain the current environmental noise.
Step three, information perception and signal detection: in the static state, all sound detection robots collect sound signals in real time and analyze them with a sound detection algorithm. The detection principle is to compare the time-domain short-time energy, short-time zero-crossing rate, and frequency-domain sub-band energy of the received signal with the initially set thresholds for these three features; when all three reach or exceed their thresholds, a valid signal is considered detected and its starting point is obtained;
Step four, sound source orientation and information interaction: from the starting point of the detected valid signal, each sound detection robot selects the same number of data points and applies the sound source orientation algorithm, namely the generalized cross-correlation time-delay-estimation algorithm, to obtain the azimuth and pitch angles of the sound source target relative to its own microphone array; each slave robot uploads its orientation result and its position and attitude information to the captain robot according to the system communication protocol;
Step five, sound source localization through data fusion: after obtaining the detection results and the position and attitude information of each slave robot, the captain robot combines them with its own data, fuses the position, heading, and orientation results of all robots to obtain each robot's sound source bearing relative to the ground coordinate system, constructs the system topology, adjusts the parameters of the distributed localization algorithm, obtains the sound source target position with the direction-finding cross algorithm, and uploads the detection result to the monitoring computer according to the system communication protocol.
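The conversion from robot-relative azimuths to ground-frame bearings used in this fusion step can be sketched in one line. The convention (both angles clockwise from north, in degrees) is an assumption; the patent does not spell it out.

```python
def to_ground_azimuth(local_az_deg, heading_deg):
    # Rotate a robot-relative azimuth into the ground frame by adding the
    # robot's heading, wrapping to [0, 360).
    return (local_az_deg + heading_deg) % 360.0
```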
Step six, cooperative localization optimization by the formation: the captain robot judges the sound source type from the detection history; if the same sound source target is detected 5 times within 10 seconds, it is judged a suspected continuous sound source target, otherwise a suspected burst sound source.
Aiming at a continuous sound source target, a captain robot analyzes the relative relation between the position of a continuous sound source and a space geometric connecting line topological structure of a current system sound detection robot according to an initial detection result, sequentially sends a control instruction containing an optimized point position coordinate to each robot, and each slave robot in a formation automatically moves to the optimized point position by adopting an inertial navigation algorithm and a PID control algorithm, so that the layout structure of a distributed sound source detection system is optimized, and the positioning and track tracking precision of the system on the sound source is improved.
The target sound source of the present embodiment is as follows:
the target sound aimed by the multi-robot system in the embodiment is mainly a broadband stationary signal (the frequency range is between 100Hz and 3000 Hz), such as the engine sound of an automobile, a armored car, a helicopter and the like and a short-time non-stationary signal, such as a gunshot sound and a clapping sound.
The system communication protocol in this embodiment is as follows:
the multi-robot system in this embodiment adopts a centralized communication and control structure, the formation totally includes 3 robots, and each robot has a unique fixed remote address, i.e., a communication address, which is 0x001F,0x0020, and 0x0021, respectively. And in the system initialization stage, the robot roles in the formation are divided by a monitoring computer, and the system is divided into 1 captain robot and 2 slave robots according to the roles, wherein the captain robot is used as a communication center and a data processing center of the system and is responsible for system data aggregation and fusion, and the slave robots detect environmental information under the control of the captain robot. The whole communication control system is shown in figure 2.
The following specifically describes a communication protocol in the system initialization stage, a communication protocol between the monitoring computer and the captain robot in the normal working stage, and a communication protocol between the captain robot and the slave robot.
(1) System initialization phase communication protocol
When all robots of the system have just been started and powered on, the 3 robots in the formation are in a standby state, and the monitoring computer broadcasts a system setting data packet to all robots in the formation, realizing the role division of the system robots. The robots are divided into one captain robot and two slave robots, with formation serial numbers defined as 0, 1, and 2 respectively. After all robots receive the system setting packet and complete their setup, they reply with acknowledgment information to the monitoring computer in order of serial number: the robot with serial number 0 replies immediately, the robot with serial number 1 replies after a delay of T0, and the robot with serial number 2 replies after a delay of 2 × T0, staggering the communication time slots. In figure 3, the blue part shows the first system setup and the red part represents a system reset.
The system setting data packet contains one main field, the role-division field, represented by one byte. In this embodiment, the format of the system setting packet is defined as shown in table 1.
TABLE 1 System setup packet Format
(table image not reproduced)
As shown in the table above, the header of the system setting packet is 0xFE and the packet length is the data length plus 4. Both the source port number and the destination port number are defined as 0xA4, indicating that the packet type is a system setup instruction. The remote address is 0xFFFF, meaning broadcast mode: all nodes in the network receive the system setting packet. The values of the role-division field and the corresponding formation serial numbers are shown in table 2.
TABLE 2 System role division List
The definition of the system setup response packet ACK is shown in table 3.
Table 3 system setup response packet ACK format
As shown in the table above, the source port number and the destination port number of the response packet are both 0xA5, the remote address is the communication address of the monitoring computer, 0x0022, and the confirmation field takes the value 0x00 to indicate that the setting failed or 0x01 to indicate that the setting succeeded.
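The byte-level layout beyond the fields named above is not fully specified in this description. A minimal sketch of building the system setting packet and checking its ACK, assuming the field order header / length / source port / destination port / 16-bit remote address / data, and assuming the role byte equals the formation serial number, could look like this (`build_setup_packet` and `parse_ack` are illustrative names):

```python
import struct

HEADER = 0xFE

def build_setup_packet(role: int) -> bytes:
    """Build a system setting packet. Assumed field order:
    header, packet length, source port, destination port,
    16-bit remote address (0xFFFF = broadcast), data."""
    payload = bytes([role])                  # 1-byte role division field
    length = len(payload) + 4                # stated rule: data length + 4
    return struct.pack(">BBBBH", HEADER, length, 0xA4, 0xA4, 0xFFFF) + payload

def parse_ack(pkt: bytes) -> bool:
    """Parse a system setting ACK (ports 0xA5, remote address 0x0022).
    Returns True when the confirmation byte is 0x01 (setting succeeded)."""
    header, length, src, dst, remote = struct.unpack(">BBBBH", pkt[:6])
    if not (header == HEADER and src == 0xA5 and dst == 0xA5 and remote == 0x0022):
        raise ValueError("not a system setting ACK")
    return pkt[6] == 0x01
```

Under these assumptions, the setting packet for the robot with role byte 0x00 comes out as FE 05 A4 A4 FF FF 00.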
(2) Communication protocol between the monitoring computer and the captain robot
After the monitoring computer has successfully completed the formation setup, the designated captain robot takes on the data fusion and collection function for the formation, uploading to the monitoring computer, at a period of T1, a formation status data packet containing the current attitude, position and battery information of every robot in the formation. When a sound source target appears, the captain robot computes the sound source target position and then immediately uploads to the monitoring computer a formation detection data packet containing the sound source bearing detected by every robot in the current formation together with the position, heading and battery information of all robots. The communication timing between the monitoring computer and the captain robot is shown in fig. 4. Table 4 shows the formation status data packet format, and table 5 details the robot status information fields of table 4.
TABLE 4 formation status packet format for system
TABLE 5 detailed robot status information
As shown in the table above, the header of the formation status data packet is 0xFE, and the packet length is the data length plus 4, i.e. 37, written 0x25 in hexadecimal. The source port number and destination port number are defined as 0xA0, indicating that the packet type is a formation status packet. The remote address is the monitoring computer's communication address, 0x0022. The robot's battery level is represented by 1 byte taking the value 0x00 (insufficient) or 0x01 (normal). The robot's heading is represented by 2 bytes, and longitude and latitude by 4 bytes each. The status information of the slave robots in the table follows the same format as that of the captain robot. Table 6 shows the format of the system formation detection data packet, and table 7 details the robot bearing information and status information columns of table 6.
TABLE 6 formation Probe packet Format
TABLE 7 robot orientation information and State information details
As shown in the table above, the header of the formation detection data packet is 0xFE, and the packet length is the data length plus 4, i.e. 47, written 0x2F in hexadecimal. The source port number and destination port number are defined as 0xA1, indicating that the packet type is a formation detection packet. The remote address is the monitoring computer's communication address, 0x0022, consistent with the address set during system initialization. The formation positioning result is represented by 4 bytes: the first 2 bytes carry the x coordinate and the last 2 bytes the y coordinate. The robot's battery level is represented by 1 byte taking the value 0x00 (insufficient) or 0x01 (normal). The robot's sound source bearing and heading are represented by 2 bytes each, and longitude and latitude by 4 bytes each. The bearing information and status information of the slave robots in the table follow the same format as those of the captain robot.
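The per-robot byte widths above are consistent with both stated packet lengths: battery 1 B + heading 2 B + longitude 4 B + latitude 4 B gives 11 bytes per robot, so the status packet length is 37 = 4 + 3 × 11; adding a 2-byte bearing per robot plus the 4-byte positioning result gives the detection packet length 47 = 4 + 4 + 3 × 13. A sketch of decoding one status record follows; the field order and the angle/coordinate scalings are assumptions for illustration:

```python
import struct

def parse_robot_status(rec: bytes) -> dict:
    """Decode one 11-byte robot status record.
    Assumed field order: battery (1 B), heading (2 B), longitude (4 B),
    latitude (4 B); heading in 0.01-degree units and coordinates in
    1e-7-degree units are assumptions for illustration."""
    battery, heading, lon, lat = struct.unpack(">BHii", rec)
    return {
        "battery_ok": battery == 0x01,       # 0x00 = insufficient, 0x01 = normal
        "heading_deg": heading / 100.0,
        "lon_deg": lon / 1e7,
        "lat_deg": lat / 1e7,
    }
```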
(3) Communication protocol between the captain robot and the slave robots
During system operation, on the one hand, each slave robot uses the GPS timing pulse as a synchronous time reference according to its serial number in the formation: the slave robot with serial number 1 transmits immediately after the GPS pulse-per-second trigger, while the slave robot with serial number 2 transmits after a delay of T2, staggering the communication time slots; each uploads its standalone status data packet to the captain robot once per second. When a slave robot detects a sound source target, it immediately uploads a standalone detection data packet to the captain robot. On the other hand, the captain robot sends control data packets to the slave robots in broadcast and unicast modes to control their working states. A robot's standalone status data packet contains its current position, heading, battery and related information; its standalone detection data packet additionally contains its current sound source bearing. The communication timing is shown in fig. 5. Table 8 shows the slave robot standalone status data packet format.
TABLE 8 Slave robot standalone status data packet
As shown in the table above, the header of the standalone status packet is 0xFE, and the packet length is the data length plus 4, i.e. 15, written 0x0F in hexadecimal. The source port number and destination port number are defined as 0xB0, indicating that the packet type is a standalone status packet. The remote address is the captain robot's communication address, consistent with the address set during system initialization. The robot's heading is represented by 2 bytes, and its longitude and latitude by 4 bytes each.
Table 9 shows the slave robot standalone detection data packet format.
TABLE 9 Slave robot standalone detection data packet
As shown in the table above, the header of the standalone detection packet is 0xFE, and the packet length is the data length plus 4, i.e. 17, written 0x11 in hexadecimal. The source port number and destination port number are defined as 0xB1, indicating that the packet type is a standalone detection packet. The remote address is the captain robot's communication address, consistent with the address set during system initialization. The robot's bearing and heading information are represented by 2 bytes each, and its longitude and latitude by 4 bytes each.
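As a cross-check, the detection payload fields above (bearing 2 B + heading 2 B + longitude 4 B + latitude 4 B + battery 1 B) total 13 bytes, matching the stated length 17 = 13 + 4. A hedged sketch of assembling such a packet, with an assumed field order and assumed units, might be:

```python
import struct

def build_detection_packet(bearing_cdeg: int, heading_cdeg: int,
                           lon_e7: int, lat_e7: int,
                           battery_ok: bool, captain_addr: int) -> bytes:
    """Assemble a standalone detection packet (ports 0xB1).
    Field order and units (centidegrees, 1e-7-degree coordinates) are
    assumptions; the byte widths and length rule follow the text."""
    payload = struct.pack(">HHiiB", bearing_cdeg, heading_cdeg,
                          lon_e7, lat_e7, 0x01 if battery_ok else 0x00)
    length = len(payload) + 4                # 13 data bytes -> length field 17
    header = struct.pack(">BBBBH", 0xFE, length, 0xB1, 0xB1, captain_addr)
    return header + payload
```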
The sound detection robot sound signal detection method of the embodiment is as follows:
The sound signal detection method of the sound detection robot is based on time-domain and frequency-domain analysis of a target sound signal s(n) of total length N, where n is the discrete time index. s(n) is divided into frames, with i the frame number and N1 the signal length of each frame; the i-th frame is denoted x_i(n), n = 1, 2, …, N1, and its short-time energy and short-time zero-crossing rate are denoted E_i and Z_i. The spectrum of s(n) is denoted S(w); it is divided into sub-bands, with j the sub-band number and N2 the length of each sub-band, and the energy of the j-th sub-band is denoted P_j. The three main features — time-domain short-time energy E_i, short-time zero-crossing rate Z_i and frequency-domain sub-band energy P_j — are analyzed separately for the target sound signal s(n) and for the ambient noise to obtain normalized thresholds of the three feature quantities for different sound signals, which are then used to detect and identify the sound signals collected by the system. The three feature parameters are compared against their thresholds over the whole signal sequence; when all three are greater than or equal to their thresholds, a signal is considered detected, that moment is taken as the effective starting point of the target signal, and 512 data points after the starting point are taken as the direct-wave signal of the sound. Here:
(1) Short-time energy E_i of any sound signal of the microphone array:
E_i = Σ_{n=1}^{N1} x_i²(n)    (1)
(2) Short-time zero-crossing rate Z_i of any sound signal of the microphone array:
Z_i = (1/2) Σ_{n=2}^{N1} | sgn[x_i(n)] − sgn[x_i(n−1)] |    (2)
where sgn[·] is the mathematical sign function, defined as

sgn[x] = 1 for x ≥ 0, and sgn[x] = −1 for x < 0    (3)
(3) Frequency-domain sub-band energy P_j of any sound signal of the microphone array:
P_j = Σ_{w=(j−1)·N2+1}^{j·N2} |S(w)|²    (4)
where S(w) is the spectrum of any sound signal of the microphone array:

S(w) = Σ_{n=1}^{N} s(n) e^{−j2πwn/N}, w = 1, 2, …, N    (5)
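The three features and the threshold test of the detection method can be sketched as follows; the frame stepping, the number of sub-bands and the way the sub-band energies are compared to their threshold are assumptions, since the description only requires all three features to reach their thresholds:

```python
import numpy as np

def frame_features(x):
    """Short-time energy E_i (eq. 1) and zero-crossing rate Z_i (eqs. 2-3)
    of one frame x."""
    x = np.asarray(x, dtype=float)
    E = np.sum(x ** 2)
    s = np.sign(x)
    s[s == 0] = 1                            # sgn[x] = 1 for x >= 0
    Z = 0.5 * np.sum(np.abs(np.diff(s)))
    return E, Z

def subband_energies(x, n_bands):
    """Frequency-domain sub-band energies P_j (eq. 4)."""
    S = np.fft.rfft(np.asarray(x, dtype=float))
    bands = np.array_split(np.abs(S) ** 2, n_bands)
    return np.array([b.sum() for b in bands])

def detect_start(sig, frame_len, thr_E, thr_Z, thr_P, n_bands=8):
    """Return the start index of the first frame whose E, Z and peak
    sub-band energy all reach their thresholds, or None."""
    for i in range(0, len(sig) - frame_len + 1, frame_len):
        x = sig[i:i + frame_len]
        E, Z = frame_features(x)
        P = subband_energies(x, n_bands)
        if E >= thr_E and Z >= thr_Z and P.max() >= thr_P:
            return i                         # take 512 samples from here as the direct wave
    return None
```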
The sound direction algorithm of the sound detection robot of the present embodiment is as follows:
The sound orientation algorithm of the sound detection robot adopts generalized cross-correlation time-delay estimation. After a target sound source is detected, the generalized cross-correlation time-delay estimation algorithm is applied to the direct-wave signals of the four microphones to obtain their time differences of arrival, and the horizontal azimuth of the sound source relative to the array is obtained from the geometry of the quaternary array.
Let s_i(n) and s_j(n) be the i-th and j-th direct-wave sound signals obtained after detection.

(1) The generalized cross-correlation function R′[n] of s_i(n) and s_j(n) is

R′[n] = (1/N) Σ_ω W_n(ω) G_ij(ω) e^{jωn}    (6)
where

G_ij(ω) = S_i(ω) S_j*(ω)    (7)

is the frequency-domain cross-correlation (cross-power spectrum) of s_i(n) and s_j(n), with * denoting complex conjugation.
Here S_i(ω) and S_j(ω) are the spectra of s_i(n) and s_j(n) respectively, calculated as shown in formula (5), and W_n(ω) is a frequency-domain weighting function.
(2) Direction solution
The maximum of the obtained R′[n] is searched and its position found, giving the arrival-time difference τ of the sound at the two microphones; combined with the spacing of the microphone pair, the sound source direction is calculated from the geometric relation.
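A compact sketch of the delay estimation and the pair-bearing step is given below; the PHAT weighting is one common choice for W_n(ω) and is an assumption here, as is the far-field model in `pair_azimuth`:

```python
import numpy as np

def gcc_phat_delay(si, sj, fs):
    """Estimate the arrival-time difference of si relative to sj (seconds)
    by generalized cross-correlation; PHAT weighting assumed for W_n."""
    n = len(si) + len(sj)
    Si = np.fft.rfft(si, n)
    Sj = np.fft.rfft(sj, n)
    G = Si * np.conj(Sj)                     # cross-power spectrum, eq. (7)
    R = np.fft.irfft(G / (np.abs(G) + 1e-12), n)
    max_shift = n // 2
    R = np.concatenate((R[-max_shift:], R[:max_shift + 1]))
    return (np.argmax(np.abs(R)) - max_shift) / fs

def pair_azimuth(tau, d, c=343.0):
    """Far-field bearing (degrees) of a source relative to one microphone
    pair with spacing d metres, from the time difference tau."""
    return np.degrees(np.arccos(np.clip(c * tau / d, -1.0, 1.0)))
```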
The cross direction-finding algorithm for the system's sound source localization in this embodiment is as follows:
the system establishes a two-dimensional xoy plane coordinate system by taking the position o of the captain robot as the origin of the coordinate system, the geographical north direction as the positive direction of the x axis and the geographical east direction as the positive direction of the y axis.
Based on a cross direction-finding algorithm, the captain robot fuses the sound source bearing angles φ_i measured at the same instant by the M sound detection robots with each robot's current heading angle ψ_i to obtain the azimuth angle θ_i of the sound source in the geodetic coordinate system. The longitude and latitude of every robot in the formation, obtained via GPS, are converted into position coordinates p_i(x_i, y_i) in the geodetic coordinate system relative to the captain robot's position. The target position X̂ = (x̂, ŷ) is then obtained with the cross direction-finding localization algorithm and a least squares algorithm. The subscript i denotes the robot serial number, i = 1, 2, …, M; the principle is shown in fig. 6. The target position estimate is:

X̂ = (AᵀA)⁻¹ Aᵀ C    (10)
wherein
A = [sinθ_1 −cosθ_1; sinθ_2 −cosθ_2; … ; sinθ_N −cosθ_N]  (an N×2 matrix)
C = [g_1 g_2 … g_N]ᵀ    (11)
g_i = x_i sinθ_i − y_i cosθ_i    (12)
The azimuth angle θ_i of the sound source in the geodetic coordinate system is calculated as follows, where φ_i is the bearing measured by robot i relative to its own array and ψ_i is its current heading angle:

When φ_i + ψ_i < 360°, the sound source azimuth in the geodetic coordinate system is

θ_i = φ_i + ψ_i

When φ_i + ψ_i ≥ 360°, the sound source azimuth in the geodetic coordinate system is

θ_i = φ_i + ψ_i − 360°
The microphone array bearing of each sound detection robot's sound source orientation module runs from 0° to 360° in the clockwise direction.
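The cross direction-finding least-squares step described above (solving x·sinθ_i − y·cosθ_i = g_i over all robots, cf. eqs. (11)-(12)) can be sketched as follows, using the coordinate convention of the description: x points north, y points east, azimuths are measured clockwise from north:

```python
import numpy as np

def localize(positions, bearings_deg):
    """Least-squares intersection of bearing lines: solves
    x*sin(theta_i) - y*cos(theta_i) = g_i (eqs. 11-12) for (x, y).
    positions: (N, 2) robot coordinates, x = north, y = east;
    bearings_deg: geodetic azimuths measured clockwise from north."""
    th = np.radians(np.asarray(bearings_deg, dtype=float))
    x, y = np.asarray(positions, dtype=float).T
    A = np.column_stack((np.sin(th), -np.cos(th)))
    C = x * np.sin(th) - y * np.cos(th)      # g_i, eq. (12)
    X_hat, *_ = np.linalg.lstsq(A, C, rcond=None)   # (A^T A)^-1 A^T C
    return X_hat
```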
The sound detection robot based on the distributed sound source detection method of the embodiment:
the supervisory control computer of the robot sound source collaborative detection system in the embodiment is a PC-side program, developed by adopting MATLAB GUI, and the captain robot and the slave robot in the embodiment are completely the same in hardware and function and have the functions of sound detection, wireless communication, GPS positioning, posture perception and obstacle detection.
In this embodiment the captain robot serves as the communication center node: upward, it communicates with the monitoring computer over a wireless link according to the system communication protocol; downward, it communicates wirelessly with the 2 slave robots in a time-shared manner. From the sound source detection information of every robot in the formation, the captain robot applies the corresponding sound source localization algorithm to obtain the sound source target position and track, while also planning the system's subsequent tasks and commanding the movement of the slave sound detection robots.
The sound detection robot in this embodiment includes a robot platform, a robot control system, and a robot sensor system, as shown in fig. 7.
The robot platform of the sound detection robot in this embodiment comprises a mechanical structure and an electrical transmission mechanism. The platform is a wheeled four-wheel-drive intelligent cart robot measuring 23 cm × 19 cm, with good stability and maneuverability, as shown in fig. 7.
The robot control system of the sound detection robot in this embodiment comprises an ARM microcontroller, a power supply system and a motor drive module. This embodiment uses an STM32F103ZET6 32-bit high-performance ARM microcontroller; the power supply system uses a 9 V, 5000 mAh rechargeable lithium battery; and the motor drive module uses the high-current motor driver chip L293D. The control system is stable and powerful.
The robot sensor system of the sound detection robot in the embodiment includes a sound source orientation module, a positioning module, a wireless communication module, an obstacle avoidance sensor module and an attitude sensor module.
The sound source orientation module of the robot sensor system of the sound detection robot in this embodiment comprises a microphone array, an analog preprocessing circuit and a data acquisition and signal processing unit, as shown in fig. 10. In this example, the sound source orientation module is connected to the robot control system through an SPI interface, as shown in fig. 7.
In this embodiment, the quaternary cross microphone array of the sound source orientation module is built from electret microphones selected for good channel consistency. The four microphones are arranged in a cross in the same plane; when the vehicle body is level, this plane is parallel to the horizontal plane, and the spacing of each microphone pair is 0.2 m, as shown in fig. 9.
In this embodiment, the analog preprocessing circuit of the sound source orientation module uses AD8656 operational amplifiers to build the amplification and filtering circuit; it preprocesses the analog signals output by the microphone array and passes the output to the data acquisition and signal processing unit for sampling. The data acquisition and signal processing unit is built from a multi-channel synchronous-acquisition AD chip and a DSP chip: the AD chip is the four-channel high-speed synchronous AD7606, with a maximum sampling frequency of 200 kHz, and the DSP chip is the TMS320F2812.
In this embodiment, a u-blox NEO-M8N GPS module is selected as the positioning module of the sound detection robot's sensor system; it is connected to the robot control system through a UART interface, as shown in fig. 8.
In this embodiment, the wireless communication module of the robot sensor system is the 2.4 GHz ZigBee module DL-LN32P, with a maximum communication range of 500 m; it is connected to the robot control system through a UART interface, as shown in fig. 8.
In this embodiment, the obstacle avoidance sensor module of the robot sensor system uses two pairs of infrared emitter-receiver tube sensors with a maximum detection distance of 0.5 m, connected to the robot control system through ordinary I/O ports, as shown in fig. 8.
The attitude sensor of the robot sensor system in this embodiment is the GY953 nine-axis attitude sensor, which directly outputs three-axis Euler angles and is connected to the robot control system through an SPI interface.
The innovation of this patent lies in the following: previous robot sound source detection systems were all based on a single robot, and because of the size limit of a single robot, the aperture of the onboard microphone array is small. The sound source orientation accuracy of such systems is acceptable, but the large ranging error leads to low positioning accuracy and a limited detection range. The present system combines distributed sound source localization with multi-robot cooperation; it offers high detection accuracy, strong environmental adaptability and fault tolerance, and, as a new means of swarm-robot environmental perception, greatly improves the robots' environmental awareness, lays a good foundation for intelligent robot environmental perception and formation cooperation, and has broad application prospects.
The invention is not limited to the embodiments described above; any equivalent modification of the technical solution of the invention made by a person skilled in the art after reading the description falls within the scope of the claims of the invention.

Claims (10)

1. A distributed sound source detection method, characterized in that the method comprises the following steps:

Step 1, system layout: M sound detection robots are distributed over the area to be monitored, and a monitoring computer is placed within wireless communication range, where M ≥ 3.

Step 2, role assignment: according to the role division rules, the M sound detection robots are divided into one captain robot and M−1 slave robots to form a robot formation; the captain robot serves as the communication and data processing center of the formation and interacts with each slave robot over wireless links, where M ≥ 3. The role division rules are defined as follows: after power-on, each robot performs system initialization; during initialization it senses its own position and heading information through its sensor system and measures the environmental noise in its area, and after initialization each robot uploads this information to the monitoring computer. The monitoring computer ranks the robots by environmental noise and designates the robot with the lowest environmental noise as the captain robot, with formation number 1; the remaining robots are slave robots, with formation numbers 2 … M. The environmental noise is measured as follows: during the robot initialization stage, every sound detection robot collects T seconds of ambient sound and computes the average power of this signal to obtain the current environmental noise level.

Step 3, information perception and signal detection: all sound detection robots sense their own attitude and position information in real time, collect sound signals, and compare the time-domain short-time energy, short-time zero-crossing rate and frequency-domain sub-band energy of the received sound signal against the initially set thresholds of these quantities; when all three are greater than or equal to their respective thresholds, a valid signal is considered detected and its effective starting point is obtained, and step 4 is executed; otherwise step 3 is repeated.

Step 4, sound source orientation and information interaction: starting from the detected starting point of its valid signal, each sound detection robot selects the same number of data points from the signal sequence and applies the generalized cross-correlation time-delay-estimation orientation algorithm to obtain the azimuth of the sound source target relative to its own microphone array; the slave robots upload their sound source detection results together with their own position and attitude information to the captain robot.

Step 5, data fusion for sound source localization: after receiving the sound source detection results and the position and heading information of every slave robot, the captain robot combines them with its own detection data, fuses the current position information, heading information and sound source orientation results of all robots in the system to obtain all robots' sound source bearings relative to the ground coordinate system, applies the distributed sound source localization algorithm to obtain the sound source target position, and uploads the target detection result to the monitoring computer.

Step 6, formation cooperation for localization optimization: for a continuous sound source target, the captain robot analyzes, from the initial detection result, the relative relationship between the position of the continuous source and the spatial connection topology of the current sound detection robots, and sends each robot in turn a control command containing the coordinates of an optimized point; each slave robot in the formation moves autonomously to its optimized point using an inertial navigation algorithm and a PID control algorithm, optimizing the layout of the distributed sound source detection system and improving the system's sound source localization and trajectory-tracking accuracy.

2. A sound detection robot constructed according to the distributed sound source detection method of claim 1, comprising a robot platform, a robot control system and a robot sensor system; the robot platform comprises a mechanical structure and an electrical transmission mechanism; the robot control system comprises an ARM microcontroller, a power supply module and a motor drive module; the robot sensor system comprises a sound source orientation module, a positioning module, a wireless communication module, an obstacle avoidance sensor module and an attitude sensor module; the sound source orientation module of the robot sensor system senses ambient sound in real time, computes the bearing of a sound source target when one appears in the environment, and passes the result to the control system; the attitude sensor module and the obstacle avoidance sensor module of the robot sensor system output the robot's current three-axis attitude data and obstacle detection data to the robot control system in real time; for a slave robot in the system, the sensor data received by its control system is uploaded to the captain robot through the wireless module; for the captain robot, its control system receives its own sensor data while wirelessly receiving the data of the other slave robots, fuses them into the sound source target orientation result, and uploads the result to the monitoring computer through the wireless communication module.

3. The sound detection robot of claim 2, wherein the sound source orientation module of the robot sensor system comprises a microphone array, an analog preprocessing circuit and a data acquisition and signal processing unit.

4. The sound detection robot of claim 3, wherein the microphone array of the sound source orientation module comprises two connecting rods and four microphones; the two rods lie in the same horizontal plane and are joined to form a cross; the four microphones sit at the rod ends at equal axial distances from the center of the cross; the microphones are electret microphones or MEMS microphones.

5. The sound detection robot of claim 2, wherein the analog preprocessing circuit of the sound source orientation module uses operational amplifiers to build an amplification and filtering circuit that amplifies and filters the analog signals output by the microphone array and passes the output to the data acquisition and signal processing unit for sampling and signal processing to compute the sound source target direction.

6. The sound detection robot of claim 3, wherein the data acquisition and signal processing unit of the sound source orientation module is built from a multi-channel synchronous-acquisition AD chip and a DSP chip, the AD chip having a maximum sampling frequency of at least 20 kHz.

7. The sound detection robot of claim 2, wherein the positioning module of the robot sensor system is a GPS, Beidou or GLONASS positioning module used for real-time positioning and timing of the sound detection robot.

8. The sound detection robot of claim 3, wherein the wireless communication module of the robot sensor system is a wireless communication module supporting 4G mobile network, ZigBee or WiFi protocols, used for information interaction among the robots of the system.

9. The sound detection robot of claim 3, wherein the attitude sensor module of the robot sensor system is an attitude sensor based on MEMS technology, used to sense the robot's real-time heading and attitude information.

10. The sound detection robot of claim 3, wherein the obstacle avoidance sensor module of the robot sensor system uses infrared or ultrasonic sensor modules to achieve real-time detection of obstacles in the robot's direction of travel.
CN201711221413.2A 2017-11-29 2017-11-29 Distributed sound source detection method and sound detection robot based on same Active CN108089154B (en)

Publications (2)

Publication Number Publication Date
CN108089154A CN108089154A (en) 2018-05-29
CN108089154B true CN108089154B (en) 2021-06-11

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958202B (en) * 2018-07-27 2020-11-24 Qiqihar University A method for multi-robot collaborative exploration
CN110300355A (en) * 2019-05-07 2019-10-01 Guangdong University of Technology An intelligent microphone that moves to follow the sound source position
CN110764053B (en) * 2019-10-22 2021-08-17 Zhejiang University A passive localization method for multiple targets based on an underwater sensor network
CN111110490A (en) * 2019-12-13 2020-05-08 Nanfang Hospital, Southern Medical University A multifunctional surgical nursing cart
CN113331135A (en) * 2021-06-29 2021-09-03 Tibet Xinhao Technology Co., Ltd. Statistical method for the death-and-culling rate and survival rate of crushed piglets
CN113326899A (en) * 2021-06-29 2021-08-31 Tibet Xinhao Technology Co., Ltd. Piglet crushing detection method based on a deep learning model
CN113791727B (en) * 2021-08-10 2023-03-24 Institute of Intelligent Manufacturing, Guangdong Academy of Sciences Edge acquisition equipment applied to industrial acoustic intelligent sensing
CN115267454A (en) * 2022-07-29 2022-11-01 Shenzhen Yijiahe Technology R&D Co., Ltd. Fusion-type AI acoustic imager and imaging method thereof
CN116448231A (en) * 2023-03-24 2023-07-18 Anhui Tongjun Technology Co., Ltd. Urban environment noise monitoring and intelligent recognition system
CN117111139B (en) * 2023-08-04 2024-03-05 China Institute of Water Resources and Hydropower Research A high-coverage multi-point rapid detection device and method for termite nests in dams
CN118965144B (en) * 2024-10-16 2025-02-18 Xi'an Shiyou University Offshore wellhead multi-sensor detection method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08145714A (en) * 1994-11-18 1996-06-07 Shigeo Hirose Information fusion system
CN1297394A (en) * 1999-03-24 2001-05-30 Sony Corporation Robot
US7298868B2 * 2002-10-08 2007-11-20 Siemens Corporate Research, Inc. Density estimation-based information fusion for multiple motion computation
CN102411138A (en) * 2011-07-13 2012-04-11 Peking University A method for robot sound source localization
CN205067729U (en) * 2015-08-17 2016-03-02 Qihan Technology Co., Ltd. Sound localization processing module for realizing the auditory function of a robot
CN105425212A (en) * 2015-11-18 2016-03-23 Northwestern Polytechnical University Sound source locating method
CN106405499A (en) * 2016-09-08 2017-02-15 Nanjing Avatar Robot Technology Co., Ltd. Method for a robot to locate a sound source
CN206643934U (en) * 2017-03-31 2017-11-17 Changchun University of Science and Technology Multi-information acquisition and perception search and rescue robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013032955A1 (en) * 2011-08-26 2013-03-07 Reincloud Corporation Equipment, systems and methods for navigating through multiple reality models


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Bi-Direction Interaural Matching Filter and Decision Weighting Fusion for Sound Source Localization in Noisy Environments; Liu Hong; IEICE Transactions on Information and Systems; 2016-12-31; Vol. E99D, No. 12; full text *
Fusing Sound and Dead Reckoning for Multi-robot Cooperative; Yu-Han Cheng, Qing-Hao Meng, Ying-Jie Liu, Ming Zeng, Le Xue; 2016 12th World Congress on Intelligent Control and Automation; 2016-09-29; full text *
Sound-based distributed multi-robot relative localization; Wu Yuxiu, Meng Qinghao, Zeng Ming; Acta Automatica Sinica; 2014-05-31; Vol. 40, No. 5; full text *

Also Published As

Publication number Publication date
CN108089154A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN108089154B (en) Distributed sound source detection method and sound detection robot based on same
JP6377169B2 (en) System and method for estimating UAV position
CN108151747B (en) An indoor positioning system and positioning method using the fusion of acoustic signals and inertial navigation
JP6330200B2 (en) SOUND SOURCE POSITION ESTIMATION DEVICE, MOBILE BODY, AND MOBILE BODY CONTROL METHOD
CN102901949B (en) Two-dimensional spatial distribution type relative sound positioning method and device
CN108226852B (en) Unmanned aerial vehicle operator positioning system and method based on aerial radio monitoring platform
CN105142239B (en) Wireless sense network mobile sink method of data capture based on data value dynamic estimation
CN104535966A (en) Indoor navigation system of intelligent wheelchair and control method of system
CN107490377A (en) Indoor map-free navigation system and navigation method
CN102890267A (en) Microphone array structure alterable low-elevation target locating and tracking system
CN105180945A (en) Indoor motion trail reconstructing method and system based on mobile intelligent terminal
Al-Mashhadani et al. Role and challenges of the use of UAV-aided WSN monitoring system in large-scale sectors
Liu et al. The performance evaluation of hybrid localization algorithm in wireless sensor networks
CN114501325A (en) Indoor navigation method and device, electronic equipment and storage medium
CN113825100A (en) Positioning and object-searching method and system
CN112954591B (en) Cooperative distributed positioning method and system
CN116847321A (en) A Bluetooth beacon system and Bluetooth positioning method
Zhang et al. Stam: A system of tracking and mapping in real environments
CN204043703U (en) A kind of indoor environment data acquisition system (DAS)
Kemppainen et al. An infrared location system for relative pose estimation of robots
CN207788963U (en) Robotic vision system
Alhmiedat et al. A hybrid tracking system for ZigBee WSNs
CN113433977B (en) Method and system for high-rise building fire detection based on UAV swarm
JP2021092418A (en) Flight vehicle and flight vehicle positioning system
CN112720448A (en) Positioning robot for self-recognition and positioning system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant