Disclosure of Invention
The technical problem solved by the invention is as follows: aiming at the deficiencies of the prior art and the requirements of intelligent robots for sound perception, a novel distributed sound source detection method based on a multi-robot system is provided by combining distributed sound source localization technology with robot formation cooperation technology, exploiting the commonality of their communication systems and hardware resources.
The technical scheme of the invention is as follows: a new distributed sound source detection method is characterized in that:
the method is based on a multi-robot formation cooperative sound source detection system consisting of 1 monitoring computer and a plurality of sound detection robots, and comprises the following steps.
Step one, system layout: the M (M ≥ 3) sound detection robots are dispersedly placed in the area to be monitored according to the optimal layout method, and the monitoring computer is placed inside the area or within d meters (d ≤ 500) outside it. The optimal layout method of the robots is defined as follows:
according to the characteristics of the area to be monitored and the number of robots in the formation, the optimal layout of the system means that, when the robots are laid out in a two-dimensional plane, the geometric figure formed by the lines connecting adjacent robots tends to a regular polygon. For example, when the system comprises 3 robots, an equilateral-triangle array is adopted and the robots are spread as widely as the field conditions and the system's signal-to-noise requirement allow.
Step two, role allocation: the M (M ≥ 3) sound detection robots are divided into 1 captain robot and M−1 slave robots according to the role division rule to form a robot formation. The captain robot serves as the communication and data processing center of the formation and interacts with each slave robot through wireless communication. The role division rule is defined as follows:
after power-on, each robot performs system initialization, during which it senses its own position and heading information through the sensor system and collects the environmental noise of the area where it is located; after initialization is finished, each robot uploads this information to the monitoring computer;
the monitoring computer sorts the robots by their environmental noise; the robot with the lowest environmental noise is designated the captain robot, with formation serial number 1, and the remaining robots are slave robots, with formation serial numbers 2 … M in turn. The environmental noise is measured as follows: during the robot initialization stage, every sound detection robot collects the environmental sound signal for T seconds and calculates the average power of the signal to obtain the current environmental noise level;
step three, information perception and signal detection: all sound detection robots sense their own attitude and position information in real time in a static state, collect sound signals, and compare the time-domain short-time energy, short-time zero-crossing rate and frequency-domain sub-band energy of the received sound signal with the initially set thresholds for these three quantities. When each of the initially set thresholds is less than or equal to the corresponding measured quantity, an effective signal is considered detected and the starting point of the effective signal is obtained, after which step four is executed; otherwise step three is executed cyclically.
Step four, sound source orientation and information interaction: each sound detection robot selects the same number of data points starting from the starting point of the effective sound signal it detected, and applies the generalized cross-correlation time-delay-estimation sound orientation algorithm to obtain the azimuth angle of the sound source target relative to its own microphone array. Each slave robot uploads its sound source detection result and its own position and attitude information to the captain robot.
Step five, sound source localization through data fusion: after obtaining the sound source detection results and the position and attitude information from each slave robot, the captain robot combines them with its own detection data, fuses the position information, heading information and sound source orientation results of all robots currently in the system to obtain each robot's sound source azimuth relative to the ground coordinate system, obtains the sound source target position with a distributed sound source localization algorithm, and uploads the target detection result to the monitoring computer.
Step six, the formation cooperatively realizes localization optimization: the captain robot judges the type of the sound source according to the detection history of the sound source target. If the same sound source target is continuously detected m times (m ≥ 5) within a time T_continue (T_continue ≥ 10 s), it is judged to be a suspected continuous sound source target; otherwise it is judged to be a suspected burst sound source.
If the position of the sound source target remains unchanged over the continuous detections, the sound-producing object is judged to be a suspected static target; otherwise it is judged to be a suspected dynamic target. The captain robot then plans the system task: for suspected static sound sources and suspected dynamic burst sound sources, it sends control commands ordering all robots of the system to keep the current formation and perform in-place static sound source detection of the target.
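For illustration only, the judgment rules above can be sketched as follows (the data structure, function names and the position tolerance are assumptions introduced here; only the conditions of m ≥ 5 detections within T_continue ≥ 10 s and the unchanged-position criterion come from the description):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    t: float                  # detection time, seconds
    pos: Tuple[float, float]  # estimated source position (x, y), metres

def classify_source(dets: List[Detection],
                    t_continue: float = 10.0,  # T_continue >= 10 s
                    m: int = 5,                # m >= 5 repeated detections
                    move_tol: float = 1.0):    # hypothetical "position unchanged" tolerance, metres
    """Classify a detected sound source: continuous vs. burst, static vs. dynamic."""
    if not dets:
        return "no detection", "unknown"
    continuous = len(dets) >= m and (dets[-1].t - dets[0].t) <= t_continue
    kind = "suspected continuous source" if continuous else "suspected burst source"
    xs = [d.pos[0] for d in dets]
    ys = [d.pos[1] for d in dets]
    static = (max(xs) - min(xs) <= move_tol) and (max(ys) - min(ys) <= move_tol)
    motion = "suspected static target" if static else "suspected dynamic target"
    return kind, motion
```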
For a suspected dynamic continuous sound source target, the captain robot analyses, from the initial detection results, the relation between the position of the continuous sound source and the spatial geometric topology formed by the current sound detection robots, and sends to each robot in turn a control instruction containing the coordinates of its assigned optimization point and the task type. Each slave robot in the formation autonomously moves to its new position, optimizing the distributed detection formation of the system so that the target tends to lie at the centre of the robots' spatial geometric topology, which improves the accuracy of sound source localization and track tracking.
The further technical scheme of the invention is as follows: a sound detection robot based on the method comprises a robot platform, a robot control system and a robot sensor system. The robot platform includes a mechanical structure and an electrical drive mechanism. The control system comprises an ARM microcontroller, a power supply system and a motor driving module. The sensor system comprises a sound source orientation module, a positioning module, a wireless communication module, an obstacle avoidance sensor module and an attitude sensor module.
The sound source orientation module of the sensor system of the sound detection robot comprises a microphone array, an analog preprocessing circuit and a data acquisition and signal processing unit.
The microphone array of the sound source orientation module of the sensor system of the sound detection robot comprises two connecting rods and four microphones; the two connecting rods lie in the same horizontal plane and are joined to form a cross, and the four microphones are located at the rod ends, equidistant from the axis of the cross centre. The analog preprocessing circuit of the sound source orientation module uses operational amplifiers to build the amplification and filtering circuit, preprocesses the analog signals output by the microphone array, and passes the output analog signals to the data acquisition and signal processing unit for sampling, from which the sound source target direction is calculated. The data acquisition and signal processing unit of the sound source orientation module is built from an AD chip and a DSP chip, where the AD chip is a multi-channel synchronous-acquisition AD chip with a maximum sampling frequency of at least 20 kHz.
The positioning module of the sensor system of the sound detection robot can adopt a GPS positioning module, a Beidou positioning module or a Glonass positioning module and is used for carrying out real-time positioning and time service on the sound detection robot.
The wireless communication module of the robot sensor system can be of various types, such as a wireless communication module supporting a 4G mobile network, ZigBee or WIFI protocol, and is used for information interaction of a plurality of robots in the system.
The attitude sensor module of the robot sensor system of the sound detection robot can adopt different types of sensors, such as an attitude sensor based on MEMS technology, and is used for sensing real-time course attitude information of the robot.
The obstacle avoidance sensor module of the robot sensor system of the sound detection robot can adopt an infrared or ultrasonic sensor module, so that the obstacle in the advancing direction of the robot can be detected in real time.
The signal flow of the single sound detection robot is as follows:
a sound source orientation module in the robot sensor system senses environmental sounds in real time, calculates the sound source bearing when a sound source target appears in the environment, and transmits the result to the control system. The attitude sensor and the ultrasonic sensor in the robot sensor system output the robot's current three-axis attitude data and obstacle detection data to the robot control system in real time. For a slave robot, the sensor data gathered by its control system is uploaded to the captain robot through the wireless module. For the captain robot, the control system receives the data of its own sensor system, simultaneously receives the data of the slave robots wirelessly, fuses them to obtain the sound source target localization result, and uploads the result to the monitoring computer through the wireless communication module.
Effects of the invention
The technical effects of the invention are as follows: compared with a traditional single-robot sound source detection system, the multi-robot cooperative sound source detection system based on distributed sound source localization technology has the advantages of a large detection range, high detection accuracy, good environmental adaptability, and strong fault tolerance and survivability. As a new means of swarm-robot environment perception, it greatly improves the robots' environment perception capability and lays a good foundation for intelligent robot environment perception and formation cooperation.
Detailed Description
The core idea of the invention is to use a plurality of mobile robots with sound source detection capability and, through a wireless sensor network, fuse the sound source orientation information of the robots with a cross direction-finding algorithm, so as to realize real-time localization and track tracking of a sound source target.
The distributed sound source detection method of this embodiment is as follows:
the method is based on a multi-robot formation cooperative sound source detection system consisting of 1 monitoring computer and 3 sound detection robots, and comprises the following steps, as shown in Fig. 1 of the description.
Step one, system layout: according to the field conditions and the optimal layout method, the 3 sound detection robots are arranged in a 50 m × 50 m square area so as to form a regular-triangle topology as far as possible, with a spacing of 30 meters between adjacent robots; the monitoring computer is placed within 50 meters outside the area. The optimal layout method is as follows:
according to the characteristics of the area to be monitored and the number of robots in the formation, the optimal layout of the system means that, when the robots are laid out in a two-dimensional plane, the geometric figure formed by the lines connecting adjacent robots tends to a regular polygon as far as possible; with 3 robots the layout is an equilateral-triangle array. When the sound source intensity is 80 dB, the spacing between adjacent robots in the layout is kept within 50 meters.
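A minimal sketch of the regular-polygon layout rule described above (the circumradius formula and the helper function are illustrative; the description itself only fixes the spacing constraints, e.g. 30 m spacing in the embodiment and at most 50 m at a source level of 80 dB):

```python
import math

def regular_polygon_layout(m: int, spacing: float, center=(0.0, 0.0)):
    """Place m robots (m >= 3) on a regular m-gon whose side length equals `spacing`.

    Returns a list of (x, y) positions; for m = 3 this is the equilateral-triangle
    layout used in the embodiment (spacing = 30 m inside a 50 m x 50 m area).
    """
    # circumradius of a regular m-gon with side length `spacing`
    r = spacing / (2.0 * math.sin(math.pi / m))
    cx, cy = center
    return [(cx + r * math.cos(2.0 * math.pi * k / m),
             cy + r * math.sin(2.0 * math.pi * k / m)) for k in range(m)]

# example: 3 robots with 30 m spacing, as in step one of the embodiment
print(regular_polygon_layout(3, 30.0))
```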
Step two, role allocation: the system comprises 3 sound detection robots, which are divided into 1 captain robot and 2 slave robots according to the role division method to form a robot formation. The captain robot serves as the communication and data processing center of the formation and interacts with each slave robot through wireless communication. The system architecture is shown in Fig. 2. The role division rule is defined as follows:
after power-on, each robot performs system initialization, during which it senses its own position and heading information through the sensor system and collects the environmental noise of the area where it is located; after initialization is finished, each robot uploads this information to the monitoring computer;
and sequencing the robots in sequence according to the environmental noise of the robots, wherein the designated robot with low environmental noise is the captain robot, the formation serial number is 1, and the other robots are the slave robots, and the formation serial numbers are 2 and 3 in sequence. The environmental noise measuring method comprises the following steps: in the robot initialization stage, all sound detection robots collect environmental sound signals of T (T is more than or equal to 5s and less than or equal to 10s) seconds at a certain sampling rate, absolute values of the signal sequences are obtained, then the absolute values are accumulated and summed, and the average is obtained to obtain the current environmental noise.
Step three, information perception and signal detection: all sound detection robots collect sound signals in real time in a static state and analyse them with a sound detection algorithm. The detection principle is to compare the time-domain short-time energy, short-time zero-crossing rate and frequency-domain sub-band energy of the received sound signal with the initially set thresholds for these three quantities; when each of the initially set thresholds is less than or equal to the corresponding measured quantity, an effective signal is considered detected and the starting point of the effective signal is obtained.
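A sketch of the three-feature detection logic, assuming a frame length, band edges and threshold values that are not specified in the description; the 512-point direct-wave interception follows the detection method given later in this embodiment:

```python
import numpy as np

def detect_effective_signal(x: np.ndarray, fs: float,
                            e_thr: float, z_thr: float, p_thr: float,
                            frame_len: int = 256, band=(100.0, 3000.0)):
    """Frame-by-frame detection: an effective signal starts at the first frame whose
    short-time energy, short-time zero-crossing rate and in-band (sub-band) energy
    all reach their thresholds; 512 samples from that point are taken as the
    direct-wave segment (frame_len and the band edges are illustrative choices)."""
    n_frames = len(x) // frame_len
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len]
        energy = float(np.sum(frame ** 2))                        # short-time energy E_i
        zcr = float(np.sum(np.abs(np.diff(np.sign(frame)))) / 2)  # short-time zero-crossing rate Z_i
        spec = np.fft.rfft(frame)
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        band_energy = float(np.sum(np.abs(spec[mask]) ** 2))      # sub-band energy P_j
        if energy >= e_thr and zcr >= z_thr and band_energy >= p_thr:
            start = i * frame_len
            return start, x[start:start + 512]                    # starting point and direct-wave signal
    return None, None
```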
Step four, sound source orientation and information interaction: each sound detection robot selects the same number of sound data points starting from the starting point of the effective signal it detected, and applies the sound source orientation algorithm, namely the generalized cross-correlation time-delay-estimation sound orientation algorithm, to obtain the azimuth angle and pitch angle of the sound source target relative to its own microphone array; each slave robot uploads its sound source orientation result and its own position and attitude information to the captain robot according to the system communication protocol.
Step five, sound source localization through data fusion: after obtaining the sound source detection results and the position and attitude information from each slave robot, the captain robot combines them with its own data, fuses the position information, heading information and sound source orientation results of all robots in the system to obtain each robot's sound source azimuth relative to the ground coordinate system, constructs the topology of the system, adjusts the parameters of the distributed localization algorithm, obtains the sound source target position with the direction-finding cross algorithm, and uploads the target detection result to the monitoring computer according to the system communication protocol.
Step six, the formation cooperatively realizes localization optimization: the captain robot judges the type of the sound source according to the detection history of the sound source target; if the same sound source target is detected 5 times within 10 seconds, it is judged to be a suspected continuous sound source target, otherwise a suspected burst sound source.
For a continuous sound source target, the captain robot analyses, from the initial detection results, the relation between the position of the continuous sound source and the spatial geometric topology of the current sound detection robots, and sends to each robot in turn a control instruction containing the coordinates of its optimization point. Each slave robot in the formation moves autonomously to its optimization point using an inertial navigation algorithm and a PID control algorithm, optimizing the layout of the distributed sound source detection system and improving the accuracy of sound source localization and track tracking.
The target sound source of the present embodiment is as follows:
the target sound aimed by the multi-robot system in the embodiment is mainly a broadband stationary signal (the frequency range is between 100Hz and 3000 Hz), such as the engine sound of an automobile, a armored car, a helicopter and the like and a short-time non-stationary signal, such as a gunshot sound and a clapping sound.
The system communication protocol in this embodiment is as follows:
the multi-robot system in this embodiment adopts a centralized communication and control structure, the formation totally includes 3 robots, and each robot has a unique fixed remote address, i.e., a communication address, which is 0x001F,0x0020, and 0x0021, respectively. And in the system initialization stage, the robot roles in the formation are divided by a monitoring computer, and the system is divided into 1 captain robot and 2 slave robots according to the roles, wherein the captain robot is used as a communication center and a data processing center of the system and is responsible for system data aggregation and fusion, and the slave robots detect environmental information under the control of the captain robot. The whole communication control system is shown in figure 2.
The following specifically describes a communication protocol in the system initialization stage, a communication protocol between the monitoring computer and the captain robot in the normal working stage, and a communication protocol between the captain robot and the slave robot.
(1) System initialization phase communication protocol
When the robots of the system have just been started and powered on, the 3 robots in the formation are in the standby state, and the monitoring computer issues a system setup data packet to all robots in the formation in broadcast mode, realizing the role division of the system robots. The robots are divided into one captain robot and two slave robots, whose formation serial numbers are defined as 0, 1 and 2 respectively. After all robots receive the system setup data packet and complete the setup, they reply acknowledgement information to the monitoring computer in the order of their formation serial numbers: the robot with serial number 0 replies immediately, the robot with serial number 1 replies after a delay of T0, and the robot with serial number 2 replies after a delay of 2 × T0, so that the communication time slots are staggered. This is shown in Fig. 3, where the blue part represents the first system setup and the red part represents a system reset.
The system setup data packet contains one main field, the role division field, represented by one byte. In this embodiment the format of the system setup packet is defined as shown in Table 1.
Table 1 System setup data packet format
As shown in the table above, the header of the system setup packet is 0xFE, the packet length is the data length plus 4, and the source port number and destination port number are defined as 0xA4, indicating that the packet type is a system setup instruction. The remote address is 0xFFFF, meaning that broadcast mode is used, i.e., all nodes in the network receive the system setup data packet. The values of the role division field and the corresponding formation serial numbers are shown in Table 2.
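Because Table 1 itself is not reproduced here, the following sketch assumes a field order and field widths; only the 0xFE header, the length = data length + 4 rule, the 0xA4 port numbers, the 0xFFFF broadcast address and the one-byte role field are taken from the description:

```python
import struct

def build_system_setup_packet(role: int) -> bytes:
    """Assemble a system setup packet (field order and widths are assumed, not from Table 1)."""
    header = 0xFE
    src_port = dst_port = 0xA4          # packet type: system setup instruction
    remote_addr = 0xFFFF                # broadcast to all nodes in the network
    payload = struct.pack("B", role)    # one-byte role division field (value per Table 2)
    length = len(payload) + 4           # "packet length = data length + 4"
    return struct.pack(">BBBBH", header, length, src_port, dst_port, remote_addr) + payload

pkt = build_system_setup_packet(0x01)   # the role value here is illustrative
print(pkt.hex())
```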
Table 2 System role division list
The definition of the system setup response packet ACK is shown in table 3.
Table 3 System setup response packet ACK format
As shown in the table above, the source port number and destination port number of the response packet are both 0xA5, the remote address is the communication address of the monitoring computer, 0x0022, and the confirmation field takes the value 0x00 to indicate that the setup failed or 0x01 to indicate that the setup succeeded.
(2) Communication protocol between the monitoring computer and the captain robot
After the monitoring computer successfully completes the setup of the formation, the designated captain robot begins to take on the data aggregation and fusion function of the formation and, at a period T1, uploads to the monitoring computer a formation status data packet containing the current attitude, position and battery information of every robot in the formation. When a sound source target appears, the captain robot, immediately after calculating the sound source target position, uploads to the monitoring computer a formation detection data packet containing the sound source orientation information detected by all robots in the current formation together with the position, heading and battery information of all robots. The communication timing between the monitoring computer and the captain robot is shown in Fig. 4. Table 4 shows the formation status data packet format, and Table 5 details the robot status information in Table 4.
Table 4 Formation status data packet format
Table 5 Robot status information details
As shown in the table above, the header of the formation status data packet is 0xFE, and the packet length is the data length plus 4, i.e., 37, represented as 0x25 in hexadecimal. The source port number and destination port number are defined as 0xA0, indicating that the packet type is a formation status packet. The remote address is the monitoring computer communication address 0x0022. The robot battery level is represented by 1 byte taking the value 0x00 or 0x01, where 0x00 means the battery is low and 0x01 means the battery is normal. The robot heading is represented by 2 bytes, and the longitude and latitude by 4 bytes each. The status information of the slave robots in the table has the same format as that of the captain robot. Table 6 shows the format of the formation detection data packet, and Table 7 details the robot orientation information and status information columns in Table 6.
Table 6 Formation detection data packet format
Table 7 Robot orientation information and status information details
As shown in the table above, the header of the formation detection packet is 0xFE, and the packet length is the data length plus 4, i.e., 47, represented as 0x2F in hexadecimal. The source port number and destination port number are defined as 0xA1, indicating that the packet type is a formation detection packet. The remote address is the communication address 0x0022 of the monitoring computer, consistent with the address configured at system initialization. The formation positioning result is represented by 4 bytes, the first 2 bytes for the x coordinate and the last 2 bytes for the y coordinate. The robot battery level is represented by 1 byte taking the value 0x00 or 0x01, where 0x00 means the battery is low and 0x01 means the battery is normal. The robot sound source orientation information and heading information are each represented by 2 bytes, and the longitude and latitude by 4 bytes each. The orientation information and status information of the slave robots in the table have the same format as those of the captain robot.
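A sketch of how such a formation detection packet could be assembled (field order, byte order, signedness and units are assumptions; the header, port numbers, remote address and field widths follow the description, and for 3 robots the 43-byte payload reproduces the stated packet length of 47, i.e. 0x2F):

```python
import struct

def pack_formation_detection_packet(x: int, y: int, robots: list) -> bytes:
    """Pack a formation detection data packet (illustrative field layout)."""
    payload = struct.pack(">hh", x, y)              # formation positioning result: x, y (2 bytes each)
    for r in robots:                                # one block per robot, captain first
        payload += struct.pack(">HHiiB",
                               r["azimuth"],        # sound source orientation, 2 bytes
                               r["heading"],        # heading, 2 bytes
                               r["longitude"],      # longitude, 4 bytes
                               r["latitude"],       # latitude, 4 bytes
                               r["battery"])        # 0x00 = low, 0x01 = normal
    length = len(payload) + 4                       # "packet length = data length + 4"
    header = struct.pack(">BBBBH", 0xFE, length, 0xA1, 0xA1, 0x0022)
    return header + payload
```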
(3) Communication protocol between the captain robot and the slave robots
When the system is working, on the one hand each slave robot uses the GPS timing pulse as the synchronization time mark and staggers its communication according to its serial number in the formation: the slave robot with serial number 1 replies immediately after the GPS timing second pulse is triggered, and the slave robot with serial number 2 replies after a delay of T2, so that the communication time slots are staggered; each slave robot uploads its stand-alone status data packet to the captain robot every second. When a slave robot detects the sound source target, it immediately uploads a stand-alone detection data packet to the captain robot. On the other hand, the captain robot sends control data packets to the slave robots in broadcast and unicast mode to control their working states. The robot stand-alone status data packet contains the robot's current position, heading, battery level and other information; the robot stand-alone detection data packet contains the robot's current sound source orientation information together with its own position, heading, battery level and other information. The communication timing is shown in Fig. 5. Table 8 shows the slave robot stand-alone status data packet format.
Table 8 Slave robot stand-alone status data packet format
As shown in the table above, the header of the stand-alone status packet is 0xFE, and the packet length is the data length plus 4, i.e., 15, represented as 0x0F in hexadecimal. The source port number and destination port number are defined as 0xB0, indicating that the packet type is a stand-alone status packet. The remote address is the communication address of the captain robot, consistent with the address configured at system initialization. The robot heading information is represented by 2 bytes, and the robot longitude and latitude by 4 bytes each.
Table 9 shows the slave robot stand-alone detection data packet format.
Table 9 Slave robot stand-alone detection data packet format
As shown in the table above, the header of the stand-alone detection packet is 0xFE, and the packet length is the data length plus 4, i.e., 17, represented as 0x11 in hexadecimal. The source port number and destination port number are defined as 0xB1, indicating that the packet type is a stand-alone detection packet. The remote address is the communication address of the captain robot, consistent with the address configured at system initialization. The orientation information and heading information of the robot are each represented by 2 bytes, and the robot longitude and latitude by 4 bytes each.
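The slot scheduling between the captain robot and the slave robots described above can be summarized by a one-line rule (the numeric value of T2 is a placeholder; only the pattern of serial 1 replying immediately and serial 2 after T2 follows the description):

```python
def slot_offset_after_pps(formation_serial: int, t2: float = 0.1) -> float:
    """Delay, in seconds after the GPS 1-pps timing pulse, at which a slave robot
    uploads its stand-alone status packet: serial 1 replies immediately, serial 2
    after T2, and so on."""
    return (formation_serial - 1) * t2
```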
The sound signal detection method of the sound detection robot in this embodiment is as follows:
the sound signal detection method of the sound detection robot is based on time-domain and frequency-domain analysis of a target sound signal s(n) of total length N, where n is the discrete time index. s(n) is divided into frames, with i denoting the frame number and N1 the signal length of each frame; the sound signal of the i-th frame is denoted x_i(n), n = 1, 2 … N1, and its short-time energy and short-time zero-crossing rate are denoted E_i and Z_i. The spectrum of s(n) is denoted S(ω) and is divided into sub-bands, with j denoting the sub-band number and N2 the length of each sub-band of the spectrum; the energy of the j-th sub-band is denoted P_j. The three main features — time-domain short-time energy E_i, short-time zero-crossing rate Z_i and frequency-domain sub-band energy P_j — of the target sound signal s(n) and of the environmental noise are analysed separately to obtain normalized thresholds for the three characteristic quantities of different sound signals, which are used to detect and identify the sound signals collected by the system. Starting from the beginning of the signal sequence, the three characteristic parameters are compared with their thresholds; when each of the three thresholds is smaller than the corresponding measured parameter, a signal is considered detected, that moment is taken as the effective starting point of the target signal, and 512 data points after the starting point are intercepted from the sequence and defined as the direct-wave signal of the sound. Here:
(1) the short-time energy E_i of any sound signal of the microphone array;
(2) the short-time zero-crossing rate Z_i of any sound signal of the microphone array, where sgn[·] is the mathematical sign function;
(3) the frequency-domain sub-band energy P_j of any sound signal of the microphone array, where S(ω) is the spectrum of the sound signal.
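The formulas themselves are not reproduced in the text above; the standard short-time definitions consistent with the symbols already introduced (a reconstruction, not a verbatim copy of the patent's formulas) are:

```latex
% standard short-time feature definitions, assuming the framing above
E_i \;=\; \sum_{n=1}^{N_1} x_i^{2}(n)
\qquad
Z_i \;=\; \frac{1}{2}\sum_{n=2}^{N_1}\Bigl|\,\mathrm{sgn}\bigl[x_i(n)\bigr]-\mathrm{sgn}\bigl[x_i(n-1)\bigr]\Bigr|,
\quad
\mathrm{sgn}[x]=\begin{cases}1, & x\ge 0\\[2pt] -1, & x<0\end{cases}
\qquad
P_j \;=\; \sum_{\omega \in \text{sub-band } j}\bigl|S(\omega)\bigr|^{2}
```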
The sound orientation algorithm of the sound detection robot in this embodiment is as follows:
the sound orientation algorithm of the sound detection robot adopts a generalized cross-correlation time-delay-estimation algorithm. After a target sound source is detected, the generalized cross-correlation time-delay-estimation algorithm is applied to the direct-wave signals of the four microphones to obtain the differences in arrival time among the four microphone signals, and the horizontal azimuth angle of the sound signal relative to the array is obtained from the geometric relations of the quaternary array.
Let s_i(n) and s_j(n) be the i-th and j-th direct-wave sound signals obtained after detection by the system.
(1) The generalized cross-correlation delay estimate R'[n] of s_i(n) and s_j(n) is the inverse Fourier transform of the weighted cross-power spectrum of s_i(n) and s_j(n):

R'[n] = IFFT{ W_n(ω) S_i(ω) S_j*(ω) }

where S_i(ω) and S_j(ω) are the spectra of s_i(n) and s_j(n), respectively, calculated as in formula (5), S_j*(ω) denotes the complex conjugate of S_j(ω), and W_n(ω) is the frequency-domain weighting function.
(2) Direction solution
The maximum of R'[n] is searched for, and the position of the maximum gives the time difference τ between the arrivals of the sound at the two microphones; combining τ with the spacing of the microphone pair, the direction of the sound source is calculated from the geometric relation.
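A sketch of generalized cross-correlation delay estimation for one microphone pair, using PHAT as one common choice of the weighting function W_n(ω); the far-field bearing formula and the numeric values are assumptions:

```python
import numpy as np

def gcc_delay(si: np.ndarray, sj: np.ndarray, fs: float, max_tau: float) -> float:
    """Arrival-time difference between two direct-wave signals via generalized
    cross-correlation, here with PHAT weighting."""
    n = len(si) + len(sj)
    Si = np.fft.rfft(si, n)
    Sj = np.fft.rfft(sj, n)
    cross = Si * np.conj(Sj)                       # cross-power spectrum Si(w) * conj(Sj(w))
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting W_n(w) = 1 / |Si Sj*|
    r = np.fft.irfft(cross, n)                     # generalized cross-correlation R'[n]
    max_shift = max(1, int(max_tau * fs))
    r = np.concatenate((r[-max_shift:], r[:max_shift + 1]))
    return float(np.argmax(np.abs(r)) - max_shift) / fs

def pair_azimuth_deg(tau: float, d: float, c: float = 343.0) -> float:
    """Far-field bearing relative to the microphone-pair axis, in degrees."""
    return float(np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0))))

# usage (values illustrative): tau = gcc_delay(s1, s2, fs=20000, max_tau=0.2 / 343.0)
#                              angle = pair_azimuth_deg(tau, d=0.2)
```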
The direction-finding cross algorithm used for system sound source localization in this embodiment is as follows:
the system establishes a two-dimensional xoy plane coordinate system by taking the position o of the captain robot as the origin of the coordinate system, the geographical north direction as the positive direction of the x axis and the geographical east direction as the positive direction of the y axis.
Based on the cross direction-finding algorithm, the captain robot fuses the sound source orientation angle φ_i measured by each of the M sound detection robots at the same instant with that robot's current heading angle to obtain the azimuth angle θ_i of the sound source in the geodetic coordinate system; the case distinction in the calculation of θ_i keeps the angle within the 0°–360° range of the geodetic coordinate system. The longitude and latitude of all robots in the formation, obtained through GPS, are converted into position coordinates p_i(x_i, y_i) in the geodetic coordinate system relative to the position of the captain robot. The subscript i denotes the robot serial number, i = 1, 2 … M; the principle is shown in Fig. 6. Each measurement defines a bearing line

x sinθ_i − y cosθ_i = g_i

and the target position is obtained with the cross direction-finding positioning algorithm by solving the stacked bearing-line equations A p = C in the least-squares sense, where the i-th row of A is [sinθ_i  −cosθ_i],

C = [g1 g2 ... gN]^T   (11)

g_i = x_i sinθ_i − y_i cosθ_i   (12)
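A sketch of the cross direction-finding / least-squares solution using the bearing-line quantities g_i and C defined above (the numpy-based implementation and the example numbers are illustrative, not the patent's code):

```python
import numpy as np

def cross_fix(positions, azimuths_deg):
    """Least-squares intersection of the bearing lines x*sin(theta_i) - y*cos(theta_i) = g_i,
    with g_i = x_i*sin(theta_i) - y_i*cos(theta_i) (formula (12)), stacked as A p = C."""
    theta = np.radians(np.asarray(azimuths_deg, dtype=float))
    xy = np.asarray(positions, dtype=float)                  # (x_i, y_i): x = north, y = east
    A = np.column_stack((np.sin(theta), -np.cos(theta)))
    C = xy[:, 0] * np.sin(theta) - xy[:, 1] * np.cos(theta)  # vector of g_i
    p, *_ = np.linalg.lstsq(A, C, rcond=None)
    return p                                                 # estimated source position (x, y)

# example: three robots on a 30 m triangle observing a source near (10, 5);
# azimuths are measured clockwise from north, i.e. atan2(east offset, north offset)
pos = [(0.0, 0.0), (30.0, 0.0), (15.0, 26.0)]
az = [np.degrees(np.arctan2(5.0 - y, 10.0 - x)) for x, y in pos]
print(cross_fix(pos, az))   # ~ [10. 5.]
```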
The azimuth of the microphone array of the sound source orientation module of the system's sound detection robots is measured from 0° to 360° in the clockwise direction.
The sound detection robot based on the distributed sound source detection method in this embodiment is described as follows:
the monitoring computer of the robot sound source cooperative detection system in this embodiment is a PC-side program developed with the MATLAB GUI; the captain robot and the slave robots in this embodiment are identical in hardware and function, all providing sound detection, wireless communication, GPS positioning, attitude sensing and obstacle detection.
In this embodiment, the captain robot serves as the communication center node: upward it communicates with the monitoring computer through a wireless link according to the system communication protocol, and downward it communicates wirelessly with the 2 slave robots in a time-shared manner. According to the sound source target detection information of each robot in the formation, the captain robot obtains the sound source target position and track information with the corresponding sound source localization algorithm, and at the same time plans the subsequent tasks of the system and commands the motion of the slave sound detection robots.
The sound detection robot in this embodiment includes a robot platform, a robot control system, and a robot sensor system, as shown in fig. 7.
The robot platform of the sound detection robot in this embodiment comprises a mechanical structure and an electric drive mechanism. In this embodiment the platform is a wheeled four-wheel-drive intelligent trolley robot with a size of 23 cm × 19 cm, offering good stability and maneuverability, as shown in Fig. 7.
The robot control system of the sound detection robot in this embodiment comprises an ARM microcontroller, a power supply system and a motor drive module. In this embodiment an STM32F103ZETB 32-bit high-performance ARM microcontroller is adopted, the power supply system uses a 9 V, 5000 mAh rechargeable lithium battery, and the motor drive module uses the high-current motor driver chip L293D; the control system is stable and powerful.
The robot sensor system of the sound detection robot in the embodiment includes a sound source orientation module, a positioning module, a wireless communication module, an obstacle avoidance sensor module and an attitude sensor module.
The sound source orientation module of the robot sensor system of the sound detection robot in this embodiment includes a microphone array, an analog preprocessing circuit, and a data acquisition and signal processing unit, as shown in Fig. 10. In this embodiment the sound source orientation module is connected to the robot control system through an SPI interface, as shown in Fig. 7.
In this embodiment, the quaternary cross microphone array of the sound source orientation module of the robot sensor system is built from screened electret microphones with good consistency. The four microphones are arranged in a cross on the same plane; when the vehicle body is horizontal, this plane is parallel to the horizontal plane, and the spacing between each pair of microphone elements is 0.2 m, as shown in Fig. 9.
In this embodiment, the analog preprocessing circuit of the sound source orientation module of the robot sensor system adopts an AD8656 operational amplifier to build the amplification and filtering circuit, preprocesses the analog signals output by the microphone array, and passes the output analog signals to the data acquisition and signal processing unit for sampling. In this embodiment, the data acquisition and signal processing unit of the sound source orientation module is built from a multi-channel synchronous-acquisition AD chip and a DSP chip: the AD chip is the four-channel high-speed synchronous-acquisition AD7606 with a maximum sampling frequency of 200 kHz, and the DSP chip is a TMS320F2812.
In this embodiment, a UBLOX NEO-M8N GPS module is selected as the positioning module of the sensor system of the sound detection robot and is connected to the robot control system through a UART interface, as shown in Fig. 8.
In this embodiment, the wireless communication module of the robot sensor system of the sound detection robot is the 2.4 GHz ZigBee wireless module DL-LN32P, with a maximum communication distance of 500 m; it is connected to the robot control system through a UART interface, as shown in Fig. 8.
In this embodiment, the obstacle avoidance sensor module of the robot sensor system of the sound detection robot adopts two pairs of infrared emitter-receiver sensors with a maximum detection distance of 0.5 m, connected to the robot control system through ordinary I/O ports, as shown in Fig. 8.
The attitude sensor of the robot sensor system in this embodiment is the GY953 nine-axis attitude sensor, which can directly output the 3-axis Euler angles and is connected to the robot control system through an SPI interface.
The innovation of this patent lies in the fact that previous robot sound source detection systems were all based on a single robot; because of the size limitation of a single robot, the aperture of the microphone array it carries is small, and although the sound source orientation accuracy of such systems is acceptable, the large ranging error leads to low positioning accuracy and a limited detection range. The present system combines distributed sound source localization technology with multi-robot cooperation technology, has the advantages of high detection accuracy and strong environmental adaptability and fault tolerance, serves as a new means of swarm-robot environment perception, greatly improves the robots' environment perception capability, lays a good foundation for intelligent robot environment perception and formation cooperation, and has broad application prospects.
The invention is not limited to the embodiments described above; any equivalent modification of the technical solution of the invention made by a person skilled in the art after reading the description of the invention falls within the protection scope of the claims of the invention.