
CN112040417A - Audio marking method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112040417A
Authority
CN
China
Prior art keywords
audio
service
abnormal
audio data
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010891412.4A
Other languages
Chinese (zh)
Inventor
沙泓州
赵文思
张佳林
刘章勋
王远征
赖春波
吴钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202010891412.4A priority Critical patent/CN112040417A/en
Publication of CN112040417A publication Critical patent/CN112040417A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/012Measuring and analyzing of parameters relative to traffic conditions based on the source of data from other sources than vehicle or roadside beacons, e.g. mobile networks
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Alarm Systems (AREA)

Abstract

The application provides an audio marking method and device, an electronic device, and a storage medium. With the audio marking method, the electronic device analyzes the trajectory data of a service vehicle to determine whether the service vehicle is abnormal, and, based on the time information of the abnormality, extracts from the audio data that has a synchronous relation with the trajectory data the audio segment to be used as the service abnormal period audio for analysis by an expert group. Because the electronic device can promptly detect service vehicles suspected of being abnormal from their trajectory data and provide the expert group with the audio of the abnormal service period in a targeted manner, the working efficiency of the expert group can be improved, and the personal safety of drivers and/or passengers can be better guaranteed.

Description

Audio marking method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to an audio tagging method, apparatus, electronic device, and storage medium.
Background
With the rapid growth of online ride-hailing, traffic safety accidents are also on the rise. It is therefore very important to detect the occurrence of accidents and provide rescue in time in order to ensure the personal safety of drivers and passengers.
At present, after an accident involving a ride-hailing vehicle becomes known, a specialist listens to the recording of the vehicle's entire trip and determines the time and place of the accident in order to guide rescue work.
However, the above approach is both poor in timeliness and low in efficiency.
Disclosure of Invention
In view of the above, an object of the present application is to provide an audio tagging method, an audio tagging apparatus, an electronic device, and a storage medium that address the inefficiency, in the prior art, of determining whether a service vehicle is abnormal, thereby improving the work efficiency of the relevant personnel.
An object of an embodiment of the present application is to provide an audio tagging method applied to an electronic device, where the method includes:
acquiring track data and audio data during the running of a service vehicle, wherein the acquisition times of the track data and the audio data satisfy a synchronous relation;
acquiring, according to the track data, time information of when the service vehicle meets an abnormal triggering condition;
acquiring, from the audio data and according to the synchronous relation, first audio data within a preset time range of the time information;
and taking the first audio data as service abnormal period audio.
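The four steps above can be sketched as follows. The window length, sample rate, track-point layout, and the particular trigger test are illustrative assumptions, not part of the application:

```python
# Sketch of steps S100-S130: map an anomaly timestamp found in the
# track data to a sample range in the synchronized audio recording.
# PRE/POST window, sample rate, and the "stopped" flag are assumptions.

PRE_SECONDS = 30       # assumed preset time range before the anomaly
POST_SECONDS = 30      # assumed preset time range after the anomaly
SAMPLE_RATE = 16_000   # assumed audio sample rate (Hz)

def find_anomaly_time(track):
    """Step S110: return the timestamp of the first track point that
    meets the (assumed) trigger condition, here a 'stopped' flag."""
    for point in track:
        if point.get("stopped"):
            return point["t"]
    return None

def extract_abnormal_period_audio(track, audio_samples, audio_start_t):
    """Steps S120-S130: because track and audio share a clock, the
    anomaly timestamp maps directly to a sample offset; slice out the
    surrounding segment as the service abnormal period audio."""
    t = find_anomaly_time(track)
    if t is None:
        return None
    center = int((t - audio_start_t) * SAMPLE_RATE)
    lo = max(0, center - PRE_SECONDS * SAMPLE_RATE)
    hi = min(len(audio_samples), center + POST_SECONDS * SAMPLE_RATE)
    return audio_samples[lo:hi]
```

The slice boundaries are clamped to the recording, so an anomaly near the start or end of the trip still yields a valid (shorter) segment.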
Optionally, before the first audio data is used as the audio of the service abnormal period, the method further includes:
analyzing whether the first audio data carries characteristic information representing abnormal service or not;
and if so, taking the first audio data as the audio in the abnormal service period.
Optionally, the step of analyzing whether the first audio data carries characteristic information representing a service anomaly includes:
and processing the first audio data through a pre-trained machine learning model, and judging whether the first audio data carries characteristic information representing the service abnormity.
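The application does not specify the machine learning model. As an illustrative stand-in (the frame length and spike ratio below are assumptions), the following sketch flags a sudden loudness spike, such as a collision sound or a scream, using short-term RMS energy:

```python
# Hypothetical stand-in for the pre-trained model: flag the first
# audio data as carrying abnormal characteristic information when one
# frame is much louder than the trip's typical (median) frame.

import math

def frame_rms(samples, frame_len=400):
    """Short-term RMS energy per non-overlapping frame (assumed feature)."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame_len]) / frame_len)
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def carries_abnormal_features(samples, spike_ratio=5.0):
    """Return True if the loudest frame exceeds the median frame by
    spike_ratio, a crude proxy for the claimed characteristic info."""
    rms = frame_rms(samples)
    if not rms:
        return False
    median = sorted(rms)[len(rms) // 2]
    return median > 0 and max(rms) / median > spike_ratio
```

In a production system this heuristic would be replaced by the pre-trained classifier the claim refers to; the interface (audio in, boolean out) would stay the same.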
Optionally, the method further comprises:
after taking the first audio data as the audio of the service abnormal period, acquiring second audio data, wherein the acquisition time of the second audio data is after that of the first audio data;
the second audio data is taken as reference audio.
Optionally, before the first audio data is taken as the service abnormal period audio, the method further includes:
sending an anomaly statistical instruction to communication equipment in the service vehicle;
acquiring abnormal report information sent by the communication equipment;
judging whether the abnormal report information carries indication information needing rescue or not;
if the abnormal report information carries indication information needing rescue, the first audio data is used as the audio of the abnormal service period;
and if the abnormal report information does not carry indication information needing rescue, ignoring the first audio data.
Optionally, after the taking the first audio data as the service abnormal period audio, the method further includes:
outputting a first display interface with the service abnormal period audio;
and receiving service abnormity confirmation information, and determining that the service corresponding to the audio in the service abnormity time period is in an abnormal state.
Optionally, after determining that the service corresponding to the audio in the service abnormal period is in an abnormal state, the method further includes:
outputting a second display interface having communication information of the service vehicle;
and responding to the communication connection operation based on the second display interface, and establishing communication connection with the communication equipment corresponding to the communication information.
Optionally, after determining that the service corresponding to the audio in the service abnormal period is in an abnormal state, the method further includes:
outputting a third display interface, wherein the third display interface displays identification information of the rescue vehicle within a preset distance from the service vehicle;
and responding to vehicle selection operation based on the third display interface, and sending navigation information of the service vehicle to the selected target rescue vehicle.
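To illustrate how the third display interface might select nearby rescue vehicles, the sketch below ranks candidates by great-circle distance from the service vehicle; the 5 km preset distance and the data layout are assumptions, not taken from the application:

```python
# Hypothetical selection of rescue vehicles within a preset distance
# of the service vehicle, nearest first, using the haversine formula.

import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def rescue_candidates(service_pos, vehicles, max_m=5_000):
    """IDs of vehicles within the (assumed) preset distance, nearest first."""
    lat, lon = service_pos
    ranked = sorted(
        (haversine_m(lat, lon, v["lat"], v["lon"]), v["id"]) for v in vehicles
    )
    return [vid for d, vid in ranked if d <= max_m]
```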
It is another object of the embodiments of the present application to provide an audio tagging apparatus applied to an electronic device, the audio tagging apparatus including:
the data acquisition module is used for acquiring track data and audio data in the running process of a service vehicle, wherein the acquisition time of the track data and the acquisition time of the audio data meet the synchronous relation;
the time acquisition module is used for acquiring time information when the service vehicle meets an abnormal triggering condition according to the track data;
the audio acquisition module is used for acquiring first audio data within a preset time range from the moment information from the audio data according to the synchronous relation;
and the audio marking module is used for taking the first audio data as the audio of the abnormal service period.
Optionally, the audio tagging device further comprises an audio analysis module, and before the first audio data is used as the audio of the service abnormal period:
the audio analysis module is used for analyzing whether the first audio data carries characteristic information representing abnormal service;
and if so, the audio marking module takes the first audio data as the audio in the abnormal service period.
Optionally, the audio analysis module specifically includes:
and processing the first audio data through a pre-trained machine learning model, and judging whether the first audio data carries characteristic information representing the service abnormity.
Optionally, after the first audio data is used as the audio of the service abnormal period:
the audio acquisition module is further configured to acquire second audio data, where acquisition time of the second audio data is located after the first audio data;
the audio tagging module is further configured to use the second audio data as reference audio.
Optionally, before the first audio data is used as the audio of the service abnormal period, the audio tagging device further includes:
the statistical instruction module is used for sending an abnormal statistical instruction to the communication equipment in the service vehicle;
the report acquisition module is used for acquiring abnormal report information sent by the communication equipment;
the rescue judging module is used for judging whether the abnormal report information carries indication information needing rescue or not;
if the abnormal report information carries indication information needing rescue, the first audio data is used as the audio of the abnormal service period;
and if the abnormal report information does not carry indication information needing rescue, ignoring the first audio data.
Optionally, after the step of using the first audio data as the audio of the service abnormal period, the audio tagging device further includes:
the interface display module is used for outputting a first display interface with the service abnormal period audio;
and the interactive response module is used for receiving the service abnormity confirmation information and determining that the service corresponding to the audio in the service abnormity time interval is in an abnormal state.
Optionally, after determining that the service corresponding to the audio in the service abnormal period is in an abnormal state;
the interface display module is also used for outputting a second display interface with the communication information of the service vehicle;
and the interactive response module is also used for responding to the communication connection operation based on the second display interface and establishing the communication connection with the communication equipment corresponding to the communication information.
Optionally, after determining that the service corresponding to the audio in the service abnormal period is in an abnormal state;
the interface display module is further used for outputting a third display interface, wherein the third display interface displays identification information of the rescue vehicle within a preset distance from the service vehicle;
and the interactive response module is also used for responding to the vehicle selection operation based on the third display interface and sending the navigation information of the service vehicle to the selected target rescue vehicle.
It is a further object of embodiments of the present application to provide an electronic device, which includes a processor and a memory, where the memory stores computer-executable instructions, and the computer-executable instructions, when executed by the processor, implement the audio tagging method.
It is yet another object of the embodiments of the present application to provide a storage medium storing a computer program which, when executed by a processor, implements the audio tagging method.
Compared with the prior art, the method has the following beneficial effects:
the embodiment of the application provides an audio marking method, an audio marking device, electronic equipment and a storage medium. By the audio marking method, the electronic equipment analyzes the track data of the service vehicle and judges whether the service vehicle is abnormal or not; and extracting the audio segments in the audio data which has a synchronous relation with the track data based on the abnormal time information to be used as the audio of the service abnormal time period to be analyzed by the expert group. Because the electronic equipment can detect the service vehicles with abnormal suspicion in time through the track data of the vehicles and provide audio for the expert group in the abnormal service time period in a targeted manner, the working efficiency of the expert group is improved, and the personal safety of drivers and/or passengers is further guaranteed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope. For those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 illustrates a scene diagram provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 3 is a flow chart illustrating one of the steps of an audio tagging method provided by an embodiment of the present application;
FIG. 4 is a schematic view of a service vehicle service process provided by an embodiment of the present application;
fig. 5 is a second schematic flowchart illustrating steps of an audio tagging method according to an embodiment of the present application;
fig. 6 is a third schematic flowchart illustrating steps of an audio tagging method according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating an accident statistics reporting interface according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating a fourth step of an audio tagging method according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating an interaction step based on a first display interface according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a display interface provided by an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating an interaction step based on a second display interface according to an embodiment of the present application;
FIG. 12 is a schematic diagram illustrating an interaction step based on a third display interface according to an embodiment of the present application;
fig. 13 is a second schematic diagram of a display interface provided in the embodiment of the present application;
fig. 14 shows a schematic structural diagram of an audio marker device provided in an embodiment of the present application.
Icon: 100-an electronic device; 200-a requesting terminal; 300-a network; 400-a service vehicle; 110-an audio marker device; 120-a memory; 130-a processor; 140-a communication unit; 401-mobile intelligent terminal; 500-a first display interface; 510-audio identification; 520-play button; 530-sort button; 600-a second display interface; 700-a third display interface; 710-service vehicle identification; 720-rescue vehicle identification; 1101-a data acquisition module; 1102-a time acquisition module; 1103-an audio acquisition module; 1104-an audio tagging module; 1105-an audio analysis module; 1106-statistical instruction module; 1107-report acquisition module; 1108-a rescue judgment module; 1109-interface display module; 1110-interactive response module.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The terms "service request" and "order" are used interchangeably herein to refer to a request initiated by a passenger, a service requester, a driver, a service provider, or a supplier, the like, or any combination thereof. Accepting the "service request" or "order" may be a passenger, a service requester, a driver, a service provider, a supplier, or the like, or any combination thereof. The service request may be charged or free.
To enable those skilled in the art to use the present disclosure: for a service provider, an abnormal condition may occur during service provision because of an abnormal surrounding environment or an equipment failure of the service provider. At present, after an abnormality occurs, members of an expert group mainly listen to the entire recording of the service process to determine the time and place of the abnormality. Because the entire recording must be listened to, it is difficult to improve the working efficiency of the expert group members, and valuable rescue time is wasted. Improving the efficiency with which the expert group members determine the time and place of an abnormality is therefore of great significance for safeguarding personal safety.
The following embodiments are described in conjunction with a specific application scenario of online ride-hailing. It will be apparent to those skilled in the art that the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the application. Although the present application is described primarily in the context of this particular ride-hailing scenario, it should be understood that this is only one exemplary embodiment.
As shown in fig. 1, the network appointment scenario includes a requesting terminal 200, an electronic device 100, and a service vehicle 400 as a service provider. The request terminal 200, the electronic device 100, and the service vehicle 400 establish a communication connection via the network 300. The electronic device 100 may be a server of a network car booking platform in a network car booking scenario. The server may be, but is not limited to, a data server, a web server, an FTP (File Transfer Protocol) server, a video server, an audio server, and the like.
There may be a plurality of request terminals 200, and there may be a plurality of vehicles. The requesting terminal 200 may be, but is not limited to, a mobile intelligent terminal, a Personal Computer (PC), a tablet computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), and the like.
The communication device of the service vehicle 400 for receiving the navigation information or other information sent by the electronic device 100 may be a device on board the service vehicle 400, or may be a mobile intelligent terminal of a driver or passenger on which the service vehicle 400 is mounted.
The request terminal 200 sends an order request to the electronic device 100, where the order request carries current location information of the request terminal 200 and location information of a destination that a user of the request terminal 200 intends to go to.
After acquiring the order request, the electronic device 100 selects the service vehicle 400 from a plurality of candidate vehicles within a preset distance range from the request terminal 200 based on the position information of the request terminal 200, and sends the navigation information of the request terminal 200 to the service vehicle 400, so that the service vehicle 400 goes to the location of the request terminal 200 through the navigation information. Further, the service vehicle 400 sends the user of the request terminal 200 to the destination.
After the service vehicle 400 picks up the user of the request terminal 200, an abnormal situation may arise, due to an abnormal surrounding environment or an equipment failure of the service vehicle, either while the service vehicle 400 is traveling to the location of the request terminal 200 via the navigation information or while it is carrying the user to the destination. After an abnormality occurs, timely rescue is therefore needed to safeguard the personal safety of the driver and passengers. This is all the more important when the service vehicle 400 is in a relatively remote area, where finding the abnormality and providing rescue in time is harder.
It should be noted that the Positioning technology for acquiring the position information in the present application may be based on a Global Positioning System (GPS), a Global Navigation Satellite System (GLONASS), a COMPASS Navigation System (COMPASS), a galileo Positioning System, a Quasi-Zenith Satellite System (QZSS), a Wireless Fidelity (WiFi) Positioning technology, a BeiDou Navigation Satellite System (BDS), and the like, or any combination thereof. One or more of the above-described positioning systems may be used interchangeably in this application.
At present, after learning that the service vehicle 400 is abnormal, an expert listens to the recording of the entire trip of the service vehicle 400 and determines the time and place of the abnormality in order to guide rescue work. However, this approach is both poor in timeliness and low in efficiency.
A possible implementation is given below to illustrate the current rescue method. The service vehicle 400 collects sound throughout the trip through its in-vehicle equipment, the driver's mobile intelligent terminal, or a passenger's mobile intelligent terminal, and transmits the collected audio data to the electronic device 100. When the provider of the electronic device 100 (e.g., the ride-hailing platform) learns through other channels (e.g., a police report) that the service vehicle 400 is abnormal, all audio data of the service vehicle 400 for the entire trip are extracted, an expert group listens to the whole recording, identifies the characteristic sounds of the abnormality (e.g., collision sounds and/or screams), and determines the time and position of the abnormality. Professional rescuers then travel to the scene to provide rescue.
In this rescue method, however, the time at which the provider of the electronic device 100 first learns that the service vehicle 400 is abnormal lags significantly behind the time at which the abnormality actually occurred. In addition, the expert group must listen to the entire recording, so working efficiency is low.
In view of this, the present embodiment provides an audio tagging method for improving the processing efficiency when the expert group determines whether the service vehicle 400 is abnormal. In one possible embodiment, the structure of an electronic device for performing the audio tagging method is shown in fig. 2.
As shown in fig. 2, the electronic device 100 includes an audio tagging apparatus 110, a memory 120, a processor 130, and a communication unit 140. The memory 120, the processor 130, and the communication unit 140 are directly or indirectly communicatively coupled to each other to enable data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The audio tagging device 110 includes at least one software function module that can be stored in the memory 120 in the form of software or Firmware (Firmware) or solidified in an Operating System (OS) of the electronic device 100. The processor 130 is used for executing executable modules stored in the memory 120, such as software functional modules and computer programs included in the audio tagging device 110. When the electronic device 100 is running, the processor 130 communicates with the memory 120 via a bus, and the machine-executable instructions in the memory corresponding to the audio tagging device 110 implement the audio tagging method when executed by the processor 130.
The Memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 120 is used for storing a program, and the processor 130 executes the program after receiving the execution instruction. The communication unit 140 is configured to establish communication connections between the electronic device 100 and the request terminal 200 and the service vehicle 400 through the network 300 shown in fig. 1, and to transmit and receive data through the network 300.
The processor 130 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The steps of the audio tagging method executed by the electronic device are described in detail below with reference to the schematic flowchart of the steps of the audio tagging method shown in fig. 3.
Step S100: acquiring track data and audio data during the running of the service vehicle, wherein the acquisition times of the track data and the audio data satisfy a synchronous relation.
As shown in fig. 4, as a possible implementation manner, in the process that the service vehicle travels from location A to location B, track data and audio data of the entire trip are acquired through onboard devices (for example, a GPS positioning device and an audio acquisition device installed in the service vehicle); that is, the track data and the audio data satisfy a synchronization relationship in the time dimension. The electronic device can then intercept segments of the audio data based on the track data. The trajectory data may be GPS data and/or positioning data provided by other navigation systems.
The audio data may include audio data in multiple scenes. For example, the audio data may be audio data inside the service vehicle and/or audio data outside the service vehicle.
Taking the audio data inside the service vehicle as an example, the audio data may carry conversation sounds between passengers, between passengers and the driver, and/or between passengers and other persons in the service vehicle. Of course, the audio data may also carry other audio signals, for example, navigation voice from an in-vehicle navigation device or played music. The embodiments of the present application are not particularly limited in this respect.
Taking the audio data outside the service vehicle as an example, the audio data may carry collision sounds between the service vehicle and other vehicles and/or obstacles. Of course, the audio data outside the service vehicle may also carry other audio information, such as an alarm sound, crying, and/or screaming. The embodiments of the present application are not particularly limited in this respect.
Such sound information can, to a certain extent, reflect whether a service abnormality has occurred, for example, whether the service vehicle has been involved in a traffic accident or has malfunctioned.
Referring to fig. 3, in step S110, time information when the service vehicle meets the abnormal triggering condition is obtained according to the trajectory data.
It should be understood that the trajectory data of the service vehicle carries characteristic information indicating whether the service vehicle is abnormal. For example, based on the change of the trajectory data over time, the electronic device determines whether the service vehicle has stopped, its deceleration distance, and/or its acceleration during deceleration. It can be understood that when an abnormality occurs, the service vehicle stops to deal with it, and this process is accompanied by rapid deceleration, manifested as an excessively large acceleration and/or an excessively short deceleration distance. Therefore, based on the trajectory data of the service vehicle, it is possible to detect whether the service vehicle is abnormal and the time at which the abnormality occurred.
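The trajectory-based detection described here can be sketched as follows. This is a minimal illustration under simplifying assumptions (a one-dimensional track, position samples on a shared trip clock, made-up threshold values); the patent does not specify concrete thresholds or data structures.

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    t: float  # seconds on the shared trip clock (hypothetical representation)
    x: float  # distance travelled along the route, in metres (1-D simplification)

def find_abnormal_time(points, accel_threshold=6.0, distance_threshold=15.0):
    """Return the time of the first rapid-deceleration event, or None.

    A segment is flagged when the deceleration magnitude exceeds
    accel_threshold (m/s^2) AND the distance covered while decelerating
    is shorter than distance_threshold (m). Both thresholds are
    illustrative values, not ones taken from the patent.
    """
    # mid-point speeds between consecutive position samples
    speeds = [((a.t + b.t) / 2, (b.x - a.x) / (b.t - a.t))
              for a, b in zip(points, points[1:])]
    for (t0, v0), (t1, v1) in zip(speeds, speeds[1:]):
        accel = (v1 - v0) / (t1 - t0)
        if accel < -accel_threshold:
            dist = (v0 + v1) / 2 * (t1 - t0)  # distance covered while slowing
            if dist < distance_threshold:
                return t1
    return None
```

For instance, a track that cruises at 20 m/s and then nearly stops within one second is flagged, while a track that coasts down gently is not.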
Step S120, obtaining, from the audio data and according to the synchronization relationship, first audio data within a preset time range from the time information.
Because the track data and the audio data are mutually synchronized in the time dimension, when the track data meet the abnormal triggering condition, the electronic equipment obtains the time information meeting the abnormal triggering condition, and intercepts the audio segment corresponding to the time information in the audio data. Optionally, the electronic device may intercept the first audio data within a preset time range from the time information.
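A minimal sketch of this interception step, assuming the audio is held as raw PCM samples whose capture start time shares a clock with the trajectory data (the synchronization relation). All names, and the 300-second default window, are illustrative, not taken from the patent.

```python
def clip_audio(samples, sample_rate, audio_start_t, event_t, window=300.0):
    """Cut the samples within `window` seconds on either side of `event_t`.

    `samples` is a flat sequence of PCM samples whose first sample was
    captured at `audio_start_t` on the same clock as the trajectory data;
    this shared clock is what makes the index arithmetic below valid.
    """
    lo_t = max(event_t - window, audio_start_t)  # don't run off the start
    hi_t = event_t + window
    lo = int((lo_t - audio_start_t) * sample_rate)
    hi = int((hi_t - audio_start_t) * sample_rate)
    return samples[lo:hi]
```

With `window=300.0` this reproduces the later worked example: an event detected at 17:30 yields the 17:25-17:35 clip.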
Step S180, taking the first audio data as the service abnormal period audio.
Furthermore, the electronic device takes the intercepted first audio data as the service abnormal period audio and provides it to an expert group, and the expert group makes a further judgment.
Based on the above steps, a possible example is provided to illustrate them. On the way from Tianjin to Beijing, the service vehicle stores the collected audio data in a local storage medium in the AMR (Adaptive Multi-Rate) format and uploads the locally stored audio data to the server every 5 minutes. At the same time, the service vehicle stores the collected GPS position data locally and uploads it to the server every 3 minutes. Of course, the time intervals for uploading the audio data and the GPS position data may be adaptively adjusted according to actual requirements.
The server analyzes the location status, acceleration, and/or deceleration distance of the service vehicle based on the GPS position data uploaded by the service vehicle. Suppose the server detects from the track data that the service vehicle rapidly decelerated at 17:30, that the acceleration during deceleration exceeded a set acceleration threshold, and that the deceleration distance was less than a set distance threshold. The server then intercepts the first audio data from 17:25 to 17:35 and provides it to the expert group as service abnormal period audio for further analysis.
By the audio marking method, the electronic equipment analyzes the track data of the service vehicle and judges whether the service vehicle is abnormal or not; and extracting the audio segments in the audio data which has a synchronous relation with the track data based on the abnormal time information to be used as the audio of the service abnormal time period to be analyzed by the expert group. Because the electronic equipment can detect the service vehicles with abnormal suspicion in time through the track data of the vehicles and provide the audio expert group with service abnormal time periods in a targeted manner, the working efficiency of the expert group is improved, and the personal safety of drivers and/or passengers is further guaranteed.
Not all track data that meets the abnormality triggering condition indicates that the service vehicle is actually abnormal; in some cases, the motion track meets the triggering condition merely because of the driving style of the service vehicle's driver. Therefore, once the amount of first audio data intercepted by the electronic device reaches a certain level, a large number of audio clips unrelated to any service abnormality would undoubtedly waste a great deal of the expert group's time.
In view of the above, and in order to avoid wasting the expert group's time on audio clips unrelated to service abnormalities, please refer to fig. 5. Before step S180, the audio tagging method further includes:
step S130, analyzing whether the first audio data carries characteristic information representing the service anomaly.
Research shows that when an abnormal condition occurs, the service vehicle often undergoes a violent collision accompanied by the screaming of passengers and/or the driver. Based on these characteristic sounds of a vehicle abnormality, before taking the first audio data as the service abnormal period audio, the electronic device performs feature recognition on the first audio data and detects whether it carries characteristic information such as collision and/or screaming sounds.
If the first audio data does not carry characteristic information representing a service abnormality, the electronic device executes step S170 to ignore the first audio data.
If the first audio data carries the characteristic information representing the service abnormality, the electronic device executes step S180 to use the first audio data as the audio in the service abnormality period.
The electronic device pre-analyzes the intercepted first audio data and judges whether it carries characteristic information of a service abnormality, such as screaming, collision, and/or crying sounds. In this way, the electronic device automatically screens the first audio data and filters out the service abnormal period audio with the highest suspicion of abnormality.
As one possible implementation manner, the electronic device processes the first audio data through a pre-configured, pre-trained machine learning model to judge whether the first audio data carries characteristic information representing a service abnormality. The pre-trained machine learning model is obtained by training on historical service abnormal period audio.
As another possible implementation manner, the electronic device is configured with the frequency spectrum information of the audio in the historical service abnormal period in advance. Based on the frequency spectrum information of the audio in the historical service abnormal period, the electronic equipment acquires the frequency spectrum information of the first audio data, and similarity calculation is carried out on the frequency spectrum information of the first audio data and the frequency spectrum information of the audio in the historical service abnormal period. When the similarity between the two exceeds a similarity threshold value, the electronic equipment takes the first audio data as the audio in the abnormal service period.
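The spectrum-similarity alternative can be sketched as below, using cosine similarity between L2-normalised magnitude spectra. The FFT front-end, the 0.9 threshold, and the function names are assumptions made for illustration, not details from the patent.

```python
import numpy as np

def spectrum(signal, n=512):
    """Magnitude spectrum, L2-normalised so comparison is scale-invariant."""
    mag = np.abs(np.fft.rfft(signal, n=n))
    return mag / (np.linalg.norm(mag) + 1e-12)

def matches_historical_abnormal(clip, reference_spectra, threshold=0.9):
    """Cosine similarity between the clip's spectrum and each stored
    historical abnormal-period spectrum; the clip is treated as service
    abnormal period audio when any similarity exceeds `threshold`.
    The 0.9 threshold is an illustrative value."""
    s = spectrum(clip)
    return any(float(np.dot(s, r)) > threshold for r in reference_spectra)
```

A real system would compare spectra frame-by-frame rather than over a whole clip; this sketch only shows the similarity calculation itself.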
Further, when the vehicle is abnormal, the abnormal condition can be classified, according to its severity, into one requiring rescue and one not requiring rescue. In other words, some slight abnormal conditions cause neither serious vehicle damage nor casualties, and no professional rescuers need to be dispatched. Moreover, if the expert group had to listen to every service abnormal period audio, handling of the cases that genuinely require rescue would be delayed. The parties involved in the abnormal condition know best whether rescue is actually needed.
In view of this, referring to fig. 6, after step S130, the audio tagging method further includes:
step S140, sending an anomaly statistical instruction to the communication device in the service vehicle.
Step S150, obtaining the exception report information sent by the communication device.
And step S160, judging whether the abnormal report information carries indication information needing rescue.
If the abnormality report information does not carry indication information that rescue is needed, the electronic device executes step S170 to ignore the first audio data.
If the abnormal report information carries the indication information needing rescue, the electronic device executes step S180, and uses the first audio data as the audio of the abnormal service period.
Through the above steps, the electronic device sends an abnormality statistics instruction to the communication device in the service vehicle, so that a driver or a passenger in the service vehicle reports abnormality report information through the communication device. Based on the abnormality report information, the electronic device judges whether the intercepted first audio data should be delivered to the expert group for listening as service abnormal period audio.
Referring to fig. 7, the communication device may be a different device in different scenarios; a possible example is provided below, taking the passenger's mobile intelligent terminal 401 as an example, to illustrate the above steps. Based on the trajectory data of the service vehicle, the electronic device intercepts the first audio data from 17:25 to 17:35. Before submitting this first audio data to the expert group as service abnormal period audio, the electronic device sends an abnormality statistics instruction to the passenger's mobile intelligent terminal 401.
The mobile intelligent terminal 401 presents an interactive interface as shown in fig. 7 on its display screen, which includes the two options "yes" and "no" and is provided with a text edit box. If the passenger clicks "yes", the abnormality report information reported by the mobile intelligent terminal 401 carries indication information that rescue is needed. The passenger may also enter a detailed description of the abnormal situation in the text edit box, for example, the number of injured persons and an account of what happened; the detailed description may be in text or speech form. If the passenger clicks "no", the abnormality report information reported by the mobile intelligent terminal does not carry indication information that rescue is needed.
When detecting that the reported abnormality report information does not carry indication information that rescue is needed, the electronic device ignores the first audio data; otherwise, the first audio data is submitted to the expert group as service abnormal period audio. Of course, the electronic device may also set a response duration threshold: if no abnormality report information from the mobile intelligent terminal is received within the response duration threshold, the first audio data is likewise submitted to the expert group as service abnormal period audio.
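The decision logic of steps S140-S160, including the response-duration fallback, might look like this minimal sketch. The report format and the `needs_rescue` field name are invented for illustration.

```python
def should_escalate(report, timeout_expired):
    """Decide whether the clipped first audio data goes to the expert group.

    `report` is None when no abnormality report has arrived; otherwise it is
    a dict with a boolean 'needs_rescue' field (the field name is an
    assumption, not part of the patent). The clip is escalated when the
    occupants ask for rescue, or when nobody answered within the response
    duration threshold.
    """
    if report is None:
        return timeout_expired  # no answer in time -> escalate to be safe
    return bool(report.get("needs_rescue"))
```

The no-answer-means-escalate default errs on the side of safety, matching the patent's fallback behaviour.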
It should be understood that judging only from the service abnormal period audio has certain limitations and may interfere with the expert group's judgment. For example, the service abnormal period audio carries a screaming sound, but the scream was merely a passenger frightened by an abnormality of another vehicle, and no abnormal condition occurred in the service vehicle itself.
Therefore, in order to enable the expert group to make a more accurate judgment based on the audio clip provided by the electronic device, the electronic device also provides other audio data besides the service abnormal period audio. Referring to fig. 8, after step S180, the audio tagging method further includes:
step S190, after the first audio data is used as the audio of the service abnormal period, acquiring second audio data, wherein the acquisition time of the second audio data is after the first audio data.
Step S200, the second audio data is used as the reference audio.
One possible example is provided below to illustrate the above steps. The electronic device provides the audio clip intercepted from the 16:20-16:30 period of the service vehicle's audio data to the expert group as service abnormal period audio. Meanwhile, the electronic device also provides the expert group with the reference audio from after 16:30. The expert group first listens to the service abnormal period audio and, if it contains collision, screaming, and/or crying sounds, further listens to the reference audio.
If the reference audio contains navigation voice such as "turn right at the next intersection" and/or "you are speeding", or the sound of normal conversation between passengers and/or between passengers and the driver, these characteristics show that although collision, screaming, and/or crying sounds appear in the service abnormal period audio, the service vehicle did not actually experience an abnormal condition and is still running normally.
Therefore, the electronic equipment provides more reference information for the expert group by providing the reference audio, and helps the expert group to make more accurate judgment.
To enable the expert group members to conveniently listen to the service abnormal period audio and to judge and classify it, referring to fig. 9, after step S200, the audio tagging method further includes:
step S210, outputting a first display interface with the service abnormal time interval audio.
For the first display interface, please refer to an example provided in fig. 10. As shown in fig. 10, the first display interface 500 displays a plurality of audio identifiers 510 of the audio in the abnormal service period, a play button 520 for controlling to play the audio in the abnormal service period, and a classification button 530 for the members of the expert group to classify the audio in the abnormal service period. The sorting button 530 specifically includes an "abnormal" button and a "normal" button.
Step S220, receiving the service abnormality confirmation information, and determining that the service corresponding to the audio in the service abnormal period is in an abnormal state.
Based on the first display interface 500, for each service abnormal period audio, the expert group member listens by clicking the play button 520, and then judges the audio by clicking the "abnormal" or "normal" button.
When the expert group member hears abnormal sounds in the service abnormal period audio, such as screaming, collision, and/or crying sounds, the "abnormal" button is clicked; otherwise, the "normal" button is clicked.
Although the expert group can determine most abnormal situations from the service abnormal period audio and/or the reference audio, there are still a few cases in which a determination cannot be made even by combining the two.
In view of this, referring to fig. 11, after step S220, the audio tagging method further includes:
and step S230, outputting a second display interface with the communication information of the service vehicle.
And step S240, responding to the communication connection operation based on the second display interface, and establishing communication connection with the communication equipment corresponding to the communication information.
When the expert group is difficult to make accurate judgment through the service abnormal period audio and/or the reference audio, the electronic equipment provides a second display interface on which the communication information of the communication equipment in the service vehicle is displayed, so that the expert group can establish audio and/or video connection with the communication equipment in the service vehicle through the communication information.
It should be understood that the communication device described above may be an onboard electronic device serving a vehicle. For example, the service vehicle is implemented with a wireless communication device with which the electronic device can establish a communication connection. Of course, the communication device may also be a mobile intelligent terminal serving the driver or passenger in the vehicle.
The above steps are exemplified again with reference to fig. 10, where the user clicks the "abnormal" button in the first display interface 500 for a certain service abnormal period audio. The telephone numbers of the driver, passenger A, and passenger B in the service vehicle corresponding to that audio, together with a "dial" button, are displayed in the second display interface 600 shown in fig. 10. When it is difficult to determine through the service abnormal period audio and/or the reference audio whether the service vehicle is abnormal, the expert group may dial the telephone of the driver, passenger A, and/or passenger B to inquire and confirm whether the service vehicle is abnormal.
After the expert group member confirms that the service vehicle is abnormal, rescue personnel need to be dispatched to the scene. Dispatching the most appropriate rescue vehicle therefore gains more rescue time.
In view of this, in order to win more rescue time, referring to fig. 12, after step S240, the audio tagging method further includes:
and step S250, providing a third display interface, wherein the third display interface displays the identification information of the rescue vehicle within the preset distance from the service vehicle.
Referring to fig. 13, the third display interface 700 displays a map information, a service vehicle identifier 710 and a rescue vehicle identifier 720. The positions of the service vehicle identification 710 and the rescue vehicle identification 720 in the map correspond to the real geographic positions of the service vehicle and the rescue vehicle.
And step S260, responding to the vehicle selection operation based on the third display interface, and sending the navigation information of the service vehicle to the selected target rescue vehicle.
As shown in fig. 13, based on the information in the third display interface, the members of the expert group can select the most suitable target rescue vehicle to go to the rescue according to their own experience. For example, the target rescue vehicle closest to the service vehicle is selected. Of course, the expert group member may also select the target rescue vehicle based on other conditions, and the embodiment of the present application is not particularly limited in this respect.
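As one way the closest-vehicle default suggestion could be computed, here is a haversine-distance sketch; the function names and the fleet representation are assumptions, and the operator remains free to override the suggestion.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius 6371 km

def nearest_rescue_vehicle(service_pos, rescue_vehicles):
    """`rescue_vehicles` maps a vehicle id to its (lat, lon) position.
    Returns the id of the closest vehicle as a default suggestion that the
    expert group member may override with any other selection criterion."""
    return min(rescue_vehicles,
               key=lambda vid: haversine_km(service_pos, rescue_vehicles[vid]))
```

Straight-line distance ignores the road network; a production dispatcher would rank by estimated travel time instead.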
It should be noted that the flowchart of steps S180-S260 is only one possible example provided in the embodiment of the present application, and the execution sequence between steps S180-S260 may be adaptively adjusted according to actual requirements, which is not specifically limited in the embodiment of the present application.
Based on the same inventive concept, an embodiment of the present application further provides an audio tagging apparatus corresponding to the audio tagging method. Since the principle by which the apparatus solves the problem is similar to that of the audio tagging method in the embodiments of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 14, a schematic diagram of the audio tagging apparatus provided in the embodiment of the present application is shown, where the apparatus includes:
the data acquisition module 1101 is configured to acquire track data and audio data in a driving process of the service vehicle, where acquisition time of the track data and acquisition time of the audio data satisfy a synchronization relationship.
In the embodiment of the present application, the data acquisition module 1101 is configured to execute step S100 in fig. 3, and as to the detailed description of the data acquisition module 1101, reference may be made to the detailed description of step S100.
And a time obtaining module 1102, configured to obtain time information when the service vehicle meets the abnormal triggering condition according to the trajectory data.
In this embodiment of the application, the time obtaining module 1102 is configured to execute step S110 in fig. 3, and as to the detailed description of the time obtaining module 1102, reference may be made to the detailed description of step S110.
The audio obtaining module 1103 is configured to obtain, from the audio data, first audio data within a preset time range from the time information according to the synchronization relationship.
In this embodiment of the application, the audio obtaining module 1103 is configured to perform step S120 in fig. 3, and as to the detailed description of the audio obtaining module 1103, reference may be made to the detailed description of step S120.
And an audio marking module 1104, configured to use the first audio data as the service abnormal period audio.
In the embodiment of the present application, the audio tagging module 1104 is configured to perform step S180 in fig. 3, and as to the detailed description of the audio tagging module 1104, reference may be made to the detailed description of step S180.
Optionally, referring to fig. 14 again, the audio tagging apparatus further includes an audio analysis module 1105, before taking the first audio data as the audio of the service abnormal period:
the audio analysis module 1105 is configured to analyze whether the first audio data carries characteristic information that characterizes a service anomaly.
If yes, the audio tagging module 1104 takes the first audio data as the audio of the service abnormal time period.
Optionally, the audio analysis module 1105 is specifically configured to:
process the first audio data through a pre-trained machine learning model, and judge whether the first audio data carries characteristic information representing a service abnormality.
Optionally, after the first audio data is taken as the service abnormal period audio:
the audio acquisition module is further configured to acquire second audio data, where acquisition time of the second audio data is located after the first audio data;
the audio tagging module is further configured to use the second audio data as reference audio.
Optionally, referring to fig. 14 again, before the first audio data is used as the audio of the abnormal service period, the audio tagging device further includes:
a statistical instruction module 1106, configured to send an abnormal statistical instruction to a communication device in the service vehicle;
a report obtaining module 1107, configured to obtain exception report information sent by the communication device;
a rescue judgment module 1108, configured to judge whether the abnormal report information carries indication information that needs to be rescued;
if the abnormal report information carries indication information needing rescue, the first audio data is used as the audio of the abnormal service period;
and if the abnormal report information does not carry indication information needing rescue, the first audio data is ignored.
Optionally, referring to fig. 14 again, after the taking the first audio data as the audio of the abnormal service period, the audio tagging apparatus further includes:
an interface display module 1109, configured to output a first display interface with the service abnormal period audio;
the interactive response module 1110 is configured to receive the service abnormality confirmation information, and determine that the service corresponding to the audio in the service abnormality period is in an abnormal state.
Optionally, after determining that the service corresponding to the audio in the service abnormal period is in an abnormal state;
the interface display module 1109 is further configured to output a second display interface with the communication information of the service vehicle;
the interaction response module 1110 is further configured to respond to a communication connection operation based on the second display interface, and establish a communication connection with a communication device corresponding to the communication information.
Optionally, after determining that the service corresponding to the audio in the service abnormal period is in an abnormal state;
the interface display module 1109 is further configured to output a third display interface, where the third display interface displays identification information of a rescue vehicle within a preset distance from the service vehicle;
the interactive response module 1110 is further configured to send navigation information of the service vehicle to the selected target rescue vehicle in response to a vehicle selection operation based on the third display interface.
Embodiments of the present application also provide a storage medium storing a computer program, which when executed by a processor implements an audio tagging method.
Specifically, the storage medium can be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the audio tagging method can be executed, which solves the problem of low efficiency when expert group members currently judge whether a service vehicle is abnormal, thereby improving the working efficiency of the expert group.
In summary, the embodiments of the present application provide an audio marking method, an audio marking device, an electronic device, and a storage medium. By the audio marking method, the electronic equipment analyzes the track data of the service vehicle and judges whether the service vehicle has an accident or not; and extracting audio segments in the audio data which has a synchronous relation with the track data based on the time information of the accident as the audio of the service abnormal time period to be analyzed by the expert group. Because the electronic equipment can detect the service vehicles with abnormal suspicion in time through the track data of the vehicles and provide the audio expert group with service abnormal time periods in a targeted manner, the working efficiency of the expert group can be improved, and the electronic equipment has important significance for guaranteeing the personal safety of drivers and/or passengers.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. An audio marking method, applied to an electronic device, the method comprising:
acquiring track data and audio data collected while a service vehicle is running, wherein the acquisition time of the track data and the acquisition time of the audio data satisfy a synchronization relationship;
acquiring, according to the track data, time information of the moment when the service vehicle meets an anomaly trigger condition;
acquiring, from the audio data according to the synchronization relationship, first audio data within a preset time range of the time information; and
taking the first audio data as service-anomaly-period audio.
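For readers, the four steps of claim 1 can be sketched in Python. This is an illustration only, not the patented implementation: the trigger condition (a sharp speed drop between consecutive track samples), the 60-second window, and all names are hypothetical assumptions.

```python
from dataclasses import dataclass

SPEED_DROP_KMH = 30.0   # hypothetical anomaly trigger: speed drop between samples
WINDOW_S = 60.0         # hypothetical "preset time range" around the anomaly moment

@dataclass
class TrackPoint:
    t: float      # acquisition time in seconds, on the clock shared with the audio
    speed: float  # vehicle speed in km/h

def anomaly_time(track):
    """Return the first time the trigger condition is met, or None."""
    for prev, cur in zip(track, track[1:]):
        if prev.speed - cur.speed >= SPEED_DROP_KMH:
            return cur.t
    return None

def first_audio_segment(audio_frames, t0, window=WINDOW_S):
    """Select the audio frames whose timestamps fall within the preset range of t0."""
    return [(t, frame) for t, frame in audio_frames if abs(t - t0) <= window]
```

Because the two streams share one clock (the claimed synchronization relationship), the anomaly moment found in the track data indexes directly into the audio stream without any alignment step.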
2. The audio marking method of claim 1, wherein before the taking of the first audio data as the service-anomaly-period audio, the method further comprises:
analyzing whether the first audio data carries characteristic information indicative of a service anomaly; and
if so, taking the first audio data as the service-anomaly-period audio.
3. The audio marking method of claim 2, wherein the analyzing of whether the first audio data carries characteristic information indicative of a service anomaly comprises:
processing the first audio data through a pre-trained machine learning model to determine whether the first audio data carries characteristic information indicative of the service anomaly.
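A minimal sketch of claim 3's analysis step, with the pre-trained model replaced by a stand-in scoring function (the mean-absolute-amplitude feature and 0.5 threshold are invented for illustration; the patent does not specify the model):

```python
def mean_abs_amplitude(samples):
    """A toy acoustic feature standing in for a learned representation."""
    return sum(abs(s) for s in samples) / max(len(samples), 1)

def model_predict(samples, threshold=0.5):
    """Stand-in for the pre-trained classifier: True = anomaly features present."""
    return mean_abs_amplitude(samples) > threshold

def mark_if_anomalous(samples):
    """Keep the segment only when the 'model' flags it (claim 2's gate)."""
    return "service-anomaly-period audio" if model_predict(samples) else None
```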
4. The audio marking method of claim 1, wherein after the taking of the first audio data as the service-anomaly-period audio, the method further comprises:
acquiring second audio data, wherein the acquisition time of the second audio data is later than that of the first audio data; and
taking the second audio data as reference audio.
5. The audio marking method of claim 1, wherein before the taking of the first audio data as the service-anomaly-period audio, the method further comprises:
sending an anomaly statistics instruction to a communication device in the service vehicle;
acquiring anomaly report information sent by the communication device;
determining whether the anomaly report information carries indication information that rescue is needed;
if the anomaly report information carries indication information that rescue is needed, taking the first audio data as the service-anomaly-period audio; and
if the anomaly report information does not carry indication information that rescue is needed, ignoring the first audio data.
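Claim 5's branch can be written as a single decision function. The dictionary shape of the report and the `needs_rescue` key are assumptions made for illustration:

```python
def classify_segment(anomaly_report, first_audio):
    """Keep the candidate segment only if the report indicates rescue is needed."""
    if anomaly_report.get("needs_rescue"):
        return ("service-anomaly-period audio", first_audio)
    return None  # otherwise the first audio data is ignored
```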
6. The audio marking method of claim 1, wherein after the taking of the first audio data as the service-anomaly-period audio, the method further comprises:
outputting a first display interface presenting the service-anomaly-period audio; and
receiving service anomaly confirmation information and determining that the service corresponding to the service-anomaly-period audio is in an anomalous state.
7. The audio marking method of claim 6, wherein after the determining that the service corresponding to the service-anomaly-period audio is in an anomalous state, the method further comprises:
outputting a second display interface presenting communication information of the service vehicle; and
in response to a communication connection operation on the second display interface, establishing a communication connection with the communication device corresponding to the communication information.
8. The audio marking method of claim 6, wherein after the determining that the service corresponding to the service-anomaly-period audio is in an anomalous state, the method further comprises:
outputting a third display interface, wherein the third display interface displays identification information of rescue vehicles within a preset distance of the service vehicle; and
in response to a vehicle selection operation on the third display interface, sending navigation information of the service vehicle to the selected target rescue vehicle.
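The "rescue vehicles within a preset distance" lookup of claim 8 might be implemented as a great-circle filter. The haversine distance, the 5 km preset, and the coordinate dictionary are illustrative assumptions, not from the patent:

```python
import math

EARTH_R_KM = 6371.0  # mean Earth radius, km

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in degrees."""
    (lat1, lon1), (lat2, lon2) = a, b
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R_KM * math.asin(math.sqrt(h))

def nearby_rescue_vehicles(service_pos, vehicles, preset_km=5.0):
    """Return (vehicle_id, distance_km) pairs within the preset distance, nearest first."""
    pairs = [(vid, haversine_km(service_pos, pos)) for vid, pos in vehicles.items()]
    return sorted([(v, d) for v, d in pairs if d <= preset_km], key=lambda x: x[1])
```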
9. An audio marking device, applied to an electronic device, the audio marking device comprising:
a data acquisition module configured to acquire track data and audio data collected while a service vehicle is running, wherein the acquisition time of the track data and the acquisition time of the audio data satisfy a synchronization relationship;
a time acquisition module configured to acquire, according to the track data, time information of the moment when the service vehicle meets an anomaly trigger condition;
an audio acquisition module configured to acquire, from the audio data according to the synchronization relationship, first audio data within a preset time range of the time information; and
an audio marking module configured to take the first audio data as service-anomaly-period audio.
10. The audio marking device of claim 9, further comprising an audio analysis module, wherein before the first audio data is taken as the service-anomaly-period audio:
the audio analysis module is configured to analyze whether the first audio data carries characteristic information indicative of a service anomaly; and
if so, the audio marking module takes the first audio data as the service-anomaly-period audio.
11. The audio marking device of claim 10, wherein the audio analysis module is specifically configured to:
process the first audio data through a pre-trained machine learning model to determine whether the first audio data carries characteristic information indicative of the service anomaly.
12. The audio marking device of claim 9, wherein, after the first audio data is taken as the service-anomaly-period audio:
the audio acquisition module is further configured to acquire second audio data, wherein the acquisition time of the second audio data is later than that of the first audio data; and
the audio marking module is further configured to take the second audio data as reference audio.
13. The audio marking device of claim 9, wherein before the first audio data is taken as the service-anomaly-period audio, the audio marking device further comprises:
a statistics instruction module configured to send an anomaly statistics instruction to a communication device in the service vehicle;
a report acquisition module configured to acquire anomaly report information sent by the communication device; and
a rescue determination module configured to determine whether the anomaly report information carries indication information that rescue is needed,
wherein, if the anomaly report information carries indication information that rescue is needed, the first audio data is taken as the service-anomaly-period audio; and
if the anomaly report information does not carry indication information that rescue is needed, the first audio data is ignored.
14. The audio marking device of claim 9, wherein after the first audio data is taken as the service-anomaly-period audio, the audio marking device further comprises:
an interface display module configured to output a first display interface presenting the service-anomaly-period audio; and
an interaction response module configured to receive service anomaly confirmation information and determine that the service corresponding to the service-anomaly-period audio is in an anomalous state.
15. The audio marking device of claim 14, wherein after the determining that the service corresponding to the service-anomaly-period audio is in an anomalous state:
the interface display module is further configured to output a second display interface presenting communication information of the service vehicle; and
the interaction response module is further configured to, in response to a communication connection operation on the second display interface, establish a communication connection with the communication device corresponding to the communication information.
16. The audio marking device of claim 14, wherein after the determining that the service corresponding to the service-anomaly-period audio is in an anomalous state:
the interface display module is further configured to output a third display interface, wherein the third display interface displays identification information of rescue vehicles within a preset distance of the service vehicle; and
the interaction response module is further configured to, in response to a vehicle selection operation on the third display interface, send navigation information of the service vehicle to the selected target rescue vehicle.
17. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions that, when executed by the processor, implement the audio marking method of any one of claims 1-8.
18. A storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the audio marking method of any one of claims 1-8.
CN202010891412.4A 2020-08-30 2020-08-30 Audio marking method and device, electronic equipment and storage medium Pending CN112040417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010891412.4A CN112040417A (en) 2020-08-30 2020-08-30 Audio marking method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112040417A true CN112040417A (en) 2020-12-04

Family

ID=73586569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010891412.4A Pending CN112040417A (en) 2020-08-30 2020-08-30 Audio marking method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112040417A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927393A (en) * 2021-02-08 2021-06-08 上海钧正网络科技有限公司 Riding data processing method, server, user equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9818239B2 (en) * 2015-08-20 2017-11-14 Zendrive, Inc. Method for smartphone-based accident detection
CN108416870A (en) * 2018-02-24 2018-08-17 吉利汽车研究院(宁波)有限公司 Driving information recording method and system
CN108810101A (en) * 2018-05-22 2018-11-13 苏州市启献智能科技有限公司 Network vehicle service supervision method and platform for guaranteeing passenger safety
CN109145065A (en) * 2017-06-19 2019-01-04 北京嘀嘀无限科技发展有限公司 Methods of exhibiting and device, the computer readable storage medium of vehicle driving trace
CN109747657A (en) * 2018-12-17 2019-05-14 北京百度网讯科技有限公司 Remote control method and device for autonomous vehicle
CN110460669A (en) * 2019-08-16 2019-11-15 广州亚美信息科技有限公司 A kind of car accident detection alarm method based on car networking intelligent terminal



Similar Documents

Publication Publication Date Title
CN106157614B (en) Method and system for determining responsibility for automobile accident
US12106670B2 (en) Accident reporter
CN202444627U (en) Dual-channel vehicle-mounted information service system
JP6827712B2 (en) Control devices, in-vehicle devices, video distribution methods, and programs
CN115064006A (en) Traffic weakness participant early warning method, device, equipment, storage medium and system
CN102941852A (en) Intelligent vehicle-mounted terminal
CN114140300A (en) Method, device, storage medium and terminal for identifying vehicle stop points based on GPS data
CN111361568A (en) Driver driving behavior evaluation method, device, equipment and storage medium
US11017476B1 (en) Telematics system and method for accident detection and notification
US11553321B2 (en) Apparatus and method for dispatching a tow truck in response to a roadway emergency
CN111739191A (en) Violation early warning method, device, equipment and storage medium
US20130131893A1 (en) Vehicle-use information collection system
CN111863029A (en) An audio-based event detection method and system
US10997841B2 (en) Information processing apparatus, information processing system and information processing method
CN113163364A (en) Vehicle communication method and device, communication controller and vehicle
US20220017032A1 (en) Methods and systems of predicting total loss events
CN112016625A (en) Vehicle abnormality detection method, device, electronic device, and storage medium
CN114093143A (en) Vehicle driving risk perception early warning method and device
CN111615049A (en) Early warning method and device based on vehicle position information, storage medium and terminal
CN111862386A (en) Accident recording method, device, medium and server for vehicle
CN216232014U (en) Vehicle-mounted projection system and vehicle
CN108921418B (en) Driving risk assessment method based on automobile positioning and comprehensive information big data
CN111695956A (en) Intelligent service management method and system for automobile leasing platform and electronic equipment
CN113352989A (en) Intelligent driving safety auxiliary method, product, equipment and medium
CN112040417A (en) Audio marking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201204