
DK201300471A1 - System for dynamically modifying car audio system tuning parameters - Google Patents


Info

Publication number
DK201300471A1
DK201300471A1
Authority
DK
Denmark
Prior art keywords
mood
tuning
input
parameters
sound
Prior art date
Application number
DK201300471A
Other languages
Danish (da)
Inventor
Grzegorz Sikora
Original Assignee
Bang & Olufsen As
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bang & Olufsen As filed Critical Bang & Olufsen As
Priority to DK201300471A priority Critical patent/DK201300471A1/en
Priority to CN201480046559.6A priority patent/CN105637903B/en
Priority to EP17159760.2A priority patent/EP3280162A1/en
Priority to EP14752326.0A priority patent/EP3036919A1/en
Priority to PCT/EP2014/067503 priority patent/WO2015024881A1/en
Priority to US14/912,894 priority patent/US10142758B2/en
Publication of DK201300471A1 publication Critical patent/DK201300471A1/en

Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

This invention relates to control and use of an automotive audio system, to provide a more immersive end-user experience. An aspect of the invention is a system in which, based on some input (such as the mood of car occupants, sensor data, manual user input/feedback, etc.) and a given track/song to be played (also given as input), a Tuning Agent automatically sends/adjusts relevant tuning parameters to/in the amplifier. Thus, the invention includes an adjustment to alter the timbral and spatial characteristics of the sound system in real time, depending on the mood input.

Description

System for dynamically modifying car audio system tuning parameters

Technical field

This invention relates to control and use of an automotive audio system to provide a more immersive end-user experience.

Background of the invention

Sound tuning of an automotive audio system is the process of creating an accurate and enjoyable reproduction of the sound given the acoustic limitations of the car cabin and speaker units. A complex automotive sound system can consist of more than 20 loudspeakers. Parameters adjusted in the sound tuning process are typically:

• Loudspeaker gain
• Relative time delays between loudspeakers
• Filters (order, frequency, quality, gain)
• Amount of additional sound-field processing

Each speaker has to be adjusted to its application and position by proper use of equalizers (filter banks) and by gain leveling. Moreover, setting relative time delays between speakers is crucial to sound stage reproduction. Very often, digital signal processing is applied to extract the reverberant part of a multi-channel audio recording (stereo or more), adjusted to create a more immersive sound field in the car cabin. Sound tuning is a technical as well as an aesthetic process.

In prior art systems, the tuning currently done in cars is "static", meaning the user perceives it as the same no matter what type of song is playing.

In reality, the tuning is actually "adaptive", but only to external factors, so that it overcomes some of the environmental background noise. These factors are road noise (due to speed), rain noise, fan/engine noise, other in-car noises, etc.

In present solutions, the user perceives the same tuning for all songs; this is the problem. However, to make the system more engaging, certain types of songs would benefit from different tuning (e.g. by also dynamically modifying other tuning parameters).

The prior art US2009/0076637 A1 discloses a vehicular music replay system allowing a user to select a music source appropriate to the user's character or biological condition, even from genres unknown to the user. It also allows a music source to be suited to the user, and enhances the user's emotional encounter with an unknown music source. The audio chain is described simply as preamplifier, power amplifier and loudspeaker, and adjusts sound according to seating sensors.

Some existing systems use equalization based on genre (e.g. Pop, Rock) to adjust the overall EQ curve (see e.g. http://www.ehow.com/how 4869657 set-stereo-equalizer.html). However, this does not take the "mood" into account: a "Rock" song could be "melancholic" as well as "happy/energetic". Without this "mood" categorization as an input to the system, the tuning would not be engaging and exciting.

The prior art JP2004361845 describes how sensor data and other environmental conditions can be used in a system for selecting appropriate music for a car/driver. However, it does not suggest or assume that the tuning is dynamic for those conditions.

An aspect of the invention, seen against the above prior art, is to use signal processing to change the timbral and spatial characteristics of a sound system. Thus, the audio chain includes a DSP unit between the preamplifier and power amplifier, which allows the timbral and spatial characteristics of the sound system to be altered in real time, depending on the mood input.

An aspect of the invention is a system in which, based on some input (such as the mood of car occupants, sensor data, manual user input/feedback, etc.) and a given track/song to be played (also given as input), a "Tuning Agent" automatically sends/adjusts relevant tuning parameters to/in the amplifier. These tuning parameters, such as the ones mentioned earlier, dynamically adjust the system in such a way that the music seems more engaging and immersive to the system users, matching the input (such as their mood, etc.).

Thus, a first aspect of the invention is:

An audio system including a Tuning Agent that is configured according to a given input, where the Tuning Agent is characterized by: • reading input data including at least one of the parameters: mood of user(s), sensor data, manual user input/feedback, • reading media data for a given track/song to be played, • adjusting the sound system parameters of the audio system, • the adjustment altering the timbral and spatial characteristics of the sound system in real time, depending on the mood input.
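The first aspect above can be sketched as a minimal Tuning Agent interface. All names here (TuningAgent, Amplifier, the parameter fields) are hypothetical illustrations under assumed simplifications, not the claimed implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TuningParameters:
    """A simplified tuning set; a real tuning would carry per-speaker
    gains, delays and filter banks as described above."""
    gain_db: float = 0.0
    delay_ms: float = 0.0
    eq_preset: str = "flat"

@dataclass
class Amplifier:
    """Stand-in for the amplifier that accepts tuning parameters."""
    current: TuningParameters = field(default_factory=TuningParameters)

    def apply(self, params: TuningParameters) -> None:
        self.current = params

class TuningAgent:
    """Reads mood input plus track metadata and adjusts the amplifier."""

    def __init__(self, amplifier: Amplifier, tuning_table: dict):
        self.amplifier = amplifier
        self.tuning_table = tuning_table  # mood name -> TuningParameters

    def on_track(self, metadata: dict, sensor_mood: Optional[str] = None) -> str:
        # Sensor or user input may override the track's own mood tag.
        mood = sensor_mood or metadata.get("mood", "reference")
        params = self.tuning_table.get(mood, self.tuning_table["reference"])
        self.amplifier.apply(params)
        return mood

# Usage: a track tagged "party" switches the amplifier to the party tuning.
table = {
    "reference": TuningParameters(),
    "party": TuningParameters(gain_db=3.0, eq_preset="bass-heavy"),
}
agent = TuningAgent(Amplifier(), table)
agent.on_track({"title": "Song A", "mood": "party"})
```

The agent itself is deliberately thin: it only resolves a mood and forwards the matching parameter set, leaving the actual signal processing to the amplifier/DSP stage.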

Applied terms

An Audio Amplifier is an electronic amplifier that amplifies low-power audio signals (signals composed primarily of frequencies between 20 and 20,000 Hz, the human range of hearing) to a level suitable for driving loudspeakers, and is the final stage in a typical audio playback chain. The preceding stages in such a chain are low-power audio amplifiers which perform tasks like pre-amplification, equalization, tone control, mixing/effects, or audio sources like record players, CD players, and cassette players.

Audio equalization is the process of adjusting the balance between frequency components within an electronic signal. The most well-known use of equalization is in sound recording and reproduction, but there are many other applications in electronics and telecommunications. The circuit or equipment used to achieve equalization is called an equalizer. These devices strengthen (boost) or weaken (cut) the energy of specific frequency bands.

A sensor (also called a detector) is a converter that measures a physical quantity and converts it into a signal which can be read by an observer or by an (today mostly electronic) instrument.

Brief description of the figures

Figure 1 displays a traditional audio system.

Figure 2 displays an example of a system including a Tuning Agent.

Figure 3 displays a system concept according to the invention.

Figures 4 & 5 display examples related to an automotive application.

Figures 6, 7, 8 & 9 display mood mapping alternatives.

Figure 10 displays mood sound tuning alternatives for a cluster.

Description

In a traditional system, such as the one shown in Figure 1, data is collected (e.g. from sensors, user input, time, etc.) and fed into an algorithm in order to deduce the mood of the user(s). A separate system then selects the appropriate song to be played (on the same device, or via a cloud service), which is then passed to the Player/Renderer for playback. As mentioned earlier, some tuning settings are passed to the amplifier, but those relate to external factors (e.g. the speed of the car).

In the enhanced system, as disclosed, a new component called the Tuning Agent is proposed, as shown in Figure 2. This component is able to dynamically change sound tuning parameters in the amplifier, per song to be played, taking as input the "mood" of the song. This information is given to the agent by one of the existing external systems/algorithms.

There are many ways in which this "mood" categorization of the song to be played can be passed to the Tuning Agent. In the simplest form (as shown in Figure 2), the "mood" of the song is another textual metadata field added to the track media file before it is passed to the Player/Renderer. How it is added there is outside the scope of this disclosure (e.g. it could be by the algorithm/system that chooses the appropriate song based on the extracted mood). For the system according to the invention, however, it is important that this "mood" tag is given along with the media item to the Player/Renderer, which then passes it to the Tuning Agent.

Examples of mood attributes for a specific song are, but are not limited to:

Summery, Brash, Celebratory, Cheerful, Earthy, Exuberant, Joyous, Organic, Passionate, Rousing, Sensual, Spiritual.

As illustrated in Figure 3, the Amplifier has a dynamic tuning table available that can match different moods to different tunings. The Tuning Agent simply instructs the amplifier which tuning to select, based on the metadata it has extracted. Switching between tuning sets could be instantaneous or gradual (over a certain period of time).
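Gradual switching "over a certain period of time" could, for instance, be realized by linearly interpolating each numeric tuning parameter between the outgoing and incoming set. The function below is a hypothetical sketch of that idea, not the patent's implementation; the parameter names are invented:

```python
def crossfade(old: dict, new: dict, t: float) -> dict:
    """Blend two tuning parameter sets; t runs from 0.0 (old) to 1.0 (new)."""
    t = max(0.0, min(1.0, t))  # clamp t to the valid range
    return {k: (1.0 - t) * old[k] + t * new[k] for k in old}

reference = {"gain_db": 0.0, "bass_boost_db": 0.0, "delay_ms": 5.0}
party     = {"gain_db": 3.0, "bass_boost_db": 6.0, "delay_ms": 2.0}

# Halfway through a gradual switch from Reference to Party:
mid = crossfade(reference, party, 0.5)
# mid == {"gain_db": 1.5, "bass_boost_db": 3.0, "delay_ms": 3.5}
```

Instantaneous switching is then simply the special case t = 1.0, while a ramp over N audio frames steps t from 0 to 1 in increments of 1/N.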

In other implementations, the Tuning Agent could receive the "mood" of the song to be played directly from another external system (i.e. not via the song file metadata), for example by providing an interface/API that accepts this information from the external system (e.g. a cloud service).

An aspect of the Tuning Agent is that it is a logical entity that can be implemented in different components, based on setup and conceptual requirements, for example: • the Tuning Agent could be implemented as a standalone component; • the Tuning Agent could be implemented as part of the Player/Renderer; • the Tuning Agent could be implemented as a smartphone application, which runs on a smartphone and interacts with the rest of the car via the existing standardized car-phone infrastructure communication systems; • the Tuning Agent could be implemented as a physically separate component, for example on the CAN or MOST bus of a car (i.e. being a separate hardware component).

Alternative/optional means to input mood parameters are, for example: a. user input through the UI (head unit, buttons, etc.); b. pre-embedded metadata of the track/song; c. matching a song against an external "online" service to get the song's mood (based on the title of the song, or audio fingerprinting); d. sensor data (cameras, CAN data, MOST data, Navi data, etc.).

If the metadata of a song self-describes its mood, the user input or sensor data can override it.
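The override rule above amounts to a simple priority order; assuming explicit user input ranks above sensor-derived mood, which ranks above the track's own metadata, a sketch might look like this (the helper and its parameter names are illustrative):

```python
def resolve_mood(track_mood=None, sensor_mood=None, user_mood=None,
                 default="reference"):
    """Pick the effective mood: user input and sensor data
    override the track's self-described mood metadata."""
    for candidate in (user_mood, sensor_mood, track_mood):
        if candidate:
            return candidate
    return default

# The track says "relaxed", but the user explicitly selected "party":
resolve_mood(track_mood="relaxed", user_mood="party")  # -> 'party'
```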

The examples depicted in Figures 4 and 5 illustrate mood-based sound sets related to automotive applications.

Basically, it is interesting to have a more emotional sound experience, and results from previous consumer insight studies call for emotional and easy-to-grasp technical features: • Four sound sets based on context or mood: reference - party - relaxed - focused. • The sound sets could be manually selected by the user, or selection could be governed by external factors (e.g. sport mode, traffic jam, long drives).

The example describes the four mood situations Reference, Relaxed, Focused, and Party:

Reference • Bass clean and properly leveled (not too much). • Optimized for all seats. • Should be the default mode, good for static listening and everyday driving.

Relaxed • Designed for long trips, cruising. • Less treble than Reference, presence under control. • Optimized for all seats. • Significantly more S component on front seats to increase envelopment. • Wide sound stage with a fuzzy but stable phantom center.

Party • Designed for loud music listening. • Bass-heavy, very punchy. • Staging should be decent, but preference is on timbre. • Less use of EQ, no high-Q and deep cuts; let the speakers play by themselves. • Opposite to Reference mode.

Focused • Designed for high-speed, sporty driving. • Bass fast and punchy, with flat treble and increased presence. • M component pronounced on front seats. • Optimized for front seats only. • Opposite to Relaxed mode.

In a preferred embodiment of the invention, mood metadata are received from a service provider and used to enhance the user experience; music services such as Spotify, Deezer, Aupeo, etc. are ubiquitous, and some of them offer playlist creation based on mood.

This enables the feature of changing the sound tuning of the sound system according to the mood input.

Music service providers may include mood information in their metadata according to a standard, as in the ID3v2 standard metadata container used in conjunction with MP3 files (the TMOO frame). A piece of music may not be categorizable by one single "mood parameter", as there are many possible moods that could be assigned to a specific album (a few moods could also be associated with one album), for example:

Santana - Supernatural: - Summery - Celebratory - Earthy - Joyous - Passionate - Sensual - Cheerful - Organic - Spiritual
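Since the TMOO frame carries a free-text mood string like the album example above, a Tuning Agent would likely split such a field into individual tags before matching them against its tuning table. A sketch of that parsing step, assuming simple separator characters (real ID3 reading would use a tag library):

```python
import re

def parse_mood_field(tmoo_text: str) -> list:
    """Split a free-text mood field (e.g. from an ID3v2 TMOO frame)
    into individual lowercase mood tags."""
    tags = re.split(r"[;,/\-]+", tmoo_text)
    return [t.strip().lower() for t in tags if t.strip()]

parse_mood_field("Summery - Celebratory - Joyous")
# ['summery', 'celebratory', 'joyous']
```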

An aspect of the invention is Mood Sound Tuning based on mood metadata: • Mood metadata is an input to a mood sound tuning which would exist on top of an actual standard sound tuning. • A solution for executing mood sound tuning would be Advanced Tone Control (ATC), where mood tuning is represented by 3 variables (x, y, z - or arousal, valence and distance). • A constraint table is applied as a functional method for matching mood input with mood sound tuning.

Another aspect of the invention is Mood Sound Tuning based on specific mood parameters: • There are around 200 moods that could be assigned to a track or album. • Refer to www.aymyslc.com/ for some mood tag examples. • The mood sound tuning would map to the individual mood parameters. • A constraint table is applied as a functional method for matching mood input with mood sound tuning.

Yet another aspect of the invention is Mood Sound Tuning based on mood clustering: • Cluster analysis, or clustering, is a method of grouping a set of objects/labels such that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). • Some research papers suggest five clusters when analyzing a large set of moods. • Clustering would depend on our criteria and would be done once, or offline. • A mood tag from internet radio or a song would arrive as input and then be assigned to a certain cluster, with its degree of membership to other clusters. • The mood sound tuning would be an ATC setting representing the center of each cluster, tuned by a sound designer. • As the ATC space represents a volume of acceptable/tuned settings, the ATC setting can be interpolated if a specific mood belongs to two or more clusters with a certain degree of membership. • In effect there would be one ATC setting for each cluster. • A constraint table is applied as a functional method for matching mood input with mood sound tuning.
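The interpolation step above can be sketched as a membership-weighted average of the cluster-center ATC settings. The (x, y, z) values and cluster names below are invented for illustration; actual settings would come from the sound designer:

```python
def interpolate_atc(memberships: dict, centers: dict) -> tuple:
    """Weighted average of ATC cluster centers (x, y, z ~ arousal,
    valence, distance), weighted by degree of membership."""
    total = sum(memberships.values())
    x = sum(w * centers[c][0] for c, w in memberships.items()) / total
    y = sum(w * centers[c][1] for c, w in memberships.items()) / total
    z = sum(w * centers[c][2] for c, w in memberships.items()) / total
    return (x, y, z)

# Hypothetical sound-designer settings for two cluster centers:
centers = {"party": (0.9, 0.8, 0.2), "relaxed": (0.1, 0.6, 0.8)}

# A mood belonging 75% to "party" and 25% to "relaxed":
interpolate_atc({"party": 0.75, "relaxed": 0.25}, centers)
# approximately (0.7, 0.75, 0.35)
```

A mood that belongs entirely to one cluster reduces to that cluster's own ATC setting, which matches the "one ATC setting per cluster" case in the list above.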

As an alternative to mood clustering, Russell's Circumplex Model of Affect (RCMA) (1980), used in psychology, may be applied. This is useful for mapping different moods in space and as a graphical representation on the user interface.
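In the circumplex model, each mood sits at a (valence, arousal) coordinate, so the named mood nearest to an arbitrary point in that plane can be found by Euclidean distance. The coordinates below are illustrative placements, not values from the model itself:

```python
import math

# Illustrative (valence, arousal) placements in the circumplex plane:
MOODS = {
    "joyous":      ( 0.8,  0.6),
    "relaxed":     ( 0.6, -0.6),
    "melancholic": (-0.6, -0.5),
    "tense":       (-0.7,  0.6),
}

def nearest_mood(valence: float, arousal: float) -> str:
    """Return the named mood closest to the given circumplex point."""
    return min(MOODS, key=lambda m: math.dist((valence, arousal), MOODS[m]))

nearest_mood(0.7, 0.5)    # -> 'joyous'
nearest_mood(-0.5, -0.4)  # -> 'melancholic'
```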

In a preferred embodiment, the mood user interface is supported by a wheel-like representation, which combines the RCMA with a navigation principle to effectively address and select specific moods.

As displayed in Figure 3, a preferred embodiment applies standard metadata combined with mood-related parameters to address specific tuning parameters.

As displayed in Figure 4, in a preferred embodiment different staging concepts may be configured, each having individual tuning parameters, and each reflecting different moods, e.g. but not limited to: Reference, Relaxed, Focused, and Party.

As displayed in Figure 5, in a preferred embodiment different tone/tuning spaces may be configured, each having individual tuning parameters, and each reflecting different moods, e.g. but not limited to: Reference, Relaxed, Focused, and Party. The space is represented by the terms wide and narrow, and the tone by the terms treble and bass. A qualified attribute may be recorded as the user's perception of the "situation", like: this is "annoying" or this is "boring", related to the position of the music event in the space and tone domain.

In Figures 6, 7 and 8, three alternative representations of mood parameters are displayed: a) individual mood parameters, b) mood parameters grouped into a cluster, in which the parameters possess a certain similarity, and c) a circumplex model of affect, represented in a coordinate system.
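Representation (c) can be held as mood labels placed in a valence/arousal coordinate system, with an observed affect point resolved to its nearest label. The coordinates below are rough assumptions for the sketch, not values from the patent:

```python
import math

# Illustrative valence/arousal positions for a few mood labels
# (the coordinates are assumptions, not taken from the patent).
CIRCUMPLEX = {
    "party":   (0.8, 0.9),   # (valence, arousal)
    "bright":  (0.7, 0.4),
    "relaxed": (0.6, -0.6),
    "warm":    (0.4, -0.2),
}

def nearest_mood(valence: float, arousal: float) -> str:
    """Return the mood label closest to a measured (valence, arousal) point."""
    return min(CIRCUMPLEX, key=lambda m: math.dist((valence, arousal), CIRCUMPLEX[m]))
```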

As shown in Figure 9, a user interface may apply a color scheme and graphical outline for ease of access to the mood-related music choice. Different colors represent different mood parameters. As seen in the figure, the user may relate to his/her own music, some familiar-feeling music or other diverse music. The music currently playing may be selected via access to the color codes, and as illustrated, the "current mood" and the "previous mood" in play might be displayed at the same time.

As shown in Figure 10, mood sound tuning for a cluster includes settings for the members of the cluster. As illustrated, the cluster is positioned at alternative positions in the coordinate system (party, warm, bright, relaxed), according to the "complete mood" of the cluster.

The table complex in an advanced implementation of the invention constitutes a combinatorial state space relating all variables in the data model. This gives a relatively large number of combinations.

For example, an audio system may have up to 21 channels, and each channel could have 16 filters, gain values, and dynamic compressors with about 10 sub-parameters.
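The size of the resulting parameter space can be estimated directly from these example figures. Even the coarsest view, counting only which of the 16 filter slots are active per channel, yields an astronomically large state space:

```python
# Back-of-the-envelope size of the tuning state space, using the
# example figures from the text (21 channels, 16 filters per channel).
channels = 21
filters_per_channel = 16

# Counting only each filter slot as on/off (ignoring gain values,
# compressor sub-parameters, etc.) already gives:
filter_states_per_channel = 2 ** filters_per_channel        # 65,536
total_filter_states = filter_states_per_channel ** channels  # 2**336

# 2**336 has 102 decimal digits, i.e. on the order of 10**101.
print(f"states per channel: {filter_states_per_channel}")
print(f"total on/off states: about 1e{len(str(total_filter_states)) - 1}")
```

This is why the description below turns to a constraint solver over a table of legal combinations rather than any exhaustive enumeration.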

This type of problem domain is effectively implemented in a constraint solver, as disclosed by the applicant in patent US 8,224,854, including a combinatorial state space as the data representation method.

The combinatorial state space defines all combinations of relevant parameters in a 3-dimensional structure that represents all legal solutions defined by the referred parameters, these being sensor readings, mood parameters, filter settings, user identification and the like.

This representation is very efficient and may be processed in real time, which would not be possible in an ordinary relational database model.

According to the present embodiment, the constraint solver domain table is organized as relations among variables in the general mathematical notation of 'Disjunctive Form':

(AttribVariable 1.1 and AttribVariable 1.2 and AttribVariable 1.3 and ... and AttribVariable 1.n)
or (AttribVariable 2.1 and AttribVariable 2.2 and AttribVariable 2.3 and ... and AttribVariable 2.n)
or ...
or (AttribVariable m.1 and AttribVariable m.2 and AttribVariable m.3 and ... and AttribVariable m.n)

For example, AttribVariable 1.1 may be "mood type"; AttribVariable 1.2 "seat sensing"; AttribVariable 1.3 "a music title"; and AttribVariable 1.n may be a reference to another table or an action code that addresses the deduced action to take place when the specific legal combination of variables is fulfilled, e.g. a set of filter settings to be applied.
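A minimal sketch of such a domain table could hold each conjunction as one row, with the whole table read as their disjunction; resolving the input variables against the rows yields the deduced action. All attribute values and action codes below are invented placeholders:

```python
# Each tuple is one conjunction of attribute variables; the table as a
# whole is their disjunction ('Disjunctive Form'). The last field is the
# deduced action code. All values here are illustrative assumptions.
DOMAIN_TABLE = [
    # (mood_type, seat_sensing, music_title, action)
    ("party",   "front+rear", "track_a", "filterset_01"),
    ("relaxed", "front",      "track_a", "filterset_02"),
    ("focused", "front",      "track_b", "filterset_03"),
]

def solve(mood_type: str, seat_sensing: str, music_title: str):
    """Return the action code of the first legal combination, if any.

    A combination of inputs is 'legal' only if it matches a row of the
    table; anything else deduces no action.
    """
    for row in DOMAIN_TABLE:
        if row[:3] == (mood_type, seat_sensing, music_title):
            return row[3]
    return None  # no legal combination matched
```

A production constraint solver would of course use a compressed state-space representation rather than a linear scan; the sketch only shows the table-as-disjunction reading.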

The principles behind the method of adjusting the acoustical performance of a loudspeaker system are disclosed by the applicant in patent US 7,991,175, "Method and a System to Adjust the Acoustical Performance of a Loudspeaker".

Different sets of amplifier adjustment parameters, such as loudspeaker gain, relative time delays between loudspeakers, the amount of additional sound-field processing, and correction filters (order, frequency, quality, gain), can be addressed and applied as a function of the identified mood.
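As a sketch, a Tuning Agent could hold one such parameter set per mood and push it to the amplifier whenever the identified mood changes. The structure mirrors the parameter list above; all numeric values and mood names are invented placeholders:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TuningParameters:
    gains_db: List[float]       # per-loudspeaker gain
    delays_ms: List[float]      # relative time delays between loudspeakers
    soundfield_amount: float    # 0.0 .. 1.0, additional sound-field processing
    filters: List[dict] = field(default_factory=list)  # order/frequency/quality/gain

# Illustrative mood -> tuning map; every value is a placeholder, not a
# setting from the patent.
MOOD_TUNINGS = {
    "relaxed": TuningParameters(
        gains_db=[0.0, -2.0], delays_ms=[0.0, 0.3], soundfield_amount=0.7,
        filters=[{"order": 2, "frequency": 120, "quality": 0.7, "gain": 2.0}]),
    "party": TuningParameters(
        gains_db=[3.0, 3.0], delays_ms=[0.0, 0.0], soundfield_amount=0.2,
        filters=[{"order": 2, "frequency": 80, "quality": 1.0, "gain": 6.0}]),
}

def apply_mood(mood: str, send_to_amplifier) -> None:
    """Look up the identified mood and push its parameter set to the amplifier."""
    send_to_amplifier(MOOD_TUNINGS[mood])
```

In the real system `send_to_amplifier` would be the amplifier's bus interface; here it is just an injected callable so the mapping can be exercised in isolation.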

In combination with a system that provides sensory input / mood data, the invention would add significant value to existing audio sound systems. The sound system would not only adapt to the external driving conditions (as is done today) but would also take into account the occupants' mood. Preexisting solutions adapted sound tuning only to physical values; the proposed system will also adjust for psychological factors. This makes the system more attractive to car manufacturers as well as end customers.

Claims (7)

1. An audio system including a Tuning Agent that is configured according to a given input, where the Tuning Agent is characterized by being arranged to: a. read input data including at least one or more of the parameters: mood of user(s), sensor data, manual user input/feedback, b. read media data for a given track/song to be played, c. adjust the sound system parameters of the audio system, d. the adjustment altering timbral and spatial characteristics of the sound system in real time and depending on the mood input.
2. A Tuning Agent according to claim 1, in which mood input is derived from metadata received from a service provider.
3. A Tuning Agent according to claim 2, in which the sound system parameters to be configured/reconfigured are one or more of: i. Loudspeaker gain, ii. Relative time delays between loudspeakers, iii. Filters (order, frequency, quality, gain), iv. Amount of additional sound-field processing.
4. A Tuning Agent according to claim 3, in which the audio amplifier includes a dynamic tuning table that relates different moods to different tunings, the table being organized as a finite state domain structure.
5. A Tuning Agent according to claim 4, in which mood input may be of different forms: a. individual mood parameters, b. clusters including two or more mood parameters, c. mood spatial mapping, d. custom graphical mapping.
6. A Tuning Agent according to claim 5, in which the custom graphical mapping enables access to multiple control modes of operation, such as current mood and previous mood.
7. A Tuning Agent according to any of the preceding claims, in which the Tuning Agent is a logical entity that may be implemented in different components, based on setup and conceptual requirements: a. the Tuning Agent is a standalone component, b. the Tuning Agent is part of the "Player / Renderer", c. the Tuning Agent is a smartphone application, which runs on a smartphone and interacts with the rest of the car via the existing standardized car-phone infrastructure communication systems, d. the Tuning Agent is a physically different component on the CAN or MOST bus of the car.
DK201300471A 2013-08-20 2013-08-20 System for dynamically modifying car audio system tuning parameters DK201300471A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
DK201300471A DK201300471A1 (en) 2013-08-20 2013-08-20 System for dynamically modifying car audio system tuning parameters
CN201480046559.6A CN105637903B (en) 2013-08-20 2014-08-15 System and method for generating sound
EP17159760.2A EP3280162A1 (en) 2013-08-20 2014-08-15 A system for and a method of generating sound
EP14752326.0A EP3036919A1 (en) 2013-08-20 2014-08-15 A system for and a method of generating sound
PCT/EP2014/067503 WO2015024881A1 (en) 2013-08-20 2014-08-15 A system for and a method of generating sound
US14/912,894 US10142758B2 (en) 2013-08-20 2014-08-15 System for and a method of generating sound

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DK201300471A DK201300471A1 (en) 2013-08-20 2013-08-20 System for dynamically modifying car audio system tuning parameters
DK201300471 2013-08-20

Publications (1)

Publication Number Publication Date
DK201300471A1 true DK201300471A1 (en) 2015-03-02

Family

ID=52577555

Family Applications (1)

Application Number Title Priority Date Filing Date
DK201300471A DK201300471A1 (en) 2013-08-20 2013-08-20 System for dynamically modifying car audio system tuning parameters

Country Status (1)

Country Link
DK (1) DK201300471A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190351912A1 (en) * 2018-05-18 2019-11-21 Hyundai Motor Company System for determining driver's emotion in vehicle and control method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080002839A1 (en) * 2006-06-28 2008-01-03 Microsoft Corporation Smart equalizer
US20080160943A1 (en) * 2006-12-27 2008-07-03 Samsung Electronics Co., Ltd. Method and apparatus to post-process an audio signal
WO2009023289A1 (en) * 2007-08-14 2009-02-19 Sony Ericsson Mobile Communications Ab Method of using music metadata to save music listening preferences
US20090310799A1 (en) * 2008-06-13 2009-12-17 Shiro Suzuki Information processing apparatus and method, and program
US20130120114A1 (en) * 2011-11-16 2013-05-16 Pixart Imaging Inc. Biofeedback control system and method for human-machine interface



Similar Documents

Publication Publication Date Title
US20240223979A1 (en) Sound normalization and frequency remapping using haptic feedback
JP5898305B2 (en) Sound reproduction device including auditory scenario simulation
US8891794B1 (en) Methods and devices for creating and modifying sound profiles for audio reproduction devices
US20150193196A1 (en) Intensity-based music analysis, organization, and user interface for audio reproduction devices
KR102887589B1 (en) Blockchain data-based digital media creation
US12170885B2 (en) Systems and methods of adjusting bass levels of multi-channel audio signals
CN101120412A (en) System and method for mixing first audio data and second audio data, a program element and a computer-readable medium
EP3889958B1 (en) Dynamic audio playback equalization using semantic features
Shelvock Audio mastering as musical practice
DK201300471A1 (en) System for dynamically modifying car audio system tuning parameters
JP4392040B2 (en) Acoustic signal processing apparatus, acoustic signal processing method, acoustic signal processing program, and computer-readable recording medium
JP2020537470A (en) How to set parameters for personal application of audio signals
Liang et al. Improvement of Sound Quality in Car Based on the Third-Party Sound Effect
MOORMAN How Does Engineering Bridge into the Traditionally ‘Creative’Realm of Music?
Mowen Can future audio products ever match the soundstage (perception of sound) and emotion conveyed from that of industry-standard monitors and acoustic spaces?

Legal Events

Date Code Title Description
PHB Application deemed withdrawn due to non-payment or other reasons

Effective date: 20160511