US20140128160A1 - Method and system for generating a sound effect in a piece of game software - Google Patents
- Publication number
- US20140128160A1 (U.S. application Ser. No. 13/264,189)
- Authority
- US
- United States
- Prior art keywords
- music
- audio data
- ambient music
- sound effect
- characteristic
- Prior art date
- Legal status
- Abandoned
Classifications
- A63F13/54 — Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
- A63F9/24 — Electric games; Games using electronic circuits not otherwise provided for
- A63F13/22 — Setup operations, e.g. calibration, key configuration or button assignment
- A63F13/44 — Processing input control signals of video game devices involving timing of operations, e.g. performing an action within a time slot
- G10H1/0025 — Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
- G10H1/383 — Chord detection and/or recognition, e.g. for correction, or automatic bass generation
- G10H1/42 — Rhythm comprising tone forming circuits
- A63F13/215 — Input arrangements comprising means for detecting acoustic signals, e.g. using a microphone
- A63F2300/1081 — Input via voice recognition
- A63F2300/6081 — Sound processing generating an output signal, e.g. under timing constraints, for spatialization
- A63F2300/69 — Involving elements of the real world in the game world, e.g. measurement in live races, real video
- G10H2210/026 — Background music for games, e.g. videogames
- G10H2210/031 — Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/071 — Musical analysis for rhythm pattern analysis or rhythm style recognition
- G10H2210/076 — Musical analysis for extraction of timing, tempo; beat detection
- G10H2210/141 — Riff, i.e. improvisation, e.g. repeated motif or phrase, automatically added to a piece, e.g. in real time
- G10H2240/081 — Genre classification, i.e. descriptive metadata for classification or selection of musical pieces according to style
- G10H2240/085 — Mood, i.e. generation, detection or selection of a particular emotional content or atmosphere in a musical piece
Definitions
- the present disclosure relates to a method and system for generating a sound effect in a piece of game software, and in particular to synchronizing the sound effects of a video game with background music that has been substituted for the original game music.
- the present disclosure relates to adjusting the sound effects of a video game in such a way that they blend seamlessly with whatever piece of music the user has decided to play in place of the original game music.
- the aim of the disclosure is to preserve satisfactory immersion in the game, even when a user is playing his or her own ambient music, by encouraging the user to keep the sound effects provided.
- the present disclosure discusses a method for generating a sound effect in a piece of game software.
- the method includes emitting, from a sound reproduction device, audio data representing a sound effect in response to a request for emission of a sound effect from the game software.
- the method analyzes audio data representing music in the course of reproduction, referred to as ambient music, in order to determine at least one characteristic of the ambient music.
- the method then defines at least one characteristic of the transmission from the at least one characteristic of the ambient music.
- in order to determine the at least one characteristic of the ambient music, the method includes analyzing the audio data representing the ambient music so as to determine the instants at which the ambient music has a rhythmic beat.
- in order to determine the at least one characteristic of the transmission, the method then defines the instant at which the transmission starts from the instants at which the ambient music has a rhythmic beat.
- the instant at which the transmission starts is defined as an instant that follows the last instant at which the music had a rhythmic beat.
- this instant is offset from the last beat by an integer number multiplied by the average time interval separating the instants at which the music has a rhythmic beat. According to some embodiments, exactly one average time interval is preferred.
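As a sketch, the timing rule above — starting the emission an integer multiple of the average inter-beat interval after the last detected beat — might look as follows; the function name and API are illustrative, not taken from the patent:

```python
def next_emission_instant(beat_times, multiple=1):
    """Predict when to start a sound effect so it lands on a beat.

    beat_times: instants (in seconds) at which the ambient music had a
    rhythmic beat, in increasing order. The emission instant is the last
    beat plus `multiple` times the average inter-beat interval.
    """
    if len(beat_times) < 2:
        raise ValueError("need at least two beat instants")
    # average time interval separating consecutive beat instants
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    return beat_times[-1] + multiple * avg
```

For example, with beats detected at 0.0, 0.5, 1.0 and 1.5 s, the default `multiple=1` predicts an emission instant of 2.0 s.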
- in order to determine the at least one characteristic of the ambient music, the method includes analyzing the audio data representing the ambient music so as to determine a musical genre for the ambient music.
- the method then includes selecting, from among several sets of audio data associated with different musical genres, the audio data associated with the genre of the ambient music; the audio data of the transmission stem from the selected audio data.
- in order to determine the at least one characteristic of the ambient music, the method includes analyzing the audio data representing the ambient music so as to determine a key for the ambient music. The method then determines a desired pitch from the determined key; this desired pitch is the at least one characteristic of the transmission.
- the method includes analyzing the audio data representing the ambient music in order to determine a bass line and a melody line for the ambient music.
- the analyzing step is also performed in order to analyze the audio data representing the ambient music in order to determine a key for the ambient music.
- the method also includes determining the key of the ambient music from the bass line and the melody line that have been determined.
- the method further includes recovering audio data representing a sound effect having a certain pitch and modifying the recovered audio data so that the sound effect they represent has the desired pitch; the audio data of the transmission stem from the audio data modified in this manner.
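A minimal sketch of this pitch modification step, assuming sample-rate-conversion transposition (one of the techniques the disclosure names later). The linear-interpolation resampler and names are illustrative; note that this simple method shortens or lengthens the sample, unlike a duration-preserving pitch-shifter:

```python
def transpose_by_resampling(samples, semitones):
    """Shift the pitch of a sound-effect sample by resampling.

    Reading the sample faster by a factor of 2**(semitones/12) raises
    its pitch by that many semitones (and shortens it accordingly).
    """
    ratio = 2.0 ** (semitones / 12.0)
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighbouring input samples
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

Shifting up one octave (12 semitones) reads the sample twice as fast, so the output is roughly half as long.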
- the method further includes determining parameters of a software synthesizer from, firstly, the at least one characteristic of the ambient music and, secondly, from defined relationships.
- the method includes implementing the software synthesizer with the determined parameters so that it synthesizes sound effect audio data; the audio data of the transmission stem from the audio data synthesized in this manner.
- the present disclosure also provides a computer-readable storage medium for generating a sound effect in a piece of game software.
- the present disclosure also provides a system for generating a sound effect in a piece of game software.
- the system includes a data processing system which includes a sound reproduction device, a storage device on which a computer program has been saved, and a central processing unit for executing the instructions of the computer program.
- FIG. 1 is a block diagram of a data processing system in accordance with an embodiment of the present disclosure
- FIG. 2 is a block diagram illustrating instruction blocks in a piece of game software implemented by the data processing system of FIG. 1 in accordance with an embodiment of the present disclosure
- FIG. 3 illustrates a flow chart for generating a sound effect in accordance with an embodiment of the present disclosure
- FIG. 4 is a block diagram illustrating an internal architecture of a computing device in accordance with an embodiment of the present disclosure.
- the functions/acts noted in the blocks can occur out of the order noted in the operational illustrations.
- two blocks shown in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved.
- the embodiments of methods presented and described as flowcharts in this disclosure are provided by way of example in order to provide a more complete understanding of the technology. The disclosed methods are not limited to the operations and logical flow presented herein. Alternative embodiments are contemplated in which the order of the various operations is altered and in which sub-operations described as being part of a larger operation are performed independently.
- the principles described herein may be embodied in many different forms.
- the described systems and methods allow for synchronizing the sound effects of a video game to background music.
- the described systems and methods adjust the sound effects in such a way that they blend perfectly with whichever piece of music the player has decided to play as a substitution to the original game music.
- end user should be understood to refer to a consumer of data supplied by a data provider.
- user can refer to a person who receives data provided by the data provider over the Internet in a browser session, or can refer to an automated software application which receives the data and stores or processes the data.
- a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form.
- a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals.
- Computer readable storage media refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data.
- Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
- a module is a software, hardware, or firmware (or combinations thereof) system, process or functionality, or component thereof, that performs or facilitates the processes, features, and/or functions described herein (with or without human interaction or augmentation).
- a module can include sub-modules.
- Software components of a module may be stored on a computer readable medium. Modules may be integral to one or more computers (or servers), or be loaded and executed by one or more computers (or servers). One or more modules may be grouped into an engine or an application.
- a background music analyzer, game sound effects analyzer and a sound effect scheduler can be a module that is a software, hardware, or firmware (or combinations thereof) system for automatically synchronizing game sound effects with background music.
- server should be understood to refer to a service point which provides processing, database, and communication facilities.
- server can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and applications software which support the services provided by the server.
- the game software may provide an option to use an ambient music file (for example a file in mp3 format) from the user instead of the ambient music initially provided.
- users simply turn off the ambient music initially provided to replace it with ambient music from a piece of software other than the game software, generally a multimedia player such as the software VLC or the software foobar2000.
- the background music analyzer is a library integrated into a game, responsible for recording the music substituted for the original game music, either through direct access to the audio file (at the game level), through OS-level interception of audio buffers (at the system level), or through direct recording with a microphone (at the room level).
- a recorded signal can be split into overlapping frames, such as 100 ms frames.
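Such framing could be sketched as follows; the function name and parameters are illustrative (e.g. 100 ms frames with 50% overlap at 44.1 kHz would be `frame_len=4410`, `hop_len=2205`):

```python
def split_into_frames(signal, frame_len, hop_len):
    """Split a recorded signal into overlapping analysis frames.

    When hop_len < frame_len, consecutive frames overlap; e.g. with
    hop_len = frame_len // 2 each frame shares half its samples with
    the next one. Trailing samples that do not fill a frame are dropped.
    """
    frames = []
    start = 0
    while start + frame_len <= len(signal):
        frames.append(signal[start:start + frame_len])
        start += hop_len
    return frames
```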
- the following functions can be used to extract features for each frame: (1) a beat detection function, showing sharp peaks at beats; (2) a key detection function, indicating the probability that the music has been, over a past period of time (such as 20 s), in a specific tonality.
- one key detection function is computed for each minor and major tonality; for example, 24 key detection functions are computed, one for each of the 12 minor and 12 major tonalities.
- the beat detection function is computed by periodicity estimation and tracking applied to an onset detection function.
- the key detection function is computed by matching a bass and melody chromagram with note distribution templates computed for each scale.
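A simplified sketch of template matching for key detection. The binary scale-mask templates below are illustrative stand-ins for real note-distribution templates (practical systems use measured key profiles rather than 0/1 masks, which cannot distinguish a key from its relative major or minor):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Hypothetical 12-bin templates marking the scale degrees of a major
# and a natural minor scale, relative to the tonic.
MAJOR = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
MINOR = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0]

def detect_key(chromagram):
    """Return the best-matching of the 24 major/minor keys.

    chromagram: 12 accumulated pitch-class energies (C..B). Each of the
    24 candidates is a rotation of the major or minor template; the key
    whose rotated template has the highest dot product with the
    chromagram wins.
    """
    best, best_score = None, float("-inf")
    for mode_name, template in (("major", MAJOR), ("minor", MINOR)):
        for tonic in range(12):
            # rotate so rotated[pc] == template[(pc - tonic) % 12]
            rotated = template[-tonic:] + template[:-tonic]
            score = sum(c * t for c, t in zip(chromagram, rotated))
            if score > best_score:
                best, best_score = (NOTE_NAMES[tonic], mode_name), score
    return best
```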
- the chromagram is obtained either by binning the frequency spectrum into a number of bins (e.g., 12 bins) mapped to a number of tones (e.g., 12 tones) of the equal temperament scale, or by encoding into a number of pitch classes (e.g., 12 pitch classes) the output of a multi-pitch estimator.
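The binning just described — mapping spectral bins to the 12 pitch classes of the equal-temperament scale — might be sketched like this (names are illustrative):

```python
import math

def pitch_class_of(freq_hz, ref_a4=440.0):
    """Map a frequency to one of the 12 equal-temperament pitch classes.

    0 = C, 1 = C#, ..., 9 = A, 11 = B. Uses the MIDI convention that
    A4 (440 Hz by default) is note number 69, with 12 semitones per
    octave.
    """
    midi = 69 + 12 * math.log2(freq_hz / ref_a4)
    return int(round(midi)) % 12

def chromagram(magnitudes, bin_freqs):
    """Accumulate spectral magnitudes into 12 pitch-class bins."""
    chroma = [0.0] * 12
    for mag, f in zip(magnitudes, bin_freqs):
        if f > 0:  # skip the DC bin, which has no pitch
            chroma[pitch_class_of(f)] += mag
    return chroma
```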
- Additional genre information can be extracted through the use of standard machine learning techniques, such as but not limited to, SVM or Bayesian classifier using mixtures of Gaussian distributions trained on annotated audio files.
- a game sound effects analyzer analyzes each of the sound effect samples used in the game to detect their fundamental frequency, using an algorithm such as YIN. It is either used during the game development process, in which case all the sound effect samples produced for the game can be annotated with their pitch, or embedded in the game, in which case the analysis can be performed every time the game is launched. When the analysis is part of the game asset preparation procedure, different sound effects can also be annotated with a specific music genre, or different sets of sound effects can be created that match different music genres. For example, the destruction of an enemy in a game can be sonified by a synthesizer sound in the “electro” sample set, and by a brass hit in the “soul” sample set.
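A compact sketch in the spirit of the YIN algorithm mentioned above: the squared-difference function with cumulative-mean normalisation and an absolute threshold. Full YIN adds parabolic interpolation and further refinements; parameter names and defaults here are illustrative:

```python
def estimate_f0(samples, sample_rate, fmin=50.0, fmax=1000.0, threshold=0.1):
    """Rough fundamental-frequency estimate, YIN-style.

    Computes the squared-difference function d(tau) over candidate lags,
    normalises it by its running mean, and returns the frequency of the
    first local minimum below `threshold` (falling back to the global
    minimum if none crosses it).
    """
    tau_min = int(sample_rate / fmax)
    tau_max = int(sample_rate / fmin)
    n = len(samples) - tau_max
    if n <= 0:
        raise ValueError("signal too short for the requested fmin")
    # squared-difference function d(tau)
    d = [0.0] * (tau_max + 1)
    for tau in range(1, tau_max + 1):
        d[tau] = sum((samples[i] - samples[i + tau]) ** 2 for i in range(n))
    # cumulative-mean-normalised difference d'(tau)
    cmnd = [1.0] * (tau_max + 1)
    running = 0.0
    for tau in range(1, tau_max + 1):
        running += d[tau]
        cmnd[tau] = d[tau] * tau / running if running > 0 else 1.0
    # first lag below threshold, refined to the following local minimum
    for tau in range(tau_min, tau_max + 1):
        if cmnd[tau] < threshold:
            while tau + 1 <= tau_max and cmnd[tau + 1] < cmnd[tau]:
                tau += 1
            return sample_rate / tau
    best = min(range(tau_min, tau_max + 1), key=lambda t: cmnd[t])
    return sample_rate / best
```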
- a sound effect scheduler can be embedded in the game and may be responsible for the playback of the game sound effects. It can operate in two modes. In a normal operating mode, the samples are played at their original pitch immediately after the action that triggers them has taken place. In a music-synchronous mode, the sound effect scheduler queries the background music analyzer to retrieve the times at which the past number of beats (e.g., 4 beats) were played in the background music, and the most probable tonality of the background music. The positions in time of these past beats can be used to anticipate the time at which the next beat will occur.
- the sound effect is not played instantly; instead, it is delayed so that its playback coincides with the next beat in the music. Additionally, the difference in pitch between the original sound effect sample (as computed by the sound effect analyzer) and the tonality of the music is compensated for, using transposition methods such as sample rate conversion or pitch-shifting. Where the game sound effects bank has been annotated by genre, the genre information returned by the analysis module can be used to further restrict the set of sound effects played back.
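The music-synchronous mode — delaying playback to the predicted next beat and compensating the pitch difference — could be sketched as follows. All names are illustrative, and matching the sample's fundamental to a single tonic pitch of the detected key is an assumption, not a rule stated in the text:

```python
def schedule_effect(now, beat_times, music_key_pitch_hz, sample_pitch_hz):
    """Decide when to play a sound effect and how to transpose it.

    Returns (start_time, rate_ratio): playback is postponed to the
    predicted next beat, and the sample is resampled by rate_ratio so
    that its fundamental matches the target pitch derived from the key.
    """
    if len(beat_times) < 2:
        raise ValueError("need at least two beat instants")
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    next_beat = beat_times[-1] + avg
    while next_beat < now:  # step forward if the analysis is stale
        next_beat += avg
    # sample-rate-conversion ratio that moves the sample's pitch onto
    # the target pitch
    rate_ratio = music_key_pitch_hz / sample_pitch_hz
    return next_beat, rate_ratio
```

For example, a 220 Hz sample triggered at t = 1.6 s over music with beats at 0.0, 0.5, 1.0 and 1.5 s and a 440 Hz target is held until t = 2.0 s and resampled at twice its rate.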
- the data processing system 100 includes a central unit 102 which contains a central processing unit 104 , such as a microprocessor, and a storage device 106 , such as a hard disk.
- the data processing system 100 has a man/machine interface 108 comprising input devices, such as for example a keyboard 110 and a mouse 112 , and output devices, such as for example a display screen 114 and a sound reproduction device 118 , 120 .
- the sound reproduction device can be comprised of a sound card 118 arranged in the central unit 102 and speakers 120 connected to the sound card 118 .
- the data processing system 100 includes a sound capture device 122 , such as a microphone connected to the sound card 118 .
- the sound capture device 122 is designed to capture a musical source 124 which can be external to the data processing system 100 .
- a non-limiting example of an external musical source 124 is a hi-fi system.
- a computing device discussed in the data processing system 100 may be any computing device that may be coupled to a network, including, for example, personal digital assistants, Web-enabled cellular telephones, devices that dial into the network, mobile computers, personal computers, Internet appliances, wireless communication devices, game consoles and the like.
- Computing devices in data processing system 100 include a program for interfacing with the network.
- Such program can be a window or browser, or other similar graphical user interface, for visually displaying the game to the end user (or player) on the display 114 of the computing device.
- servers for providing game software and/or ambient music external to the game software may be of any type, running any software, and the software modules, objects or plug-ins may be written in any suitable programming language.
- FIG. 2 illustrates instruction blocks in a piece of game software implemented by the data processing system 100 of FIG. 1 in accordance with some embodiments of the present disclosure.
- audio data FX A , FX B and FX C are saved in the storage device 106 of the data processing system of FIG. 1 .
- the audio data FX A , FX B or FX C represent a sound effect and are associated with respective musical genres G A , G B and G C .
- a piece of game software 200 allowing a user to play a game is likewise saved in the storage device 106 .
- the game software 200 includes game instructions 202 which are designed to supply game information to a user through the output devices of the man/machine interface 108 , in that the game information evolves on the basis of commands input by a user using the input devices (e.g., 110 , 112 ) of the man/machine interface 108 .
- the game instructions 202 are designed to send a request R for emission of a sound effect when the game is being executed.
- the request R is sent upon every action in the game which is performed by the user using the input devices of the man/machine interface 108 , where said action is associated with a sound effect, as discussed below.
- the game software 200 includes sound effect analysis instructions 204 .
- the sound effect analysis instructions 204 are designed to analyze each saved instance of audio data FX A , FX B and FX C and to determine the pitch P A , P B and P C thereof.
- the pitch corresponds to a fundamental frequency for the audio data, as determined by means of, for example, a YIN algorithm.
- the sound effect analysis instructions 204 are furthermore designed to create associations between the audio data FX A , FX B or FX C and the respective pitch P A , P B or P C thereof. That is, a pitch value P A , P B or P C is determined from the audio samples FX A , FX B or FX C respectively, and this determination is taken into account when assigning a pitch value to the sound effects.
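The pitch analysis performed by the instructions 204 can be illustrated with a minimal sketch. The function below uses a simplified difference-function approach in the spirit of the YIN algorithm mentioned above, not the full published algorithm; the function name, the sample data and the sample rate are illustrative assumptions.

```python
import numpy as np

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=2000.0, threshold=0.1):
    """Estimate the fundamental frequency of a mono audio buffer.

    Simplified sketch in the spirit of YIN: difference function,
    cumulative mean normalization, then the first dip below a
    threshold gives the fundamental period.
    """
    x = np.asarray(samples, dtype=float)
    max_lag = int(sample_rate / fmin)
    min_lag = int(sample_rate / fmax)
    # Difference function d(tau) = sum_t (x[t] - x[t + tau])^2
    d = np.array([np.sum((x[:-tau] - x[tau:]) ** 2) for tau in range(1, max_lag + 1)])
    # Cumulative mean normalized difference (step 3 of YIN)
    cmnd = d * np.arange(1, max_lag + 1) / np.maximum(np.cumsum(d), 1e-12)
    dips = np.where(cmnd[min_lag:] < threshold)[0]
    tau = min_lag + (dips[0] if dips.size else int(np.argmin(cmnd[min_lag:]))) + 1
    while tau < max_lag and cmnd[tau] < cmnd[tau - 1]:  # settle into the dip
        tau += 1
    return sample_rate / tau

# Hypothetical default sound effect: a 441 Hz tone sampled at 44.1 kHz.
sr = 44100
fx_a = np.sin(2 * np.pi * 441.0 * np.arange(int(0.2 * sr)) / sr)
p_a = estimate_pitch(fx_a, sr)
```

An association table such as `{"FX_A": p_a, ...}` would then record each default sound effect against its estimated pitch, mirroring the FX A to P A mapping described above.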
- the game software 200 includes instructions 206 for analyzing a piece of music in the course of reproduction either by the reproduction device 118 , 120 or by the external reproduction device 124 .
- This music is referred to as ambient music.
- the ambient music analysis instructions 206 are designed to recover audio data MUS representing the ambient music.
- the ambient music analysis instructions 206 are designed to directly access the music file indicated by the user in the game software options.
- the game software options can be a dialog box, window, menu or any other graphical user interface element through which the user can configure aspects of the game, such as input controls, sound volume, music selection, etc.
- the ambient music analysis instructions 206 are designed to intercept the audio buffers of an operating system running on the data processing system 100 and executing the game software.
- the ambient music analysis instructions 206 are designed to use the sound capture device 122 to convert the ambient music into the audio data MUS.
- the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine at least one characteristic of the ambient music. More precisely, in an example, three characteristics of the ambient music are determined. Thus, the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine instants, denoted as BEAT in FIG. 2 , at which the ambient music has a rhythmic beat. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a musical genre, denoted GENRE in FIG. 2 , for the ambient music. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a key, denoted KEY in FIG. 2 , for the ambient music.
- a key is defined as the set of a tonic and a mode.
- the tonic is one of the twelve notes in the classical scale (C, C sharp, D, D sharp, E, F, F sharp, G, G sharp, A, A sharp, B), and the mode is chosen from among the harmonic major mode and the harmonic minor mode.
- the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine a bass line and a melody line for the ambient music. The key of the ambient music is then determined from the bass line and the melody line.
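The determination of a key (a tonic plus a mode, per the definition above) can be sketched as template matching against key profiles. The sketch below assumes a 12-bin chroma vector has already been aggregated from the bass and melody lines; the Krumhansl-Kessler profile values are standard in the literature, but the function name and the example chroma vector are illustrative.

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Krumhansl-Kessler key profiles: expected weight of each pitch class,
# relative to the tonic, for the major and minor modes.
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR_PROFILE = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                          2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def estimate_key(chroma):
    """Return the (tonic, mode) pair whose rotated profile correlates
    best with a 12-bin chroma vector of pitch-class energies."""
    best = None
    for mode, profile in (("major", MAJOR_PROFILE), ("minor", MINOR_PROFILE)):
        for tonic in range(12):
            # Rotate the profile so its tonic lands on this candidate.
            score = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if best is None or score > best[0]:
                best = (score, NOTE_NAMES[tonic], mode)
    return best[1], best[2]

# A chroma vector dominated by C, E and G should read as C major.
chroma = np.zeros(12)
chroma[[0, 4, 7]] = [1.0, 0.8, 0.9]
key = estimate_key(chroma)
```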
- the game software 200 has sound effect generation instructions 208 . This coincides with the sound effect scheduler discussed herein.
- the sound effect generation instructions 208 are designed to, in response to the sending of the request R, define at least one characteristic for an audio data transmission, which are denoted FX in FIG. 2 representing a sound effect, to the reproduction device 118 , 120 .
- This at least one transmission characteristic is determined from the at least one ambient music characteristic determined by the ambient music analysis instructions 206 . More precisely, according to some embodiments, and by way of a non-limiting example, the sound effect generation instructions 208 are designed to define three transmission characteristics from, respectively, the three ambient music characteristics: BEAT, GENRE and KEY.
- the sound effect generation instructions 208 are designed to define an instant T 0 at which the transmission starts from the instants BEAT, at which the ambient music has a rhythmic beat.
- the sound effect generation instructions 208 are designed to define this instant T 0 as following the last rhythmic beat instant by a time interval equal to an integer number of times the average time interval separating the rhythmic beat instants. According to some embodiments, the transmission occurs one average time interval after the last rhythmic beat instant.
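The definition of the transmission instant T 0 reduces to simple arithmetic on the detected beat instants, as the following sketch illustrates (the function name and example beat times are hypothetical):

```python
def transmission_instant(beat_times, n=1):
    """Define the instant T0 at which the transmission starts: the
    last detected rhythmic beat plus n times the average inter-beat
    interval (n = 1 in the embodiment described above)."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    return beat_times[-1] + n * avg

# Beats detected 0.5 s apart: the effect is scheduled on the next beat.
t0 = transmission_instant([10.0, 10.5, 11.0, 11.5])
```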
- the sound effect generation instructions 208 are designed to select, from among the default audio data FX A , FX B and FX C , those which are associated with the musical genre GENRE of the ambient music, as provided by the instructions 206 .
- the selected default audio data will subsequently be denoted FX i and the pitch thereof P i .
- the sound effect generation instructions 208 are designed to determine a desired pitch P from the key KEY of the ambient music MUS as provided by the instructions 206 .
- the desired pitch P is the tonic or the fifth of the key KEY.
- the sound effect generation instructions 208 are designed to recover the selected default audio data FX i which, as indicated previously, have a default pitch P i .
- the sound effect generation instructions 208 are designed to modify the recovered default audio data FX i so that the sound effect which they represent has the desired pitch P.
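A naive way of transposing the default audio data FX i to the desired pitch P is sample-rate conversion, one of the transposition methods mentioned later in this disclosure. The sketch below is illustrative only; a production implementation would typically use a time-preserving pitch-shifter instead.

```python
import numpy as np

def shift_pitch(samples, current_hz, target_hz):
    """Transpose a sound effect from its default pitch P_i to the
    desired pitch P by naive sample-rate conversion (linear
    interpolation). This also shortens or lengthens the effect;
    a time-preserving pitch-shifter would avoid that side effect."""
    ratio = target_hz / current_hz
    x = np.asarray(samples, dtype=float)
    # Reading the buffer 'ratio' times faster raises the pitch by 'ratio'.
    positions = np.arange(0, len(x) - 1, ratio)
    return np.interp(positions, np.arange(len(x)), x)

# Hypothetical example: an effect with default pitch 440 Hz transposed
# up to a desired pitch of 660 Hz (ratio 1.5, so the output is shorter).
x = np.sin(2 * np.pi * 440.0 * np.arange(800) / 8000.0)
y = shift_pitch(x, 440.0, 660.0)
```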
- the sound effect generation instructions 208 are designed to define the selected and modified audio data as audio data FX which represents the desired sound effect.
- the sound effect generation instructions 208 are designed to implement the transmission having the characteristics defined previously, that is to say: the instant T 0 at which transmission starts, the audio data FX stemming from default audio data FX i corresponding to the genre of the ambient music and having the desired pitch P.
- FIG. 3 is a flow chart showing the steps in a method 300 for generating a sound effect, via the data processing system 100 in FIG. 1 executing the instructions of the game software in FIG. 2 , in accordance with an embodiment of the present disclosure.
- the data processing system 100 receives a request for execution of the game software 200 from the user through the man/machine interface 108 .
- Step 304 in response to reception of the request, the data processing system 100 launches the game software 200 .
- Step 305, in which the game is initialized: the processing unit 104 executing the sound effect analysis instructions 204 analyzes the audio data FX A , FX B and FX C , determines the respective pitch P A , P B and P C thereof, in the manner indicated with reference to FIG. 2 , and creates associations between the audio data FX A , FX B and FX C and the respective pitch P A , P B , P C thereof.
- Step 306 the central processing unit 104 executing the game instructions 202 supplies game information to the user through the output devices (screen, sound reproduction device, etc.) of the man/machine interface 108 on the basis of commands which are input by the user using the input devices 110 , 112 (keyboard, mouse, etc.) of the man/machine interface 108 .
- Step 308 the processing unit 104 executing the ambient music analysis instructions 206 recovers audio data MUS representing the ambient music.
- Step 310 the processing unit 104 executing the ambient music analysis instructions 206 analyzes the audio data MUS in order to determine at least one characteristic of the ambient music, for example the three characteristics BEAT, GENRE and KEY indicated previously.
- Step 316 the central processing unit 104 executing the game instructions 202 receives a command from the user through the input devices of the man/machine interface 108 in order to perform an action in the game, where the action is associated with a sound effect.
- Step 318 in response to reception of the command from the user, the central processing unit 104 executing the game instructions 202 sends a request R for emission of a sound effect.
- Step 320 in response to the request R, the central processing unit 104 executing the sound effect generation instructions 208 defines the three characteristics T 0 , FX i and P on the basis of, respectively, the three characteristics BEAT, GENRE and KEY of the ambient music which were determined during step 310 .
- Step 322 the central processing unit 104 executing the sound effect generation instructions 208 recovers the selected default audio data FX i which, as indicated previously, represents a sound effect having the default pitch P i .
- Step 324 the central processing unit 104 executing the sound effect generation instructions 208 modifies the default audio data FX i so that the sound effect which they represent changes from the pitch P i to the desired pitch P.
- the audio data modified in this manner are denoted FX.
- Step 326 the central processing unit 104 executing the sound effect generation instructions 208 performs the transmission at the instant T 0 , with the audio data FX which, firstly, represents a sound effect at the pitch P and, secondly, stems from the audio data FX i selected in accordance with the genre of the ambient music.
- the generated sound effect is harmoniously incorporated into the ambient music on several levels: on a rhythmic level as a result of the transmission instant T 0 , on a melodic level as a result of the pitch P of said sound effect, and on a stylistic level as a result of the selection of the audio data FX i that matches the genre of the ambient music.
- the method 300 then returns to Steps 306 and 308 .
- FIG. 4 is a block diagram illustrating an internal architecture of an example of a computing device, such as one in the data processing system 100 of FIGS. 1-3 , in accordance with one or more embodiments of the present disclosure.
- a computing device as referred to herein refers to any device with a processor capable of executing logic or coded instructions, and could be, as understood in context, a server, personal computer, game console, set top box, smart phone, pad/tablet computer or media device, to name a few such devices.
- internal architecture 400 includes one or more processing units (also referred to herein as CPUs) 412 , which interface with at least one computer bus 402 . Also interfacing with computer bus 402 are: persistent storage medium/media 406 ; network interface 414 ; memory 404 , e.g., random access memory (RAM), run-time transient memory, read only memory (ROM), etc.; media disk drive interface 408 , as an interface for a drive that can read and/or write to media, including removable media such as floppy disks, CD ROM, DVD, etc.; display interface 410 , as an interface for a monitor or other display device; keyboard interface 416 , as an interface for a keyboard; pointing device interface 418 , as an interface for a mouse or other pointing device; and miscellaneous other interfaces not shown individually, such as parallel and serial port interfaces, a universal serial bus (USB) interface, and the like.
- Memory 404 interfaces with computer bus 402 so as to provide information stored in memory 404 to CPU 412 during execution of software programs such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., one or more of process flows described herein.
- CPU 412 first loads computer executable process steps from storage, e.g., memory 404 , storage medium/media 406 , removable media drive, and/or other storage device.
- CPU 412 can then execute the stored process steps in order to execute the loaded computer-executable process steps.
- Stored data e.g., data stored by a storage device, can be accessed by CPU 412 during the execution of computer-executable process steps.
- Persistent storage medium/media 406 is a computer readable storage medium(s) that can be used to store software and data, e.g., an operating system and one or more application programs. Persistent storage medium/media 406 can also be used to store device drivers, such as one or more of a digital camera driver, monitor driver, printer driver, scanner driver, or other device drivers, web pages, content files, playlists and other files. Persistent storage medium/media 406 can further include program modules and data files used to implement one or more embodiments of the present disclosure.
- the system can be composed of a games console, an input for music, and an input for introducing the game into the console, the console being designed to implement the whole of the method.
- the input for the music may be a USB port or a digital disk reader.
- any number of the features of the different embodiments described herein may be combined into single or multiple embodiments, and alternate embodiments having fewer than, or more than, all of the features described herein are possible.
- Functionality may also be, in whole or in part, distributed among multiple components, in manners now known or to become known.
- myriad software/hardware/firmware combinations are possible in achieving the functions, features, interfaces and preferences described herein.
- the scope of the present disclosure covers conventionally known manners for carrying out the described features, functions and interfaces, as well as those variations and modifications that may be made to the hardware or software or firmware components described herein as would be understood by those skilled in the art now and hereafter.
- the saved instances of the sound effect audio data could be associated with pitches outside of execution of the game software, either automatically (with software analysis) during development of the game or manually by the musicians or sound engineers themselves.
- step 305 of the method 300 from FIG. 3 may be unnecessary.
- the sound effect audio data could be adapted not only to suit the possible musical genres of the ambient music but also to suit possible keys of the ambient music.
- the sound effect audio data could be adapted to suit the twenty-four keys corresponding to the twelve possible tonics and to the two possible modes as discussed above.
- each saved instance of audio data would be associated, in addition to a genre, with a tonic and with a mode.
- the sound effect generation instructions 208 would be designed to select, from among the default audio data, those which are associated not only with the musical genre of the ambient music but also with the key thereof.
- Step 322 of the method in FIG. 3 would be adapted as a result. Furthermore, it would no longer be necessary to analyze the sound effect audio data in order to determine the pitch thereof, nor to modify them in order to transpose said pitch, so that steps 305 and 324 of the method in FIG. 3 may be unnecessary.
- the sound effect generation instructions 208 could be designed to synthesize the sound effect, that is, to provide the audio data corresponding to said sound effect on the basis of sound synthesis taking account of the characteristics of the ambient music which are determined by the means 206 , particularly the characteristics KEY, GENRE and BEAT. There would thus no longer be any need for sound effects to be saved, nor for the analysis means 204 illustrated in FIG. 2 .
- the sound synthesis could comprise, firstly, a software synthesizer having a certain number of modifiable parameters (for example the fundamental frequency or the waveform from an oscillator, or else the cutoff frequency of a filter) and, secondly, a set of relationships, defined by mathematical expressions, between the parameters of the software synthesizer and the characteristics of the ambient music.
- steps 322 and 324 of the method in FIG. 3 would be replaced by a step involving determination of the parameters of the software synthesizer from, firstly, the characteristics of the ambient music KEY and GENRE and, secondly, the defined relationships, and via a step involving implementation of the software synthesizer with the determined parameters so that it synthesizes sound effect audio data.
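The "set of relationships, defined by mathematical expressions, between the parameters of the software synthesizer and the characteristics of the ambient music" can be sketched as plain mappings. All names and numeric choices below (the waveform-per-genre table, the cutoff rule) are illustrative assumptions, not part of the disclosure.

```python
# Standard equal-temperament frequencies for the twelve tonics (octave 4).
TONIC_HZ = {"C": 261.63, "C#": 277.18, "D": 293.66, "D#": 311.13,
            "E": 329.63, "F": 349.23, "F#": 369.99, "G": 392.00,
            "G#": 415.30, "A": 440.00, "A#": 466.16, "B": 493.88}

# Hypothetical relationship between genre and oscillator waveform.
GENRE_WAVEFORM = {"electro": "square", "soul": "sawtooth", "rock": "triangle"}

def synth_parameters(key, genre):
    """Derive software-synthesizer parameters from the KEY and GENRE
    characteristics: oscillator frequency from the tonic, waveform
    from the genre, and a filter cutoff proportional to the frequency."""
    tonic, mode = key
    freq = TONIC_HZ[tonic]
    return {
        "oscillator_hz": freq,
        "waveform": GENRE_WAVEFORM.get(genre, "sine"),
        # Minor keys get a darker timbre via a lower filter cutoff.
        "filter_cutoff_hz": freq * (4 if mode == "major" else 3),
    }

params = synth_parameters(("A", "minor"), "electro")
```

The scheduling of the synthesized effect would still use the BEAT characteristic, exactly as for the saved-sample variant described earlier.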
Abstract
Disclosed is a method and system for generating a sound effect in a piece of game software. In response to a request for emission of a sound effect from the game software, a transmission of audio data representing a sound effect is performed to a sound reproduction device. Further, audio data, referred to as ambient music, representing music in the course of reproduction is analyzed in order to determine at least one characteristic (BEAT, GENRE, KEY) of the ambient music. At least one characteristic of the transmission is defined from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music.
Description
- The present disclosure relates to a method and system for generating a sound effect in a piece of game software, and in particular for synchronizing the sound effects of a video game to background music played as a substitute for the original game music.
- Many video game players prefer to play music from their own collection instead of the original background score authored for the game. As a result, they may switch off the game's original sound effects, which may be perceived as unwanted or even annoying.
- The present disclosure relates to adjusting the sound effects of a video game in such a way that they blend perfectly with whatever piece of music the user has decided to play as a substitute for the original game music. The aim of the disclosure is to allow satisfactory immersion in the game, even when a user is using his own ambient music, by encouraging the user to keep the sound effects provided.
- According to some embodiments, the present disclosure discusses a method for generating a sound effect in a piece of game software. The method includes performing a transmission, to a sound reproduction device, of audio data representing a sound effect, in response to a request for emission of a sound effect from the game software. The method analyzes audio data representing music in the course of reproduction, referred to as ambient music, in order to determine at least one characteristic of the ambient music. The method then defines at least one characteristic of the transmission from the at least one characteristic of the ambient music.
- According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine instants at which the ambient music has a rhythmic beat in order to analyze audio data representing the ambient music for determining the at least one characteristic of the ambient music. The method then defines an instant at which the transmission starts from the instants at which the ambient music has a rhythmic beat in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
- According to some embodiments, the method includes defining, as the instant at which the transmission starts, an instant that follows the last instant at which the music has a rhythmic beat, in order to determine the instant at which the transmission starts from the instants at which the music has a rhythmic beat. The instant is defined by an integer number multiplied by the average time interval separating the instants at which the music has a rhythmic beat. According to some embodiments, the integer number is preferably one, so that the transmission starts one average time interval after the last beat.
- According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine a musical genre for the ambient music, in order to analyze the audio data representing the ambient music for determining the at least one characteristic of the ambient music. The method then includes selecting, from among several audio data associated with different musical genres, the audio data which is associated with the genre of the ambient music, where the audio data for the transmission stem from the selected audio data, in order to define the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
- According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine a key for the ambient music in order to analyze the audio data representing the ambient music for determining the at least one characteristic of the ambient music. The method then determines a desired pitch from the determined key in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
- According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine a bass line and a melody line for the ambient music. The analyzing step is also performed in order to analyze the audio data representing the ambient music in order to determine a key for the ambient music. The method also includes determining the key of the ambient music from the bass line and the melody line that have been determined.
- According to some embodiments, the method further includes recovering audio data representing a sound effect having a certain pitch, and modifying the recovered audio data so that the sound effect that they represent has the desired pitch, where the audio data of the transmission stem from the audio data that have been modified in this manner.
- According to some embodiments, the method further includes determining parameters of a software synthesizer from, firstly, the at least one characteristic of the ambient music and, secondly, from defined relationships. The method includes implementing the software synthesizer with the determined parameters so that it synthesizes sound effect audio data, where the audio data of the transmission stem from the audio data that have been synthesized in this manner.
- In another embodiment, a computer-readable storage medium is disclosed for generating a sound effect in a piece of game software.
- In yet another embodiment, a system is disclosed for generating a sound effect in a piece of game software. The system includes a data processing system which includes a sound reproduction device, a storage device on which a computer program has been saved, and a central processing unit for executing the instructions of the computer program.
- These and other aspects and embodiments will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
- In the drawing figures, which are not to scale, and where like reference numerals indicate like elements throughout the several views:
- FIG. 1 is a block diagram of a data processing system in accordance with an embodiment of the present disclosure;
- FIG. 2 is a block diagram illustrating instruction blocks in a piece of game software implemented by the data processing system of FIG. 1 in accordance with an embodiment of the present disclosure;
- FIG. 3 illustrates a flow chart for generating a sound effect in accordance with an embodiment of the present disclosure;
- FIG. 4 is a block diagram illustrating an internal architecture of a computing device in accordance with an embodiment of the present disclosure.
- Embodiments are now discussed in more detail referring to the drawings that accompany the present application. In the accompanying drawings, like and/or corresponding elements are referred to by like reference numbers.
- Various embodiments are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative of the disclosure that can be embodied in various forms. In addition, each of the examples given in connection with the various embodiments is intended to be illustrative, and not restrictive. Further, the figures are not necessarily to scale, some features may be exaggerated to show details of particular components (and any size, material and similar details shown in the figures are intended to be illustrative and not restrictive). Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the disclosed embodiments.
- The present disclosure is described below with reference to block diagrams and operational illustrations of methods and devices. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, can be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks.
- In some alternate implementations, the functions/acts noted in the blocks can occur out of the order noted in the operational illustrations. For example, two blocks shown in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments of methods presented and described as flowcharts in this disclosure are provided by way of example in order to provide a more complete understanding of the technology. The disclosed methods are not limited to the operations and logical flow presented herein. Alternative embodiments are contemplated in which the order of the various operations is altered and in which sub-operations described as being part of a larger operation are performed independently.
- The principles described herein may be embodied in many different forms. The described systems and methods allow for synchronizing the sound effects of a video game to background music. The described systems and methods adjust the sound effects in such a way that they blend perfectly with whichever piece of music the player has decided to play as a substitute for the original game music.
- For the purposes of this disclosure the term “end user”, “user” or “player” should be understood to refer to a consumer of data supplied by a data provider. By way of example, and not limitation, the term “user” can refer to a person who receives data provided by the data provider over the Internet in a browser session, or can refer to an automated software application which receives the data and stores or processes the data.
- For the purposes of this disclosure, a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
- For the purposes of this disclosure a module is a software, hardware, or firmware (or combinations thereof) system, process or functionality, or component thereof, that performs or facilitates the processes, features, and/or functions described herein (with or without human interaction or augmentation). A module can include sub-modules. Software components of a module may be stored on a computer readable medium. Modules may be integral to one or more computers (or servers), or be loaded and executed by one or more computers (or servers). One or more modules may be grouped into an engine or an application. As discussed herein, a background music analyzer, game sound effects analyzer and a sound effect scheduler can be a module that is a software, hardware, or firmware (or combinations thereof) system for automatically synchronizing game sound effects with background music.
- For the purposes of this disclosure the term “server” should be understood to refer to a service point which provides processing, database, and communication facilities. By way of example, and not limitation, the term “server” can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and applications software which support the services provided by the server.
- As discussed herein, many users of game software prefer to play music from their own music collection rather than the music initially provided with the game software. By way of non-limiting examples, there are several ways of replacing the music initially provided in the game software with other ambient music via a background music analyzer. By way of an example, at the game-level, the game software may provide an option to use an ambient music file (for example a file in mp3 format) from the user instead of the ambient music initially provided. As a non-limiting variant, at the system-level, users simply turn off the ambient music initially provided to replace it with ambient music from a piece of software other than the game software, generally a multimedia player such as the software VLC or the software foobar2000. As a further non-limiting variant, at the room-level, users simply turn off the ambient music initially provided to replace it with ambient music from a source other than the data processing system executing the game, for example a hi-fi system. Moreover, it has been noticed that users also often turn off the sound effects provided in the game software because they are perceived as disturbing the ambient music which they have chosen. As a result, they are less immersed in the game and the playing pleasure decreases. The background music analyzer is a library, integrated into a game, responsible for recording the music which is substituted for the original game music, either through direct access to the audio file (at the game-level), through OS-level interception of audio buffers (at the system-level), or through direct recording with a microphone (at the room-level).
- According to some embodiments, as discussed herein, a recorded signal can be split into overlapping frames, such as 100 ms frames. The following functions can be used to extract features for each frame: (1) Beat detection function: a function showing sharp peaks at beats; (2) Key detection function: indicating the probability that the music has been, over a past period of time, such as 20 s, in a specific tonality. According to some embodiments, a predetermined number of key detection functions are computed, one for each minor and major tonality. For example, 24 key detection functions are computed, one for each of the 12 minor and the 12 major tonalities. The beat detection function is computed by periodicity estimation and tracking of an onset detection function. The key detection function is computed by matching a bass and melody chromagram with note distribution templates computed for each scale. The chromagram is obtained by binning the frequency spectrum into a number of bins (e.g., 12 bins) mapped to a number of tones (e.g., 12 tones) of the equal temperament scale, or by encoding into a number of pitch classes (e.g., 12 pitch classes) the output of a multi-pitch estimator. Additional genre information can be extracted through the use of standard machine learning techniques, such as, but not limited to, an SVM or a Bayesian classifier using mixtures of Gaussian distributions trained on annotated audio files.
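The chromagram construction described above (binning the frequency spectrum into 12 pitch classes of the equal temperament scale) can be sketched as follows; the function name and frame parameters are illustrative assumptions.

```python
import numpy as np

def chromagram(frame, sample_rate):
    """Fold a frame's magnitude spectrum into 12 pitch-class bins of
    the equal-temperament scale, as in the key-detection step above."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    chroma = np.zeros(12)
    for mag, f in zip(spectrum, freqs):
        if f < 27.5:          # ignore bins below A0 (also skips DC)
            continue
        # MIDI note number for this bin, then pitch class (C = 0).
        midi = 69 + 12 * np.log2(f / 440.0)
        chroma[int(round(midi)) % 12] += mag
    return chroma

# An A-440 tone should put most of its energy into pitch class 9 (A).
sr = 8000
c = chromagram(np.sin(2 * np.pi * 440.0 * np.arange(800) / sr), sr)
```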
- As discussed herein, at least in view of the above discussion of the background music analyzer, a game sound effects analyzer analyzes each of the sound effect samples used in the game to detect their fundamental frequency, using an algorithm such as YIN. It is either used during the game development process, in which case all the sound effect samples produced for the game can be annotated with their pitch, or embedded in the game, in which case the analysis can be performed every time the game is launched. In the situation where the analysis is part of the game asset preparation procedure, different sound effects can also be annotated with a specific music genre, or different sets of sound effects can be created that match different music genres. For example, the destruction of an enemy in a game can be sonified by a synthesizer sound in the “electro” sample set, and by a brass hit in the “soul” sample set.
- As discussed herein, at least in view of the above discussion of the background music analyzer and game sound effects analyzer, a sound effect scheduler can be embedded in the game and may be responsible for the playback of the game sound effects. It can operate in two modes. In a normal operating mode, the samples are played at their original pitch immediately after the moment the action that triggers them has taken place. In a music-synchronous mode, the sound effect scheduler queries the background music analyzer to retrieve the times at which the past number of beats (e.g., 4 beats) have been played in the background music, and the most probable tonality of the background music. The positions in time of the past number of beats (e.g., 4 beats) can be used to anticipate the time at which the next beat will occur. Every time the player initiates, in or during the game, an action that triggers a sound effect, the sound effect is not played instantly; instead, it is delayed so that its playback will coincide with the next beat in the music. Additionally, the difference in pitch between the original sound effect sample (as computed by the sound effect analyzer) and the tonality of the music is compensated for, using transposition methods such as sample rate conversion or pitch-shifting. In the situation where the game sound effects bank has been annotated by genre, the genre information returned by the analysis module can be used to further restrict the set of sound effects played back.
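The beat anticipation and pitch compensation just described can be sketched as follows (a simplified model assuming a steady tempo; all function names are illustrative):

```python
def predict_next_beat(beat_times):
    """Extrapolate the next beat from the past few beat times
    (e.g. the last 4 beats reported by the background music analyzer)."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    return beat_times[-1] + avg

def playback_time(trigger_time, beat_times):
    """Delay a triggered sound effect so that it lands on the first
    predicted beat at or after the trigger."""
    avg = predict_next_beat(beat_times) - beat_times[-1]
    t = predict_next_beat(beat_times)
    while t < trigger_time:
        t += avg
    return t

def transpose_ratio(sample_pitch_hz, target_pitch_hz):
    """Playback-rate ratio for simple sample-rate-conversion
    transposition from the sample's pitch Pi to the desired pitch P
    (note that this method also changes the sample's duration)."""
    return target_pitch_hz / sample_pitch_hz
```

With beats observed at 0.0, 0.5, 1.0 and 1.5 s, an effect triggered at 1.7 s is held until 2.0 s; doubling the playback rate (ratio 2.0) raises the sample by one octave.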
- Certain embodiments will now be discussed in greater detail with reference to the figures. In general, with reference to
FIG. 1 , a data processing system 100 in accordance with an embodiment for synchronizing sound effects of a video game with background music is shown. The data processing system 100 includes a central unit 102 which contains a central processing unit 104, such as a microprocessor, and a storage device 106, such as a hard disk. The data processing system 100 has a man/machine interface 108 comprising input devices, such as for example a keyboard 110 and a mouse 112, and output devices, such as for example a display screen 114 and a sound reproduction device 118, 120. By way of example, the sound reproduction device can be comprised of a sound card 118 arranged in the central unit 102 and speakers 120 connected to the sound card 118. - The
data processing system 100 includes a sound capture device 122, such as a microphone connected to the sound card 118. The sound capture device 122 is designed to capture a musical source 124 which can be external to the data processing system 100. A non-limiting example of an external musical source 124 is a hi-fi system. - It is to be understood that the present disclosure may be implemented utilizing any number of computer technologies. For example, although certain embodiments relate to providing access to game software and ambient music via a computing device, the disclosure may be utilized over any computer network, including, for example, a wide area network, local area network, or corporate intranet. Similarly, a computing device discussed in the
data processing system 100 may be any computing device that may be coupled to a network, including, for example, personal digital assistants, Web-enabled cellular telephones, devices that dial into the network, mobile computers, personal computers, Internet appliances, wireless communication devices, game consoles and the like. Computing devices in data processing system 100 include a program for interfacing with the network. Such a program, as understood in the art, can be a window or browser, or other similar graphical user interface, for visually displaying the game to the end user (or player) on the display 114 of the computing device. Furthermore, servers for providing game software and/or ambient music external to the game software may be of any type, running any software, and the software modules, objects or plug-ins may be written in any suitable programming language. -
FIG. 2 illustrates instruction blocks in a piece of game software implemented by the data processing system 100 of FIG. 1 in accordance with some embodiments of the present disclosure. In FIG. 2 , audio data FXA, FXB and FXC are saved in the storage device 106 of the data processing system of FIG. 1 . The audio data FXA, FXB or FXC represent a sound effect and are associated with respective musical genres GA, GB and GC. A piece of game software 200 allowing a user to play a game is likewise saved in the storage device 106. - The
game software 200 includes game instructions 202 which are designed to supply game information to a user through the output devices of the man/machine interface 108, in that the game information evolves on the basis of commands input by a user using the input devices (e.g., 110, 112) of the man/machine interface 108. The game instructions 202 are designed to send a request R for emission of a sound effect when the game is being executed. By way of example, the request R is sent upon every action in the game which is performed by the user using the input devices of the man/machine interface 108, in that said action is associated with a sound effect, as discussed below. - The
game software 200 includes sound effect analysis instructions 204. The sound effect analysis instructions 204 are designed to analyze each saved instance of audio data FXA, FXB and FXC and to determine the pitch PA, PB and PC thereof. According to some exemplary embodiments, the pitch corresponds to a fundamental frequency for the audio data, as determined by means of, for example, a YIN algorithm. The sound effect analysis instructions 204 are furthermore designed to create associations between the audio data FXA, FXB or FXC and the respective pitch PA, PB or PC thereof. That is, a pitch value PA, PB or PC is determined from the audio samples FXA, FXB or FXC respectively, and this determination is taken into account for assigning a pitch value to the sound effects. - The
game software 200 includes instructions 206 for analyzing a piece of music in the course of reproduction either by the reproduction device 118, 120 or by the external reproduction device 124. This music is referred to as ambient music. The ambient music analysis instructions 206 are designed to recover audio data MUS representing the ambient music. In a first case of replacing ambient music, for example, the ambient music analysis instructions 206 are designed to directly access the music file indicated by the user in the game software options. The game software options can be a dialog box, window, menu or any other graphical user interface element through which the user can configure aspects of the game, such as input controls, sound volume, music selection, etc. In a second case of replacing ambient music, for example, the ambient music analysis instructions 206 are designed to intercept the audio buffers of an operating system running on the data processing system 100 and executing the game software. In a third case of replacing ambient music, for example, the ambient music analysis instructions 206 are designed to use the sound capture device 122 to convert the ambient music into the audio data MUS. - The ambient
music analysis instructions 206 are designed to analyze the audio data MUS in order to determine at least one characteristic of the ambient music. More precisely, in an example, three characteristics of the ambient music are determined. Thus, the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine instants, denoted as BEAT in FIG. 2 , at which the ambient music has a rhythmic beat. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a musical genre, denoted GENRE in FIG. 2 , for the ambient music. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a key, denoted KEY in FIG. 2 , for the ambient music. A key is defined as the set of a tonic and a mode. By way of example, the tonic is one of the twelve notes in the classical scale (C, C sharp, D, D sharp, E, F, F sharp, G, G sharp, A, A sharp, B), and the mode is chosen from among the harmonic major mode and the harmonic minor mode. There are thus twenty-four possible keys. To perform the analysis, for example, the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine a bass line and a melody line for the ambient music. The key of the music is then determined from the bass line and the melody line. - The
game software 200 has sound effect generation instructions 208. This coincides with the sound effect scheduler discussed above. The sound effect generation instructions 208 are designed to, in response to the sending of the request R, define at least one characteristic for a transmission of audio data, denoted FX in FIG. 2 and representing a sound effect, to the reproduction device 118, 120. This at least one transmission characteristic is determined from the at least one ambient music characteristic determined by the ambient music analysis instructions 206. More precisely, according to some embodiments, and by way of a non-limiting example, the sound effect generation instructions 208 are designed to define three transmission characteristics from, respectively, the three ambient music characteristics BEAT, GENRE and KEY. Thus, the sound effect generation instructions 208 are designed to define an instant T at which the transmission starts from the instants BEAT at which the ambient music has a rhythmic beat. By way of example, the sound effect generation instructions 208 are designed to define this instant T as following the last rhythmic beat instant by a time interval equal to an integer number of times the average time interval separating the rhythmic beat instants. According to some embodiments, the transmission occurs one such average time interval after the last rhythmic beat instant. - Furthermore, the sound
effect generation instructions 208 are designed to select, from among the default audio data FXA, FXB and FXC, those which are associated with the musical genre GENRE of the ambient music, as provided by the instructions 206. The selected default audio data will subsequently be denoted FXi and the pitch thereof Pi. Furthermore, the sound effect generation instructions 208 are designed to determine a desired pitch P from the key KEY of the ambient music MUS as provided by the instructions 206. Preferably, according to some embodiments, the desired pitch P is the tonic or the fifth of the key KEY. The sound effect generation instructions 208 are designed to recover the selected default audio data FXi which, as indicated previously, have a default pitch Pi. - The sound
effect generation instructions 208 are designed to modify the recovered default audio data FXi so that the sound effect which they represent has the desired pitch P. The sound effect generation instructions 208 are designed to define the selected and modified audio data as audio data FX which represents the desired sound effect. The sound effect generation instructions 208 are designed to implement the transmission having the characteristics defined previously, that is to say: the instant T at which transmission starts, and the audio data FX stemming from the default audio data FXi corresponding to the genre of the ambient music and having the desired pitch P. - Having discussed the functional and executable components for generating a sound effect in a piece of game software, its operation will now be described with reference to
FIG. 3 . FIG. 3 is a flow chart showing the steps in a method 300 for generating a sound effect, via the data processing system 100 in FIG. 1 executing the instructions of the game software in FIG. 2 , in accordance with an embodiment of the present disclosure. In Step 302, the data processing system 100 receives a request for execution of the game software 200 from the user through the man/machine interface 108. In Step 304, in response to reception of the request, the data processing system 100 launches the game software 200. In Step 305, in which the game is initialized, the processing unit 104 executing the sound effect analysis instructions 204 analyzes the audio data FXA, FXB and FXC, determines the respective pitch PA, PB and PC thereof, in the manner indicated with reference to FIG. 2 , and creates associations between the audio data FXA, FXB and FXC and the respective pitch PA, PB, PC thereof. - In
Step 306, the central processing unit 104 executing the game instructions 202 supplies game information to the user through the output devices (screen, sound reproduction device, etc.) of the man/machine interface 108 on the basis of commands which are input by the user using the input devices 110, 112 (keyboard, mouse, etc.) of the man/machine interface 108. In parallel with Step 306, in Step 308, the processing unit 104 executing the ambient music analysis instructions 206 recovers audio data MUS representing the ambient music. Still in parallel with Step 306, in Step 310, the processing unit 104 executing the ambient music analysis instructions 206 analyzes the audio data MUS in order to determine at least one characteristic of the ambient music, for example the three characteristics BEAT, GENRE and KEY indicated previously. - In
Step 316, the central processing unit 104 executing the game instructions 202 receives a command from the user through the input devices of the man/machine interface 108 in order to perform an action in the game, where the action is associated with a sound effect. In Step 318, in response to reception of the command from the user, the central processing unit 104 executing the game instructions 202 sends a request R for emission of a sound effect. In Step 320, in response to the request R, the central processing unit 104 executing the sound effect generation instructions 208 defines the three characteristics T, FXi and P on the basis of, respectively, the three characteristics BEAT, GENRE and KEY of the ambient music which were determined during Step 310. In Step 322, the central processing unit 104 executing the sound effect generation instructions 208 recovers the selected default audio data FXi which, as indicated previously, represents a sound effect having the default pitch Pi. In Step 324, the central processing unit 104 executing the sound effect generation instructions 208 modifies the default audio data FXi so that the sound effect which they represent changes from the pitch Pi to the desired pitch P. The audio data modified in this manner are denoted FX. In Step 326, the central processing unit 104 executing the sound effect generation instructions 208 performs the transmission at the instant T, with the audio data FX which, firstly, represents a sound effect at the pitch P and, secondly, stems from the audio data FXi selected in accordance with the genre of the ambient music. - Thus, the generated sound effect is harmoniously incorporated into the ambient music on several levels: on a rhythmic level as a result of the transmission instant T, on a melodic level as a result of the pitch P of said sound effect, and on a stylistic level as a result of the selection of the audio data FXi that matches the genre of the ambient music. The
method 300 then returns to Steps 306 and 308. -
FIG. 4 is a block diagram illustrating an internal architecture of an example of a computing device, as discussed indata processing system 100 ofFIGS. 1-3 , in accordance with one or more embodiments of the present disclosure. - A computing device as referred to herein refers to any device with a processor capable of executing logic or coded instructions, and could be, as understood in context, a server, personal computer, game console, set top box, smart phone, pad/tablet computer or media device, to name a few such devices.
- As shown in the example of
FIG. 4 , internal architecture 400 includes one or more processing units (also referred to herein as CPUs) 412, which interface with at least one computer bus 402. Also interfacing with computer bus 402 are persistent storage medium/media 406, network interface 414, memory 404, e.g., random access memory (RAM), run-time transient memory, read only memory (ROM), etc., media disk drive interface 408 as an interface for a drive that can read and/or write to media including removable media such as floppy, CD ROM, DVD, etc. media, display interface 410 as interface for a monitor or other display device, keyboard interface 416 as interface for a keyboard, pointing device interface 418 as an interface for a mouse or other pointing device, and miscellaneous other interfaces not shown individually, such as parallel and serial port interfaces, a universal serial bus (USB) interface, and the like. - Memory 404 interfaces with computer bus 402 so as to provide information stored in memory 404 to CPU 412 during execution of software programs such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., one or more of process flows described herein. CPU 412 first loads computer executable process steps from storage, e.g., memory 404, storage medium/media 406, removable media drive, and/or other storage device. CPU 412 can then execute the stored process steps in order to execute the loaded computer-executable process steps. Stored data, e.g., data stored by a storage device, can be accessed by CPU 412 during the execution of computer-executable process steps.
- Persistent storage medium/media 406 is a computer readable storage medium(s) that can be used to store software and data, e.g., an operating system and one or more application programs. Persistent storage medium/media 406 can also be used to store device drivers, such as one or more of a digital camera driver, monitor driver, printer driver, scanner driver, or other device drivers, web pages, content files, playlists and other files. Persistent storage medium/media 406 can further include program modules and data files used to implement one or more embodiments of the present disclosure.
- Thus, from the above discussion, it is clear that a
computer program 200 and a method 300 as described above allow harmonious incorporation of sound effects into any kind of ambient music chosen by a user, or even predefined by the game software. - Those skilled in the art will recognize that the methods and systems of the present disclosure may be implemented in many manners and as such are not to be limited by the foregoing exemplary embodiments and examples. In other words, functional elements may be performed by single or multiple components, in various combinations of hardware, software or firmware, and individual functions may be distributed among software applications at either the client or the server or both.
- Thus, for example, the system can be composed of a games console, an input for music, and an input for introducing the game into the console, the console being provided so as to implement the whole of the method. The input for the music may be a USB port or a digital disk reader.
- In this regard, any number of the features of the different embodiments described herein may be combined into single or multiple embodiments, and alternate embodiments having fewer than, or more than, all of the features described herein are possible. Functionality may also be, in whole or in part, distributed among multiple components, in manners now known or to become known. Thus, myriad software/hardware/firmware combinations are possible in achieving the functions, features, interfaces and preferences described herein. Moreover, the scope of the present disclosure covers conventionally known manners for carrying out the described features and functions and interfaces, as well as those variations and modifications that may be made to the hardware or software or firmware components described herein as would be understood by those skilled in the art now and hereafter.
- In particular, the saved instances of the sound effect audio data could be associated with pitches outside of execution of the game software, either automatically (with software analysis) during development of the game or by the musicians or sound engineers themselves. In this case, Step 305 of the
method 300 from FIG. 3 may be unnecessary. - Furthermore, the sound effect audio data could be adapted not only to suit the possible musical genres of the ambient music but also to suit possible keys of the ambient music. For example, the sound effect audio data could be adapted to suit the twenty-four keys corresponding to the twelve possible tonics and to the two possible modes as discussed above. Thus, each saved instance of audio data would be associated, in addition to a genre, with a tonic and with a mode. According to some embodiments, the sound
effect generation instructions 208 would be designed to select, from among the default audio data, those which are associated not only with the musical genre of the ambient music but also with the key thereof. Step 322 of the method in FIG. 3 would be adapted as a result. Furthermore, it would no longer be necessary to analyze the sound effect audio data in order to determine the pitch thereof, nor to modify them in order to transpose said pitch, so that Steps 305 and 324 of the method in FIG. 3 may be unnecessary. - Furthermore, the sound
effect generation instructions 208 could be designed to synthesize the sound effect, that is, to provide the audio data corresponding to said sound effect on the basis of sound synthesis taking account of the characteristics of the ambient music which are determined by the ambient music analysis instructions 206, particularly the characteristics KEY, GENRE and BEAT. There would thus no longer be any need for sound effects to be saved, nor for the analysis instructions 204 illustrated in FIG. 2 . By way of example, the sound synthesis could comprise, firstly, a software synthesizer having a certain number of modifiable parameters (for example the fundamental frequency or the waveform of an oscillator, or else the cutoff frequency of a filter) and, secondly, a set of relationships, defined by mathematical expressions, between the parameters of the software synthesizer and the characteristics of the ambient music. Thus, Steps 322 and 324 of the method in FIG. 3 would be replaced by a step involving determination of the parameters of the software synthesizer from, firstly, the characteristics of the ambient music KEY and GENRE and, secondly, the defined relationships, and by a step involving implementation of the software synthesizer with the determined parameters so that it synthesizes sound effect audio data. - While the system and method have been described in terms of one or more embodiments, it is to be understood that the disclosure need not be limited to the disclosed embodiments. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
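The relationships between ambient music characteristics and synthesizer parameters described in the synthesis variant above could take the following form (a minimal sketch; the genre table, parameter names and values are invented for the example, and a real game would define its own mapping):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Hypothetical genre-to-timbre table; the values are purely illustrative.
GENRE_PARAMS = {
    "electro": {"waveform": "square", "cutoff_hz": 8000.0},
    "soul": {"waveform": "sawtooth", "cutoff_hz": 3000.0},
}

def synth_params(tonic, mode, genre, octave=4):
    """Derive oscillator parameters from the KEY (tonic and mode)
    and GENRE characteristics of the ambient music."""
    # Fundamental of the tonic relative to A4 = 440 Hz (pitch class 9).
    semitones = NOTES.index(tonic) - 9 + 12 * (octave - 4)
    params = {"frequency_hz": 440.0 * 2 ** (semitones / 12), "mode": mode}
    params.update(GENRE_PARAMS.get(genre,
                                   {"waveform": "sine", "cutoff_hz": 5000.0}))
    return params
```

For ambient music analyzed as A minor in the "electro" genre, this yields a 440 Hz square-wave oscillator; for C major "soul", a sawtooth near 261.6 Hz.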
Claims (10)
1. A method comprising:
in response to a request for emission of a sound effect from game software, accessing, via a sound reproduction device, audio data representing the sound effect;
analyzing, via the sound reproduction device, ambient music from the game software in order to determine at least one characteristic of the ambient music, said ambient music comprising audio data representing music in a course of reproduction of the game software;
defining, via the sound reproduction device, at least one characteristic of the sound effect based on the at least one characteristic of the ambient music.
2. The method of claim 1 , further comprising:
analyzing the audio data of said ambient music for determining instants at which the ambient music has a rhythmic beat;
determining the at least one characteristic of the ambient music based on the instants of the rhythmic beat; and
defining an instant from the instants at which the ambient music has the rhythmic beat in order to determine the at least one characteristic of the sound effect in accordance with the at least one characteristic of the ambient music.
3. The method of claim 2 , further comprising:
defining the instant as an instant that follows a last instant at which the music has a rhythmic beat by a time interval equal to an integer number of times an average time interval separating the instants at which the music has a rhythmic beat, said defining facilitating determination of the instant from the instants at which the music has the rhythmic beat.
4. The method of claim 3 , further comprising:
analyzing the audio data representing the ambient music in order to determine a musical genre for the ambient music, said determination comprises determining the at least one characteristic of the ambient music; and
selecting, from the audio data associated with different musical genres of ambient music, audio data associated with at least one of said different genres of the ambient music in order to define the at least one characteristic of the sound effect in accordance with the at least one characteristic of the ambient music, wherein the accessed audio data corresponds to the selected audio data.
5. The method of claim 4 , further comprising:
analyzing the audio data representing the ambient music for determining a key for the ambient music in order to analyze the audio data representing the ambient music, said analyzing comprises determining the at least one characteristic of the ambient music; and
determining a desired pitch from the determined key in order to determine the at least one characteristic of the sound effect in accordance with the at least one characteristic of the ambient music.
6. The method of claim 5 , further comprising:
analyzing the audio data representing the ambient music to determine a bass line and a melody line for the ambient music in order to analyze the audio data representing the ambient music and for determining the key for the ambient music; and
determining the key of the ambient music from the bass line and the melody line.
7. The method of claim 6 , further comprising:
recovering audio data representing a sound effect having a certain pitch; and
modifying the recovered audio data so that the sound effect has the desired pitch, wherein the modified audio data corresponds to the selected audio data.
8. The method of claim 3 , further comprising:
determining parameters of a software synthesizer from the at least one characteristic of the ambient music and defined relationships between the ambient music and the sound effect; and
implementing the software synthesizer with the determined parameters so that it synthesizes sound effect audio data, wherein the accessed audio data corresponds to the synthesized audio data.
9. A computer-readable storage medium tangibly encoded with computer-executable instructions, that when executed by a computing device, perform a method comprising:
in response to a request for emission of a sound effect from game software, accessing, via a sound reproduction device, audio data representing the sound effect;
analyzing, via the sound reproduction device, ambient music from the game software in order to determine at least one characteristic of the ambient music, said ambient music comprising audio data representing music in a course of reproduction of the game software;
defining, via the sound reproduction device, at least one characteristic of the sound effect based on the at least one characteristic of the ambient music.
10. A data processing system comprising:
a sound reproduction device;
a storage device on which a computer program comprising computer-executable instructions is stored;
a central processing unit for executing the computer-executable instructions stored at the storage device, where upon execution, the central processing unit performs a method comprising:
in response to a request for emission of a sound effect from game software, accessing, via a sound reproduction device, audio data representing the sound effect;
analyzing, via the sound reproduction device, ambient music from the game software in order to determine at least one characteristic of the ambient music, said ambient music comprising audio data representing music in a course of reproduction of the game software;
defining, via the sound reproduction device, at least one characteristic of the sound effect based on the at least one characteristic of the ambient music.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR1153197A FR2974226A1 (en) | 2011-04-12 | 2011-04-12 | METHOD FOR GENERATING SOUND EFFECT IN GAME SOFTWARE, ASSOCIATED COMPUTER PROGRAM, AND COMPUTER SYSTEM FOR EXECUTING COMPUTER PROGRAM INSTRUCTIONS. |
| FR11/53,197 | 2011-04-12 | ||
| PCT/IB2011/003221 WO2012140468A1 (en) | 2011-04-12 | 2011-10-12 | Method for generating a sound effect in a piece of game software, associated computer program and data processing system for executing instructions of the computer program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20140128160A1 true US20140128160A1 (en) | 2014-05-08 |
Family
ID=45558781
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/264,189 Abandoned US20140128160A1 (en) | 2011-04-12 | 2011-10-12 | Method and system for generating a sound effect in a piece of game software |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20140128160A1 (en) |
| FR (1) | FR2974226A1 (en) |
| WO (1) | WO2012140468A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106935236A (en) * | 2017-02-14 | 2017-07-07 | Fudan University | Piano performance evaluation method and system |
| US10453434B1 (en) | 2017-05-16 | 2019-10-22 | John William Byrd | System for synthesizing sounds from prototypes |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4497264B2 (en) * | 2001-01-22 | 2010-07-07 | 株式会社セガ | Game program, game apparatus, sound effect output method, and recording medium |
| JP2003122358A (en) * | 2001-10-11 | 2003-04-25 | Sega Corp | Acoustic signal output method, acoustic signal generation apparatus, and program |
| US7828657B2 (en) * | 2003-05-20 | 2010-11-09 | Turbine, Inc. | System and method for enhancing the experience of participant in a massively multiplayer game |
| US7674966B1 (en) * | 2004-05-21 | 2010-03-09 | Pierce Steven M | System and method for realtime scoring of games and other applications |
| GB2465917B (en) * | 2005-05-03 | 2010-08-04 | Codemasters Software Co | Rhythm action game apparatus and method |
| US8058544B2 (en) * | 2007-09-21 | 2011-11-15 | The University Of Western Ontario | Flexible music composition engine |
| EP2441071A2 (en) * | 2009-06-12 | 2012-04-18 | Jam Origin APS | Generative audio matching game system |
2011
- 2011-04-12 FR FR1153197A patent/FR2974226A1/en active Pending
- 2011-10-12 US US13/264,189 patent/US20140128160A1/en not_active Abandoned
- 2011-10-12 WO PCT/IB2011/003221 patent/WO2012140468A1/en not_active Ceased
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150031454A1 (en) * | 2013-07-23 | 2015-01-29 | Igt | Beat synchronization in a game |
| US9192857B2 (en) * | 2013-07-23 | 2015-11-24 | Igt | Beat synchronization in a game |
| US9607469B2 (en) | 2013-07-23 | 2017-03-28 | Igt | Beat synchronization in a game |
| US10473224B2 (en) | 2015-03-25 | 2019-11-12 | Tacmina Corporation | Check valve and valve body |
| US9947170B2 (en) | 2015-09-28 | 2018-04-17 | Igt | Time synchronization of gaming machines |
| US10841702B2 (en) * | 2019-04-22 | 2020-11-17 | Nintendo Co., Ltd. | Computer-readable non-transitory storage medium having sound processing program stored therein, sound processing system, sound processing apparatus, and sound processing method |
| WO2020263073A1 (en) * | 2019-06-28 | 2020-12-30 | Ciscomani Davila Geovani Francesco | Two-way device for measuring electricity consumption with anti-theft system for monitoring an alternative energy source |
| CN112863466A (en) * | 2021-01-07 | 2021-05-28 | 广州欢城文化传媒有限公司 | Audio social voice changing method and device |
| EP4105924A1 (en) * | 2021-06-15 | 2022-12-21 | Lemon Inc. | System and method for selecting points in a music and audio signal for placement of sound effect |
| US20230128812A1 (en) * | 2021-10-21 | 2023-04-27 | Universal International Music B.V. | Generating tonally compatible, synchronized neural beats for digital audio files |
| US12217730B2 (en) * | 2021-10-21 | 2025-02-04 | Universal International Music B.V. | Generating tonally compatible, synchronized neural beats for digital audio files |
| US20230390642A1 (en) * | 2022-06-02 | 2023-12-07 | Electronic Arts Inc. | Neural Synthesis of Sound Effects Using Deep Generative Models |
| US12420192B2 (en) * | 2022-06-02 | 2025-09-23 | Electronic Arts Inc. | Neural synthesis of sound effects using deep generative models |
| US12314554B1 (en) | 2024-08-23 | 2025-05-27 | Pocket Bard LLC | Apparatus and a method for providing a customizable and interactive ambient sound experience |
Also Published As
| Publication number | Publication date |
|---|---|
| FR2974226A1 (en) | 2012-10-19 |
| WO2012140468A1 (en) | 2012-10-18 |
Similar Documents
| Publication | Title |
|---|---|
| US20140128160A1 (en) | Method and system for generating a sound effect in a piece of game software |
| CN115066681B (en) | Music content generation | |
| US10799795B1 (en) | Real-time audio generation for electronic games based on personalized music preferences | |
| US7979146B2 (en) | System and method for automatically producing haptic events from a digital audio signal | |
| JP4640407B2 (en) | Signal processing apparatus, signal processing method, and program | |
| US20140080606A1 (en) | Methods and systems for generating a scenario of a game on the basis of a piece of music | |
| CN116194989A (en) | System and method for hierarchical audio source separation | |
| CN105684077A (en) | Automatically expanding sets of audio samples | |
| CN112669811B (en) | Song processing method and device, electronic equipment and readable storage medium | |
| CN109410972A (en) | Generate the method, apparatus and storage medium of sound effect parameters | |
| CN114078464B (en) | Audio processing method, device and equipment | |
| US11899713B2 (en) | Music streaming, playlist creation and streaming architecture | |
| US20180173400A1 (en) | Media Content Selection | |
| JP7694074B2 (en) | Data generation device, data generation method and program | |
| CN116185167A (en) | Haptic feedback method, system and related equipment for music track matching vibration | |
| US20240379130A1 (en) | Signal processing device and signal processing method | |
| CN113781989A (en) | Audio animation playing and rhythm stuck point identification method and related device | |
| CN112509538A (en) | Audio processing method, device, terminal and storage medium | |
| US20240168994A1 (en) | Music selection system and method | |
| CN120077430A (en) | Audio synthesis for synchronous communication | |
| Hsu | Strategies for managing timbre and interaction in automatic improvisation systems | |
| Carey | Designing for cumulative interactivity: the _derivations system | |
| WO2023273440A1 (en) | Method and apparatus for generating plurality of sound effects, and terminal device | |
| CN114896448A (en) | Song customization method and device, electronic equipment and storage medium | |
| US20240184515A1 (en) | Vocal Attenuation Mechanism in On-Device App |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: MXP4, FRANCE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GILLET, OLIVIER;PIESCZEK-ALI, ELHAD;REEL/FRAME:027054/0763. Effective date: 20111012 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |