WO2025064375A1 - Wireless communication profile management

Info

Publication number: WO2025064375A1
Application number: PCT/US2024/047002
Authority: WIPO (PCT)
Prior art keywords: playback device, playback, group, audio, communicating
Legal status: Pending
Other languages: French (fr)
Inventor: Jason P. White
Current Assignee: Sonos Inc
Original Assignee: Sonos Inc
Application filed by Sonos Inc

Classifications

    • H04N21/4307: Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • H04N21/43615: Interfacing a home network, e.g. for connecting the client to a plurality of peripherals
    • H04N21/8113: Monomedia components involving special audio data comprising music, e.g. song in MP3 format
    • H04R27/00: Public address systems
    • H04R5/04: Circuit arrangements for stereophonic arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers
    • H04W4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H04R2227/003: Digital PA systems using, e.g., LAN or internet
    • H04R2227/005: Audio distribution systems for home, i.e. multi-room use
    • H04R2420/03: Connection circuits to selectively connect loudspeakers or headphones to amplifiers
    • H04R2420/07: Applications of wireless loudspeakers or wireless microphones
    • H04R3/12: Circuits for distributing signals to two or more loudspeakers

Definitions

  • the present disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.
  • Media content (for example, songs, podcasts, video sound) can be streamed to playback devices such that each room with a playback device can play back corresponding different media content.
  • rooms can be grouped together for synchronous playback of the same media content, and/or the same media content can be heard in all rooms synchronously.
  • Figure 1A is a partial cutaway view of an environment having a media playback system configured in accordance with aspects of the disclosed technology.
  • Figure 1B is a schematic diagram of the media playback system of Figure 1A and one or more networks.
  • Figure 1C is a block diagram of a playback device.
  • Figure 1D is a block diagram of a playback device.
  • Figure 1E is a block diagram of a bonded playback device.
  • Figure 1F is a block diagram of a network microphone device.
  • Figure 1G is a block diagram of a playback device.
  • Figure 1H is a partial schematic diagram of a control device.
  • Figures 1I through 1L are schematic diagrams of corresponding media playback system zones.
  • Figure 1M is a schematic diagram of media playback system areas.
  • Figure 2A is a front isometric view of a playback device configured in accordance with aspects of the disclosed technology.
  • Figure 2B is a front isometric view of the playback device of Figure 2A without a grille.
  • Figure 2C is an exploded view of the playback device of Figure 2A.
  • Figure 3A is a front view of a network microphone device configured in accordance with aspects of the disclosed technology.
  • Figure 3B is a side isometric view of the network microphone device of Figure 3A.
  • Figure 3C is an exploded view of the network microphone device of Figures 3A and 3B.
  • Figure 3D is an enlarged view of a portion of Figure 3B.
  • Figure 3E is a block diagram of the network microphone device of Figures 3A through 3D.
  • Figure 3F is a schematic diagram of an example voice input.
  • Figures 4A through 4D are schematic diagrams of a control device in various stages of operation in accordance with aspects of the disclosed technology.
  • Figure 5 is a front view of a control device.
  • Figure 6 is a message flow diagram of a media playback system.
  • Figure 7 is a flowchart illustrating an example method for selecting a communication technique for transmitting audio content between two playback devices.
  • Figure 8 is a flowchart illustrating an example method for synchronous playback of audio content at two or more playback devices, wherein the audio content is communicated between at least two playback devices using a communication technique that is selected based on user input.
  • Figure 9 illustrates an example decision matrix that can be used to select an audio content communication technique.
  • Figure 10A is a schematic diagram that illustrates formation of a bonded pair of playback devices.
  • Figure 10B is a schematic diagram that illustrates formation of a synchrony pair of playback devices.
  • Figure 10C is a schematic diagram that illustrates formation of a synchrony group of playback devices that receives audio content in accordance with the Broadcast Audio BLUETOOTH profile.
  • Figure 10D is a schematic diagram that illustrates connecting a BLUETOOTH-only device to a synchrony group of playback devices that receive audio content via WI-FI transmission.
  • Figure 11A is a schematic diagram that illustrates a control device sending commands to a primary playback device that subsequently communicates the audio content to one or more secondary playback devices which form an “off-LAN” synchrony group.
  • Figure 11B is a schematic diagram that illustrates a control device sending commands to a primary playback device which forms a bonded stereo pair with a secondary playback device.
  • the drawings are for the purpose of illustrating example embodiments, but those of ordinary skill in the art will understand that the technology disclosed herein is not limited to the arrangements and/or instrumentality shown in the drawings.
  • Sonos has a long history of creating innovative wireless audio products that provide an intuitive, convenient, and straightforward user experience. For example, critics and end users alike have praised Sonos for developing wireless multizone speaker systems that allow users to easily extend audio playback across multiple wireless playback devices. These audio systems can dynamically adapt to the requirements of a given implementation, thereby providing a consistent user experience notwithstanding changing conditions. For instance, such systems can be deployed without regard to the network resources which may — or may not — be available in a given operating environment.
  • an audio source may seamlessly transition from communicating with a playback device via a first communication technique (for example, via a local area network WI-FI connection) to communicating with the playback device via a second communication technique (for example, via a BLUETOOTH audio connection) when the first communication technique becomes unstable or is otherwise no longer reliably available.
  • Sonos has identified shortcomings of existing wireless audio communication protocols and has further identified functionality provided by developments in wireless networking technology that can be leveraged to address these shortcomings to further enhance the user experience.
  • one such shortcoming involves the BLUETOOTH Advanced Audio Distribution Profile (A2DP), also referred to as “BLUETOOTH Classic”.
  • A2DP is unable to ensure that multiple audio sink devices render their audio streams at exactly the same time such that playback is synchronized across the multiple devices.
  • BLUETOOTH Low Energy (LE) Audio introduces BLUETOOTH LE Isochronous Channels, which provide an improved way of transferring time-bounded data between devices.
  • BLUETOOTH LE Isochronous Channels has, in turn, enabled bidirectional point-to-point audio transmission to a limited number of devices (referred to herein as communication via the “Direct LE Audio” BLUETOOTH profile) and unidirectional audio transmission to a larger number of recipients (referred to herein as communication via the “Broadcast Audio” BLUETOOTH profile).
  • BLUETOOTH LE Audio also supports Multi-Stream Audio, which enables transmission of multiple, independent, synchronized audio streams between an audio source device and one or more audio sink devices.
  • Multi-Stream Audio can be used to, for example, send independent audio streams to truly wireless left/right earbuds.
  • a “profile” can be understood as defining the rules for how to use a wireless communication technology, such as BLUETOOTH, for a particular application.
  • a particular communication profile may dictate data packet content, such as where packets transmitted according to a point-to-point profile may include address information, while packets transmitted according to a broadcast profile may not include unique address information for particular recipients. Sonos has leveraged the Direct LE Audio and Broadcast Audio profiles to improve user experience. These profiles are briefly described in turn.
  • the Direct LE Audio BLUETOOTH profile defines a communication technique that provides the ability to transmit multiple, independent, bidirectional, synchronized audio streams between a single central device (for example, an audio source such as a smartphone) and a finite number of peripheral devices (for example, one or more audio sinks such as one or more wireless headsets).
  • the Direct LE Audio profile is therefore sometimes referred to as a one-to-one communication technique.
  • a connected isochronous group (CIG), which can include multiple connected isochronous streams (CISs), is created by the audio source.
  • Each CIS is a point-to-point data stream between the central device and the peripheral device that provides bidirectional communication with acknowledgement.
  • Bidirectional communication enables a playback device to send audio control information to a control device or audio source device. It also provides robust user interface control at both the audio source device and the audio sink device. Bidirectional communication also provides an improved user experience for assisted listening devices, headsets, and hands-free telephony devices that include microphone input and control features. This improves the performance of truly wireless earbuds, thereby providing a better stereo imaging experience, more seamless voice control services, and smoother switching between audio sources.
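  • To make the CIG/CIS topology described above concrete, the following sketch models it as plain data structures. This is a minimal illustration only: the class and field names are invented for this example and do not correspond to any actual BLUETOOTH stack API.

```python
# A minimal data model of the Direct LE Audio topology described above.
# All class and field names are illustrative, not a real BLUETOOTH API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConnectedIsochronousStream:
    """One point-to-point, bidirectional, acknowledged stream (a CIS)."""
    peripheral_id: str          # e.g. the address of one earbud or headset
    acknowledged: bool = True   # CIS traffic is acknowledged by the peripheral

@dataclass
class ConnectedIsochronousGroup:
    """A CIG created by the audio source; its CISs share timing so the
    peripherals can render their streams in synchrony."""
    central_id: str
    streams: List[ConnectedIsochronousStream] = field(default_factory=list)

    def add_peripheral(self, peripheral_id: str) -> ConnectedIsochronousStream:
        cis = ConnectedIsochronousStream(peripheral_id)
        self.streams.append(cis)
        return cis

# Example: a smartphone (central) streaming to truly wireless earbuds.
cig = ConnectedIsochronousGroup("smartphone")
cig.add_peripheral("left-earbud")
cig.add_peripheral("right-earbud")
```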
  • the Broadcast Audio BLUETOOTH profile defines a communication technique that enables an audio source device to broadcast an audio stream to an unlimited number of BLUETOOTH audio sink devices.
  • Each of these audio streams is referred to as a Broadcast Isochronous Stream (BIS); multiple streams can be grouped into a Broadcast Isochronous Group (BIG).
  • As a broadcast data stream, data packets transmitted in accordance with the Broadcast Audio profile are not individually addressed to any particular recipient device. These audio broadcasts can be open (in which case any in-range audio sink device may participate) or closed (in which case only audio sink devices with the correct passkey can participate).
  • location-based sharing allows a large public venue to broadcast multiple audio streams, thus allowing any number of listeners to configure their headphones to receive, for example, public address announcements in a particular language.
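  • The broadcast topology can be sketched the same way. The sketch below models a BIG whose packets carry no per-recipient addressing, with an optional passkey distinguishing closed broadcasts from open ones; all names are illustrative rather than drawn from any real BLUETOOTH API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BroadcastIsochronousStream:
    """One unidirectional broadcast stream (a BIS), e.g. one language feed."""
    label: str  # e.g. "announcements-en", "announcements-fr"

@dataclass
class BroadcastIsochronousGroup:
    """A BIG groups several BISes; packets carry no per-recipient address.
    A passkey makes the broadcast closed rather than open."""
    streams: List[BroadcastIsochronousStream] = field(default_factory=list)
    passkey: Optional[str] = None  # None -> open broadcast

    def can_join(self, supplied_passkey: Optional[str]) -> bool:
        return self.passkey is None or supplied_passkey == self.passkey

# A public venue broadcasting announcements in two languages:
big = BroadcastIsochronousGroup(
    streams=[BroadcastIsochronousStream("announcements-en"),
             BroadcastIsochronousStream("announcements-fr")])
assert big.can_join(None)  # open broadcast: any in-range sink may participate
```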
  • While the Direct LE Audio and Broadcast Audio profiles provide enhanced functionality vis-à-vis A2DP, one or the other might be more appropriate for a given application. And in some cases it may still be desirable to instead communicate using Multi-Stream Audio, an A2DP audio stream, or even a completely different communication protocol such as WI-FI.
  • A2DP has broad compatibility across devices but is limited in the number of audio sink devices it supports and has relatively higher power consumption.
  • the Direct LE Audio profile supports bidirectional communication with multiple audio sink devices, although the number of such devices is finite.
  • the Broadcast Audio profile supports an unlimited number of audio sink devices but does not provide a rich bidirectional communication path.
  • a Broadcast Audio data stream will typically transmit at a relatively high (or maximum) allotted power to reach as many potential recipients as possible. Broadcast streams will therefore have a larger range — but higher power consumption — as compared to data streams that are compliant with the Direct LE Audio profile.
  • the inventors have developed techniques for managing and selecting amongst different communication techniques in the context of a wireless audio playback application implemented in a dynamically changing operating environment. Regardless of the particular selected communication technique, encryption mechanisms are optionally implemented to ensure that audio streams are not playable at unauthorized playback devices.
  • a decision between communicating using the Direct LE Audio profile and the Broadcast Audio profile might be made based on either the group size or a prediction of future group size.
  • the profile selection may take into consideration profiles supported by one or more of the connected devices. More generally, choosing a communication profile based on an evaluation of the user’s intended action, and/or based on a prediction or assessment of how the connected devices will be used, allows the user to reap the benefits of the most appropriate profile for a given application, thus further enhancing user experience.
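  • As a rough illustration of this kind of selection logic, the following Python function chooses a profile from the group size, a prediction of future group size, and the profiles the connected devices support. The threshold and profile names are invented for the example; the actual decision matrix (see Figure 9) is not reproduced here.

```python
def select_profile(group_size: int, predicted_group_size: int,
                   supported: set[str]) -> str:
    """Pick a communication profile for an audio group.

    A toy decision rule in the spirit of the decision matrix of Figure 9;
    the real selection logic and thresholds are not specified here.
    """
    # Direct LE Audio supports only a finite number of sinks, so fall back
    # to Broadcast Audio when the group is (or is expected to become) large.
    effective_size = max(group_size, predicted_group_size)
    DIRECT_LE_MAX_SINKS = 4  # illustrative limit, not a spec value

    if "broadcast_audio" in supported and effective_size > DIRECT_LE_MAX_SINKS:
        return "broadcast_audio"
    if "direct_le_audio" in supported:
        return "direct_le_audio"  # bidirectional, lower power, finite sinks
    if "a2dp" in supported:
        return "a2dp"             # broadly compatible single-sink fallback
    return "wifi"                 # e.g. a LAN-based transport

print(select_profile(group_size=2, predicted_group_size=8,
                     supported={"direct_le_audio", "broadcast_audio"}))
```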
  • a first playback device comprises one or more processors.
  • the first playback device further comprises one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network.
  • the first playback device further comprises at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors.
  • the first playback device is configured to receive a command specifying that the first playback device will form part of a group that comprises a second playback device.
  • the second playback device is capable of communicating with the first playback device using at least one communication protocol.
  • the command identifies the second playback device and specifies a group type for the group.
  • the first playback device is further configured to, based on the group type and the at least one communication protocol, select a technique for communicating with the second playback device.
  • the first playback device is further configured to transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected technique.
  • the first playback device is further configured to play back the audio content in synchrony with playback of the audio content by the second playback device.
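  • The flow described above can be summarized in code. The sketch below is a hypothetical rendering of the sequence (receive a group command identifying the second device and the group type, select a communication technique, transmit the audio content, and play it back in synchrony); every name in it is illustrative, and the selection rule is a stand-in for whatever logic an implementation actually uses.

```python
from dataclasses import dataclass

@dataclass
class GroupCommand:
    second_device_id: str   # the command identifies the second playback device
    group_type: str         # e.g. "bonded_pair" or "synchrony_group"

class FirstPlaybackDevice:
    def __init__(self, peer_protocols: dict[str, set[str]]):
        # Protocols each known peer supports, e.g. {"dev2": {"direct_le_audio"}}
        self.peer_protocols = peer_protocols

    def handle_group_command(self, cmd: GroupCommand, audio: bytes) -> None:
        protocols = self.peer_protocols[cmd.second_device_id]
        technique = self.select_technique(cmd.group_type, protocols)
        self.transmit(cmd.second_device_id, audio, technique)
        self.play_in_synchrony(audio)

    def select_technique(self, group_type: str, protocols: set[str]) -> str:
        # Toy rule: bonded pairs favor a bidirectional point-to-point link,
        # larger synchrony groups favor a broadcast transport.
        if group_type == "bonded_pair" and "direct_le_audio" in protocols:
            return "direct_le_audio"
        if "broadcast_audio" in protocols:
            return "broadcast_audio"
        return "wifi"

    def transmit(self, device_id: str, audio: bytes, technique: str) -> None:
        print(f"sending {len(audio)} bytes to {device_id} via {technique}")

    def play_in_synchrony(self, audio: bytes) -> None:
        print("rendering locally, aligned with the second device's playback")

dev = FirstPlaybackDevice({"dev2": {"direct_le_audio"}})
dev.handle_group_command(GroupCommand("dev2", "bonded_pair"), b"\x00" * 1024)
```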
  • Figure 1A is a partial cutaway view of a media playback system 100 distributed in an environment 101 (for example, a house).
  • the media playback system 100 comprises one or more playback devices 110 (identified individually as playback devices 110a-n), one or more network microphone devices 120 (“NMDs”) (identified individually as NMDs 120a-c), and one or more control devices 130 (identified individually as control devices 130a and 130b).
  • a playback device can generally refer to a network device configured to receive, process, and output data of a media playback system.
  • a playback device can be a network device that receives and processes audio content.
  • a playback device includes one or more transducers or speakers powered by one or more amplifiers.
  • a playback device includes one of (or neither of) the speaker and the amplifier.
  • a playback device can comprise one or more amplifiers configured to drive one or more speakers external to the playback device via a corresponding wire or cable.
  • a network microphone device can generally refer to a network device that is configured for audio detection.
  • an NMD is a stand-alone device configured primarily for audio detection.
  • an NMD is incorporated into a playback device (or vice versa).
  • one or more of the playback devices 110 can be configured to play back a morning playlist upon detection of an associated trigger condition (for example, presence of a user in a kitchen, detection of a coffee machine operation, and so forth).
  • the media playback system 100 is configured to play back audio from a first playback device (for example, the playback device 110a) in synchrony with a second playback device (for example, the playback device 110b).
  • the environment 101 comprises a household having several rooms, spaces, and/or playback zones, including (clockwise from upper left) a master bathroom 101a, a master bedroom 101b, a second bedroom 101c, a family room or den 101d, an office 101e, a living room 101f, a dining room 101g, a kitchen 101h, and an outdoor patio 101i. While certain embodiments and examples are described below in the context of a home environment, the technologies described herein may be implemented in other types of environments.
  • the media playback system 100 can be implemented in one or more commercial settings (for example, a restaurant, mall, airport, hotel, a retail or other store), one or more vehicles (for example, a sports utility vehicle, bus, car, a ship, a boat, an airplane, and so forth), multiple environments (for example, a combination of home and vehicle environments), and/or another suitable environment where multi-zone audio may be desirable.
  • the media playback system 100 can comprise one or more playback zones, some of which may correspond to the rooms in the environment 101.
  • the media playback system 100 can be established with one or more playback zones, after which additional zones may be added, or removed, to form, for example, the configuration shown in Figure 1A.
  • Each zone may be given a name according to a different room or space such as the office 101e, master bathroom 101a, master bedroom 101b, the second bedroom 101c, kitchen 101h, dining room 101g, living room 101f, and/or the patio 101i.
  • a single playback zone may include multiple rooms or spaces.
  • a single room or space may include multiple playback zones.
  • the second bedroom 101c, the office 101e, the living room 101f, the dining room 101g, the kitchen 101h, and the outdoor patio 101i each include one playback device 110
  • the master bathroom 101a, master bedroom 101b, and the den 101d each include a plurality of playback devices 110.
  • the playback devices 110l and 110m may be configured, for example, to play back audio content in synchrony as individual ones of playback devices 110, as a bonded playback zone, as a consolidated playback device, and/or any combination thereof.
  • the playback devices 110h-k can be configured, for instance, to play back audio content in synchrony as individual ones of playback devices 110, as one or more bonded playback devices, and/or as one or more consolidated playback devices. Additional details regarding bonded and consolidated playback devices are described below with respect to Figures 1B, 1E, and 1I through 1M.
  • one or more of the playback zones in the environment 101 may each be playing different audio content.
  • a user may be grilling on the patio 101i and listening to hip hop music being played by the playback device 110c while another user is preparing food in the kitchen 101h and listening to classical music played by the playback device 110b.
  • a playback zone may play the same audio content in synchrony with another playback zone.
  • the user may be in the office 101e listening to the playback device 110f playing back the same hip hop music being played back by playback device 110c on the patio 101i.
  • the playback devices 110c and 110f play back the hip hop music in synchrony such that the user perceives that the audio content is being played seamlessly (or at least substantially seamlessly) while moving between different playback zones. Additional details regarding audio playback synchronization among playback devices and/or zones can be found, for example, in U.S. Patent 8,234,395 entitled “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices”, which is incorporated herein by reference in its entirety.

a. Suitable Media Playback System
  • Figure 1B is a schematic diagram of the media playback system 100 and a cloud network 102. For ease of illustration, certain devices of the media playback system 100 and the cloud network 102 are omitted from Figure 1B.
  • One or more communication links 103 (referred to hereinafter as “the links 103”) communicatively couple the media playback system 100 and the cloud network 102.
  • the links 103 can comprise, for example, one or more wired networks, one or more wireless networks, one or more wide area networks (WAN), one or more local area networks (LAN), one or more personal area networks (PAN), one or more telecommunication networks (for example, one or more Global System for Mobiles (GSM) networks, Code Division Multiple Access (CDMA) networks, Long-Term Evolution (LTE) networks, 5G communication networks, and/or other suitable data transmission protocol networks), and so forth.
  • the cloud network 102 is configured to deliver media content (for example, audio content, video content, photographs, social media content, and so forth) to the media playback system 100 in response to a request transmitted from the media playback system 100 via the links 103.
  • the cloud network 102 is further configured to receive data (for example, voice input data) from the media playback system 100 and correspondingly transmit commands and/or media content to the media playback system 100.
  • the cloud network 102 comprises computing devices 106 (identified separately as a first computing device 106a, a second computing device 106b, and a third computing device 106c).
  • the computing devices 106 can comprise individual computers or servers, such as, for example, a media streaming service server storing audio and/or other media content, a voice service server, a social media server, a media playback system control server, and so forth.
  • one or more of the computing devices 106 comprise modules of a single computer or server.
  • one or more of the computing devices 106 comprise one or more modules, computers, and/or servers.
  • While the cloud network 102 is described above in the context of a single cloud network, in some embodiments the cloud network 102 comprises a plurality of cloud networks comprising communicatively coupled computing devices. Furthermore, while the cloud network 102 is shown in Figure 1B as having three of the computing devices 106, in some embodiments, the cloud network 102 comprises fewer (or more than) three computing devices 106.
  • the media playback system 100 is configured to receive media content from the networks 102 via the links 103.
  • the received media content can comprise, for example, a Uniform Resource Identifier (URI) and/or a Uniform Resource Locator (URL).
  • the media playback system 100 can stream, download, or otherwise obtain data from a URI or a URL corresponding to the received media content.
  • a network 104 communicatively couples the links 103 and at least a portion of the devices (for example, one or more of the playback devices 110, NMDs 120, and/or control devices 130) of the media playback system 100.
  • the network 104 can include, for example, a wireless network (for example, a WI-FI network, a BLUETOOTH network, a Z-WAVE network, a ZIGBEE network, and/or other suitable wireless communication protocol network) and/or a wired network (for example, a network comprising Ethernet, Universal Serial Bus (USB), and/or another suitable wired communication).
  • WI-FI can refer to several different communication protocols including, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax, 802.11ay, 802.15, and so forth transmitted at 2.4 Gigahertz (GHz), 5 GHz, and/or another suitable frequency.
  • the network 104 comprises a dedicated communication network that the media playback system 100 uses to transmit messages between individual devices and/or to transmit media content to and from media content sources (for example, one or more of the computing devices 106).
  • the network 104 is configured to be accessible only to devices in the media playback system 100, thereby reducing interference and competition with other household devices.
  • the network 104 comprises an existing household or commercial facility communication network (for example, a household or commercial facility WI-FI network).
  • the links 103 and the network 104 comprise one or more of the same networks.
  • the links 103 and the network 104 comprise a telecommunication network (for example, an LTE network, a 5G network, and so forth).
  • the media playback system 100 is implemented without the network 104, and devices comprising the media playback system 100 can communicate with each other, for example, via one or more direct connections, PANs, telecommunication networks, and/or other suitable communication links.
  • the network 104 may be referred to herein as a “local communication network” to differentiate the network 104 from the cloud network 102 that couples the media playback system 100 to remote devices, such as cloud servers that host cloud services.
  • audio content sources may be regularly added or removed from the media playback system 100.
  • the media playback system 100 performs an indexing of media items when one or more media content sources are updated, added to, and/or removed from the media playback system 100.
  • the media playback system 100 can scan identifiable media items in some or all folders and/or directories accessible to the playback devices 110, and generate or update a media content database comprising metadata (for example, title, artist, album, track length, and so forth) and other associated information (for example, URIs, URLs, and so forth) for each identifiable media item found.
  • the media content database is stored on one or more of the playback devices 110, network microphone devices 120, and/or control devices 130.
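  • A minimal sketch of such an indexing pass is shown below, assuming a filesystem source; it walks accessible folders and builds a URI-keyed database. Real metadata extraction (title, artist, album, track length) is stubbed out with placeholders.

```python
import os
from typing import Dict

AUDIO_EXTENSIONS = {".mp3", ".flac", ".aac", ".wav"}

def index_media(root: str) -> Dict[str, dict]:
    """Walk folders accessible to the playback devices and build a
    metadata database keyed by URI. Tag reading is stubbed out; a real
    implementation would extract title, artist, album, track length."""
    database = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS:
                path = os.path.join(dirpath, name)
                database[f"file://{path}"] = {
                    "title": os.path.splitext(name)[0],  # placeholder metadata
                    "source": path,
                }
    return database

# Example (hypothetical path): database = index_media("/media/music")
```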
  • the playback devices 110l and 110m comprise a group 107a.
  • the playback devices 110l and 110m can be positioned in different rooms and be grouped together in the group 107a on a temporary or permanent basis based on user input received at the control device 130a and/or another control device 130 in the media playback system 100.
  • the playback devices 110l and 110m can be configured to play back the same or similar audio content in synchrony from one or more audio content sources.
  • the group 107a comprises a bonded zone in which the playback devices 110l and 110m comprise left audio and right audio channels, respectively, of multi-channel audio content, thereby producing or enhancing a stereo effect of the audio content.
  • the group 107a includes additional playback devices 110.
  • the media playback system 100 omits the group 107a and/or other grouped arrangements of the playback devices 110. Additional details regarding groups and other arrangements of playback devices are described in further detail below with respect to Figures 1I through 1M.
  • the media playback system 100 includes the NMDs 120a and 120b, each comprising one or more microphones configured to receive voice utterances from a user.
  • the NMD 120a is a standalone device and the NMD 120b is integrated into the playback device 110n.
  • the NMD 120a for example, is configured to receive voice input 121 from a user 123.
  • the NMD 120a transmits data associated with the received voice input 121 to a voice assistant service (VAS) configured to (i) process the received voice input data and (ii) facilitate one or more operations on behalf of the media playback system 100.
  • the computing device 106c comprises one or more modules and/or servers of a VAS (for example, a VAS operated by one or more of SONOS, AMAZON, GOOGLE, APPLE, MICROSOFT, and so forth).
  • the computing device 106c can receive the voice input data from the NMD 120a via the network 104 and the links 103.
  • In response to receiving the voice input data, the computing device 106c processes the voice input data (that is, “Play Hey Jude by The Beatles”), and determines that the processed voice input includes a command to play a song (for example, “Hey Jude”). In some embodiments, after processing the voice input, the computing device 106c accordingly transmits commands to the media playback system 100 to play back “Hey Jude” by the Beatles from a suitable media service (for example, via one or more of the computing devices 106) on one or more of the playback devices 110. In other embodiments, the computing device 106c may be configured to interface with media services on behalf of the media playback system 100.
  • in such embodiments, after processing the voice input, instead of the computing device 106c transmitting commands to the media playback system 100 causing the media playback system 100 to retrieve the requested media from a suitable media service, the computing device 106c itself causes a suitable media service to provide the requested media to the media playback system 100 in accordance with the user’s voice utterance.
b. Suitable Playback Devices

  • Figure 1C is a block diagram of the playback device 110a comprising an input/output 111.
  • the input/output 111 can include an analog I/O 111a (for example, one or more wires, cables, and/or other suitable communication links configured to carry analog signals) and/or a digital I/O 111b (for example, one or more wires, cables, or other suitable communication links configured to carry digital signals).
  • the analog I/O 111a is an audio line-in input connection comprising, for example, an auto-detecting 3.5 mm audio line-in connection.
  • the digital I/O 111b comprises a Sony/Philips Digital Interface Format (S/PDIF) communication interface and/or cable and/or a Toshiba Link (TOSLINK) cable.
  • the digital I/O 111b comprises a High-Definition Multimedia Interface (HDMI) interface and/or cable.
  • the digital I/O 111b includes one or more wireless communication links comprising, for example, a radio frequency (RF), infrared, WI-FI, BLUETOOTH, or another suitable communication link.
  • the analog I/O 111a and the digital I/O 111b comprise interfaces (for example, ports, plugs, jacks, and so forth) configured to receive connectors of cables transmitting analog and digital signals, respectively, without necessarily including cables.
  • the playback device 110a can receive media content (for example, audio content comprising music and/or other sounds) from a local audio source 105 via the input/output 111 (for example, a cable, a wire, a PAN, a BLUETOOTH connection, an ad hoc wired or wireless communication network, and/or another suitable communication link).
  • the local audio source 105 can comprise, for example, a mobile device (for example, a smartphone, a tablet, a laptop computer, and so forth) or another suitable audio component (for example, a television, a desktop computer, an amplifier, a phonograph (such as an LP turntable), a Blu-ray player, a memory storing digital media files, and so forth).
  • the local audio source 105 includes local music libraries on a smartphone, a computer, a network-attached storage (NAS), and/or another suitable device configured to store media files.
  • one or more of the playback devices 110, NMDs 120, and/or control devices 130 comprise the local audio source 105.
  • the media playback system omits the local audio source 105 altogether.
  • the playback device 110a does not include an input/output 111 and receives all audio content via the network 104.
  • the playback device 110a further comprises electronics 112, a user interface 113 (for example, one or more buttons, knobs, dials, touch-sensitive surfaces, displays, touchscreens, and so forth), and one or more transducers 114 (referred to hereinafter as “the transducers 114”).
  • the electronics 112 are configured to receive audio from an audio source (for example, the local audio source 105) via the input/output 111 or one or more of the computing devices 106a-c via the network 104 (Figure 1B), amplify the received audio, and output the amplified audio for playback via one or more of the transducers 114.
  • the playback device 110a optionally includes one or more microphones 115 (for example, a single microphone, a plurality of microphones, a microphone array) (hereinafter referred to as “the microphones 115”).
  • the playback device 110a having one or more of the optional microphones 115 can operate as an NMD configured to receive voice input from a user and correspondingly perform one or more operations based on the received voice input.
  • the electronics 112 comprise one or more processors 112a (referred to hereinafter as “the processors 112a”), memory 112b, software components 112c, a network interface 112d, one or more audio processing components 112g (referred to hereinafter as “the audio components 112g”), one or more audio amplifiers 112h (referred to hereinafter as “the amplifiers 112h”), and power 112i (for example, one or more power supplies, power cables, power receptacles, batteries, induction coils, Power-over-Ethernet (PoE) interfaces, and/or other suitable sources of electric power).
  • the electronics 112 optionally include one or more other components 112j (for example, one or more sensors, video displays, touchscreens, battery charging bases, and so forth).
  • the processors 112a can comprise clock-driven computing component(s) configured to process data, and the memory 112b can comprise a computer-readable medium (for example, a tangible, non-transitory computer-readable medium loaded with one or more of the software components 112c) configured to store instructions for performing various operations and/or functions.
  • the processors 112a are configured to execute the instructions stored on the memory 112b to perform one or more of the operations.
  • the operations can include, for example, causing the playback device 110a to retrieve audio data from an audio source (for example, one or more of the computing devices 106a-c (Figure 1B)), and/or another one of the playback devices 110.
  • the operations further include causing the playback device 110a to send audio data to another one of the playback devices 110 and/or another device (for example, one of the NMDs 120).
  • Certain embodiments include operations causing the playback device 110a to pair with another of the one or more playback devices 110 to enable a multi-channel audio environment (for example, a stereo pair, a bonded zone, and so forth).
  • the processors 112a can be further configured to perform operations causing the playback device 110a to synchronize playback of audio content with another of the one or more playback devices 110.
  • a listener will preferably be unable to perceive time-delay differences between playback of the audio content by the playback device 110a and the one or more other playback devices 110. Additional details regarding audio playback synchronization among playback devices can be found, for example, in U.S. Patent 8,234,395, which was incorporated by reference above.
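  • One common ingredient of such synchronization is scheduling a shared start time against each device's clock. The sketch below illustrates the idea under the assumption that per-device clock offsets relative to a leader are already known; it is a simplified illustration, not the mechanism of U.S. Patent 8,234,395.

```python
import time

def schedule_synchronous_start(clock_offsets_s: dict[str, float],
                               lead_time_s: float = 0.5) -> dict[str, float]:
    """For each device, compute the local clock time at which to begin
    rendering so that all devices start together.

    clock_offsets_s maps device id -> (device clock minus leader clock),
    assumed to have been measured by some clock-exchange mechanism.
    """
    start_on_leader = time.time() + lead_time_s  # headroom for distribution
    return {device: start_on_leader + offset
            for device, offset in clock_offsets_s.items()}

# Example: two followers whose clocks differ slightly from the leader's.
print(schedule_synchronous_start({"110a": 0.0, "110b": 0.013, "110c": -0.007}))
```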
  • the memory 112b is further configured to store data associated with the playback device 110a, such as one or more zones and/or zone groups of which the playback device 110a is a member, audio sources accessible to the playback device 110a, and/or a playback queue that the playback device 110a (and/or another of the one or more playback devices) can be associated with.
  • the stored data can comprise one or more state variables that are periodically updated and used to describe a state of the playback device 110a.
  • the memory 112b can also include data associated with a state of one or more of the other devices (for example, the playback devices 110, NMDs 120, control devices 130) of the media playback system 100.
  • the state data is shared during predetermined intervals of time (for example, every 5 seconds, every 10 seconds, every 60 seconds, and so forth) among at least a portion of the devices of the media playback system 100, so that one or more of the devices have the most recent data associated with the media playback system 100.
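  • A toy version of this periodic state sharing appears below. The state fields and the 10-second default are illustrative (the interval could equally be 5 or 60 seconds, as noted above), and `send` stands in for whatever transport the system uses.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PlayerState:
    """Illustrative state variables; real devices also track zone
    membership, accessible audio sources, the playback queue, etc."""
    device_id: str
    zone: str
    volume: int
    now_playing: str

def share_state_periodically(state: PlayerState, send, interval_s: int = 10):
    """Publish this device's state at a fixed interval so that other
    devices hold the most recent data about the system. `send` is any
    callable that delivers bytes to the other devices (the transport is
    not modeled). Runs forever, matching the periodic sharing above."""
    while True:
        send(json.dumps(asdict(state)).encode())
        time.sleep(interval_s)
```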
  • the network interface 112d is configured to facilitate a transmission of data between the playback device 110a and one or more other devices on a data network such as, for example, the links 103 and/or the network 104 ( Figure 1 B).
  • the network interface 112d is configured to transmit and receive data corresponding to media content (for example, audio content, video content, text, photographs) and other signals (for example, non-transitory signals) comprising digital packet data including an Internet Protocol (IP)-based source address and/or an IP-based destination address.
  • the network interface 112d can parse the digital packet data such that the electronics 112 properly receive and process the data destined for the playback device 110a.
  • the network interface 112d comprises one or more wireless interfaces 112e (referred to hereinafter as “the wireless interface 112e”).
  • the wireless interface 112e (for example, a suitable interface comprising one or more antennae) can be configured to wirelessly communicate with one or more other devices (for example, one or more of the other playback devices 110, NMDs 120, and/or control devices 130) that are communicatively coupled to the network 104 (Figure 1B) in accordance with a suitable wireless communication protocol (for example, WI-FI, BLUETOOTH, LTE, and so forth).
  • the network interface 112d optionally includes a wired interface 112f (for example, an interface or receptacle configured to receive a network cable such as an Ethernet, a USB-A, USB-C, and/or Thunderbolt cable) configured to communicate over a wired connection with other devices in accordance with a suitable wired communication protocol.
  • the network interface 112d includes the wired interface 112f and excludes the wireless interface 112e.
  • the electronics 112 exclude the network interface 112d altogether and transmit and receive media content and/or other data via another communication path (for example, the input/output 111 ).
  • the audio components 112g are configured to process and/or filter data comprising media content received by the electronics 112 (for example, via the input/output 111 and/or the network interface 112d) to produce output audio signals.
  • the audio processing components 112g comprise, for example, one or more digital-to-analog converters (DACs), audio preprocessing components, audio enhancement components, digital signal processors (DSPs), and/or other suitable audio processing components, modules, circuits, and so forth.
  • one or more of the audio processing components 112g can comprise one or more subcomponents of the processors 112a.
  • the electronics 112 omit the audio processing components 112g.
  • the processors 112a execute instructions stored on the memory 112b to perform audio processing operations to produce the output audio signals.
  • the amplifiers 112h are configured to receive and amplify the audio output signals produced by the audio processing components 112g and/or the processors 112a.
  • the amplifiers 112h can comprise electronic devices and/or components configured to amplify audio signals to levels sufficient for driving one or more of the transducers 114.
  • the amplifiers 112h include one or more switching or class-D power amplifiers.
  • the amplifiers 112h include one or more other types of power amplifiers (for example, linear gain power amplifiers, class-A amplifiers, class-B amplifiers, class-AB amplifiers, class-C amplifiers, class-D amplifiers, class-E amplifiers, class-F amplifiers, class-G amplifiers, class-H amplifiers, and/or another suitable type of power amplifier).
  • the amplifiers 112h comprise a suitable combination of two or more of the foregoing types of power amplifiers.
  • individual ones of the amplifiers 112h correspond to individual ones of the transducers 114.
  • the electronics 112 include a single one of the amplifiers 112h configured to output amplified audio signals to a plurality of the transducers 114. In some other embodiments, the electronics 112 omit the amplifiers 112h.
  • the transducers 114 receive the amplified audio signals from the amplifier 112h and render or output the amplified audio signals as sound (for example, audible sound waves having a frequency between about 20 hertz (Hz) and 20 kilohertz (kHz)).
  • the transducers 114 can comprise a single transducer. In other embodiments, however, the transducers 114 comprise a plurality of audio transducers. In some embodiments, the transducers 114 comprise more than one type of transducer.
  • the transducers 114 can include one or more low frequency transducers (for example, subwoofers, woofers), mid-range frequency transducers (for example, mid-range transducers, mid-woofers), and one or more high frequency transducers (for example, one or more tweeters).
  • low frequency can generally refer to audible frequencies below about 500 Hz
  • mid-range frequency can generally refer to audible frequencies between about 500 Hz and about 2 kHz
  • “high frequency” can generally refer to audible frequencies above 2 kHz.
  • one or more of the transducers 114 comprise transducers that do not adhere to the foregoing frequency ranges.
  • one of the transducers 114 may comprise a midwoofer transducer configured to output sound at frequencies between about 200 Hz and about 5 kHz.
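  • The nominal band boundaries above can be captured in a few lines, which may help when reading the transducer descriptions; the cutoffs mirror the approximate figures given in the text.

```python
def transducer_band(frequency_hz: float) -> str:
    """Classify a frequency into the nominal bands described above."""
    if frequency_hz < 500:
        return "low"        # subwoofers, woofers
    if frequency_hz <= 2000:
        return "mid-range"  # mid-range transducers, mid-woofers
    return "high"           # tweeters

assert transducer_band(80) == "low"
assert transducer_band(1000) == "mid-range"
assert transducer_band(5000) == "high"
```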
  • Sonos presently offers (or has offered) for sale certain playback devices including, for example, a “SONOS ONE”, “PLAY:1”, “PLAY:3”, “PLAY:5”, “PLAYBAR”, “PLAYBASE”, “CONNECT:AMP”, “CONNECT”, “AMP”, “PORT”, and “SUB”.
  • Other suitable playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein.
  • a playback device is not limited to the examples described herein or to Sonos product offerings.
  • one or more playback devices 110 comprise wired or wireless headphones (for example, over-the-ear headphones, on-ear headphones, in-ear earphones, and so forth).
  • one or more of the playback devices 110 comprise a docking station and/or an interface configured to interact with a docking station for personal mobile media playback devices.
  • a playback device may be integral to another device or component such as a television, an LP turntable, a lighting fixture, or some other device for indoor or outdoor use.
  • a playback device omits a user interface and/or one or more transducers.
  • Figure 1D is a block diagram of a playback device 110p comprising the input/output 111 and electronics 112 without the user interface 113 or transducers 114.
  • Figure 1E is a block diagram of a bonded playback device 110q comprising the playback device 110a (Figure 1C) sonically bonded with the playback device 110i (for example, a subwoofer) (Figure 1A).
  • the playback devices 110a and 110i are separate ones of the playback devices 110 housed in separate enclosures.
  • the bonded playback device 110q comprises a single enclosure housing both the playback devices 110a and 110i.
  • the bonded playback device 110q can be configured to process and reproduce sound differently than an unbonded playback device (for example, the playback device 110a of Figure 1C) and/or paired or bonded playback devices (for example, the playback devices 110l and 110m of Figure 1B).
  • the playback device 110a is a full-range playback device configured to render low frequency, midrange frequency, and high frequency audio content
  • the playback device 110i is a subwoofer configured to render low frequency audio content.
  • the playback device 110a, when bonded with the playback device 110i, is configured to render only the mid-range and high frequency components of a particular audio content, while the playback device 110i renders the low frequency component of the particular audio content.
  • the bonded playback device 110q includes additional playback devices and/or another bonded playback device. Additional playback device embodiments are described in further detail below with respect to Figures 2A through 3D.

c. Suitable Network Microphone Devices (NMDs)
  • Figure 1F is a block diagram of the NMD 120a (Figures 1A and 1B).
  • the NMD 120a includes one or more voice processing components 124 (hereinafter “the voice components 124”) and several components described with respect to the playback device 110a (Figure 1C) including the processors 112a, the memory 112b, and the microphones 115.
  • the NMD 120a optionally comprises other components also included in the playback device 110a (Figure 1C), such as the user interface 113 and/or the transducers 114.
  • the NMD 120a includes the processor 112a and the memory 112b (Figure 1C), while omitting one or more other components of the electronics 112.
  • the NMD 120a includes additional components (for example, one or more sensors, cameras, thermometers, barometers, hygrometers, and so forth).
  • an NMD can be integrated into a playback device.
  • Figure 1G is a block diagram of a playback device 110r comprising an NMD 120d.
  • the playback device 110r can comprise many or all of the components of the playback device 110a and further include the microphones 115 and voice processing components 124 (Figure 1F).
  • the playback device 110r optionally includes an integrated control device 130c.
  • the control device 130c can comprise, for example, a user interface (for example, the user interface 113 of Figure 1C) configured to receive user input (for example, touch input, voice input, and so forth) without a separate control device.
  • the playback device 110r receives commands from another control device (for example, the control device 130a of Figure 1B). Additional NMD embodiments are described in further detail below with respect to Figures 3A through 3F.
  • the microphones 115 are configured to acquire, capture, and/or receive sound from an environment (for example, the environment 101 of Figure 1A) and/or a room in which the NMD 120a is positioned.
  • the received sound can include, for example, vocal utterances, audio played back by the NMD 120a and/or another playback device, background voices, ambient sounds, and so forth.
  • the microphones 115 convert the received sound into electrical signals to produce microphone data.
  • the voice processing components 124 receive and analyze the microphone data to determine whether a voice input is present in the microphone data.
  • the voice input can comprise, for example, an activation word followed by an utterance including a user request.
  • an activation word is a word or other audio cue signifying a user voice input. For instance, in querying the AMAZON VAS, a user might speak the activation word “Alexa”. Other examples include “Ok, Google” for invoking the GOOGLE VAS and “Hey, Siri” for invoking the APPLE VAS.
  • voice processing components 124 monitor the microphone data for an accompanying user request in the voice input.
  • the user request may include, for example, a command to control a third-party device, such as a thermostat (for example, NEST thermostat), an illumination device (for example, a PHILIPS HUE lighting device), or a media playback device (for example, a SONOS playback device).
  • a user might speak the activation word “Alexa” followed by the utterance “set the thermostat to 68 degrees” to set a temperature in a home (for example, the environment 101 of Figure 1A).
  • the user might speak the same activation word followed by the utterance “turn on the living room” to turn on illumination devices in a living room area of the home.
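  • The activation-word-plus-utterance structure can be illustrated with a small text-only parser. Real NMDs detect the activation word acoustically in the microphone data; this sketch, with an invented function name and a hard-coded word list, only demonstrates how a transcript splits into the two parts.

```python
ACTIVATION_WORDS = ("alexa", "ok, google", "hey, siri")  # examples above

def parse_voice_input(transcript: str):
    """Split a transcript into (activation word, user request), mirroring
    the activation-word-plus-utterance structure described above."""
    lowered = transcript.lower()
    for word in ACTIVATION_WORDS:
        if lowered.startswith(word):
            return word, transcript[len(word):].strip(" ,")
    return None, None  # no activation word: not a voice input

print(parse_voice_input("Alexa, set the thermostat to 68 degrees"))
# ('alexa', 'set the thermostat to 68 degrees')
```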
  • the user may similarly speak an activation word followed by a request to play a particular song, an album, or a playlist of music on a playback device in the home. Additional description regarding receiving and processing voice input data can be found in further detail below with respect to Figures 3A through 3F.

d. Suitable Control Devices
  • Figure 1H is a partial schematic diagram of the control device 130a (Figures 1A and 1B).
  • the term “control device” can be used interchangeably with “controller” or “control system”.
  • the control device 130a is configured to receive user input related to the media playback system 100 and, in response, cause one or more devices in the media playback system 100 to perform an action(s) or operation(s) corresponding to the user input.
  • the control device 130a comprises a smartphone (for example, an iPhone™, an Android phone, and so forth) on which media playback system controller application software is installed.
  • the control device 130a comprises, for example, a tablet (for example, an iPad™), a computer (for example, a laptop computer, a desktop computer, and so forth), and/or another suitable device (for example, a television, an automobile audio head unit, an IoT device, and so forth).
  • the control device 130a comprises a dedicated controller for the media playback system 100.
• the control device 130a is integrated into another device in the media playback system 100 (for example, one or more of the playback devices 110, NMDs 120, and/or other suitable devices configured to communicate over a network).
  • the control device 130a includes electronics 132, a user interface 133, one or more speakers 134, and one or more microphones 135.
  • the electronics 132 comprise one or more processors 132a (referred to hereinafter as “the processors 132a”), a memory 132b, software components 132c, and a network interface 132d.
  • the processor 132a can be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system 100.
  • the memory 132b can comprise data storage that can be loaded with one or more of the software components executable by the processor 132a to perform those functions.
  • the software components 132c can comprise applications and/or other executable software configured to facilitate control of the media playback system 100.
  • the memory 132b can be configured to store, for example, the software components 132c, media playback system controller application software, and/or other data associated with the media playback system 100 and the user.
  • the network interface 132d is configured to facilitate network communications between the control device 130a and one or more other devices in the media playback system 100, and/or one or more remote devices.
• the network interface 132d is configured to operate according to one or more suitable communication industry standards (for example, infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G, LTE, and so forth).
• the network interface 132d can be configured, for example, to transmit data to and/or receive data from the playback devices 110, the NMDs 120, other ones of the control devices 130, one of the computing devices 106 of Figure 1B, devices comprising one or more other media playback systems, and so forth.
  • the transmitted and/or received data can include, for example, playback device control commands, state variables, playback zone and/or zone group configurations.
  • the network interface 132d can transmit a playback device control command (for example, volume control, audio playback control, audio content selection, and so forth) from the control device 130a to one or more of the playback devices 110.
• the network interface 132d can also transmit and/or receive configuration changes such as, for example, adding/removing one or more playback devices 110 to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or consolidated player, separating one or more playback devices from a bonded or consolidated player, among others. Additional description of zones and groups can be found below with respect to Figures 1I through 1M.
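To make the kinds of payloads concrete, here is a hypothetical sketch of a playback device control command and a zone configuration change. The document does not specify a wire format, so the field names, identifier strings, and JSON encoding are all assumptions.

```python
import json

# Hypothetical message shapes; field names and JSON encoding are assumptions.
volume_command = {
    "type": "playback_device_control",
    "target": "playback_device_110g",   # illustrative device identifier
    "command": "set_volume",
    "value": 35,
}

zone_group_change = {
    "type": "configuration_change",
    "action": "add_zone_to_zone_group",
    "zone": "Dining Room",
    "zone_group": "Dining + Kitchen",
}

payload = json.dumps([volume_command, zone_group_change])
```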
  • the user interface 133 is configured to receive user input and can facilitate control of the media playback system 100.
  • the user interface 133 includes media content art 133a (for example, album art, lyrics, videos, and so forth), a playback status indicator 133b (for example, an elapsed and/or remaining time indicator), media content information region 133c, a playback control region 133d, and a zone indicator 133e.
  • the media content information region 133c can include a display of relevant information (for example, title, artist, album, genre, release year, and so forth) about media content currently playing and/or media content in a queue or playlist.
  • the playback control region 133d can include selectable (for example, via touch input and/or via a cursor or another suitable selector) icons to cause one or more playback devices in a selected playback zone or zone group to perform playback actions such as, for example, play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross fade mode, and so forth.
  • the playback control region 133d may also include selectable icons to modify equalization settings, playback volume, and/or other suitable playback actions.
• the user interface 133 comprises a display presented on a touch screen interface of a smartphone (for example, an iPhone™, an Android phone, and so forth). In some embodiments, however, user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.
  • the one or more speakers 134 can be configured to output sound to the user of the control device 130a.
  • the one or more speakers comprise individual transducers configured to correspondingly output low frequencies, mid-range frequencies, and/or high frequencies.
  • the control device 130a is configured as a playback device (for example, one of the playback devices 110).
  • the control device 130a is configured as an NMD (for example, one of the NMDs 120), receiving voice commands and other sounds via the one or more microphones 135.
  • the one or more microphones 135 can comprise, for example, one or more condenser microphones, electret condenser microphones, dynamic microphones, and/or other suitable types of microphones or transducers. In some embodiments, two or more of the microphones 135 are arranged to capture location information of an audio source (for example, voice, audible sound, and so forth) and/or configured to facilitate filtering of background noise. Moreover, in certain embodiments, the control device 130a is configured to operate as a playback device and an NMD. In other embodiments, however, the control device 130a omits the one or more speakers 134 and/or the one or more microphones 135.
• control device 130a may comprise a device (for example, a thermostat, an IoT device, a network device, and so forth) comprising a portion of the electronics 132 and the user interface 133 (for example, a touch screen) without any speakers or microphones. Additional control device embodiments are described in further detail below with respect to Figures 4A through 4D and 5.

e. Suitable Playback Device Configurations
• Figures 1I through 1M show example configurations of playback devices in zones and zone groups.
  • a single playback device may belong to a zone.
  • the playback device 110g in the second bedroom 101c may belong to Zone C.
• multiple playback devices may be “bonded” to form a “bonded pair” which together form a single zone. For example, the playback device 110l (for example, a left playback device) may be bonded with the playback device 110m (for example, a right playback device).
  • Bonded playback devices may have different playback responsibilities (for example, channel responsibilities).
  • multiple playback devices may be merged to form a single zone.
• the playback device 110h (for example, a front playback device) may be merged with the playback device 110i (for example, a subwoofer), and the playback devices 110j and 110k (for example, left and right surround speakers, respectively) to form a single Zone D.
  • the playback devices 110b and 110d can be merged to form a merged group or a zone group 108b.
  • the merged playback devices 110b and 110d may not be specifically assigned different playback responsibilities. That is, the merged playback devices 110b and 110d may, aside from playing audio content in synchrony, each play audio content as they would if they were not merged.
  • Zone A may be provided as a single entity named Master Bathroom.
  • Zone B may be provided as a single entity named Master Bedroom.
  • Zone C may be provided as a single entity named Second Bedroom.
  • Playback devices that are bonded may have different playback responsibilities, such as responsibilities for certain audio channels.
• the playback devices 110l and 110m may be bonded so as to produce or enhance a stereo effect of audio content.
• the playback device 110l may be configured to play a left channel audio component
  • the playback device 110m may be configured to play a right channel audio component.
  • stereo bonding may be referred to as “pairing”.
  • bonded playback devices may have additional and/or different respective speaker drivers.
  • the playback device 110h named Front may be bonded with the playback device 110i named SUB.
• the Front device 110h can be configured to render a range of mid to high frequencies and the SUB device 110i can be configured to render low frequencies. When unbonded, however, the Front device 110h can be configured to render a full range of frequencies.
• Figure 1K shows the Front and SUB devices 110h and 110i further bonded with Left and Right playback devices 110j and 110k, respectively.
  • the Left and Right devices 110j and 110k can be configured to form surround or “satellite” channels of a home theater system.
• the bonded playback devices 110h, 110i, 110j, and 110k may form a single Zone D (Figure 1M).
• Playback devices that are merged may not have assigned playback responsibilities, and may each render the full range of audio content the respective playback device is capable of. Nevertheless, merged devices may be represented as a single UI entity (that is, a zone, as discussed above). For instance, the playback devices 110a and 110n in the master bathroom have the single UI entity of Zone A. In one embodiment, the playback devices 110a and 110n may each output, in synchrony, the full range of audio content that each is capable of.
  • an NMD is bonded or merged with another device so as to form a zone.
  • the NMD 120b may be bonded with the playback device 110e, which together form Zone F, named Living Room.
  • a stand-alone network microphone device may be in a zone by itself. In other embodiments, however, a stand-alone network microphone device may not be associated with a zone. Additional details regarding associating network microphone devices and playback devices as designated or default devices may be found, for example, in subsequently referenced U.S. Patent 10,499,146.
  • Zones of individual, bonded, and/or merged devices may be grouped to form a zone group.
  • Zone A may be grouped with Zone B to form a zone group 108a that includes the two zones.
  • Zone G may be grouped with Zone H to form the zone group 108b.
• Zone A may be grouped with one or more other Zones C-I.
• the Zones A-I may be grouped and ungrouped in numerous ways. For example, three, four, five, or more (for example, all) of the Zones A-I may be grouped.
  • the zones of individual and/or bonded playback devices may play back audio in synchrony with one another, as described in previously referenced U.S. Patent 8,234,395. Playback devices may be dynamically grouped and ungrouped to form new or different groups that synchronously play back audio content.
• the name of a zone group may be the default name of a zone within the group or a combination of the names of the zones within the zone group.
• zone group 108b can be assigned a name such as “Dining + Kitchen”, as shown in Figure 1M.
  • a zone group may be given a unique name selected by a user.
• Certain data may be stored in a memory of a playback device (for example, the memory 112b of Figure 1C) as one or more state variables that are periodically updated and used to describe the state of a playback zone, the playback device(s), and/or a zone group associated therewith.
  • the memory may also include the data associated with the state of the other devices of the media system, and shared from time to time among the devices so that one or more of the devices have the most recent data associated with the system.
  • the memory may store instances of various variable types associated with the states.
  • Variable instances may be stored with identifiers (for example, tags) corresponding to type.
  • certain identifiers may be a first type “a1” to identify playback device(s) of a zone, a second type “b1” to identify playback device(s) that may be bonded in the zone, and a third type “c1” to identify a zone group to which the zone may belong.
• identifiers associated with the second bedroom 101c may indicate that the playback device is the only playback device of the Zone C and not in a zone group.
  • Identifiers associated with the den may indicate that the den is not grouped with other zones but includes bonded playback devices 110h-110k.
  • Identifiers associated with the dining room may indicate that the dining room is part of the Dining + Kitchen zone group 108b and that devices 110b and 110d are grouped ( Figure 1 L).
  • Identifiers associated with the kitchen may indicate the same or similar information by virtue of the kitchen being part of the Dining + Kitchen zone group 108b.
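The identifier scheme just described can be pictured with a small sketch. The dictionary layout below is an assumption used purely for illustration, not a storage format defined in this document; it encodes the Second Bedroom, den, dining room, and kitchen examples using the "a1"/"b1"/"c1" tags.

```python
# Assumed in-memory layout for the tagged state variables described above:
# "a1" = playback device(s) of the zone, "b1" = bonded device(s) in the zone,
# "c1" = zone group to which the zone belongs (None if ungrouped).
zone_state = {
    "Second Bedroom": {"a1": ["110g"], "b1": [], "c1": None},
    "Den": {
        "a1": ["110h", "110i", "110j", "110k"],
        "b1": ["110h", "110i", "110j", "110k"],  # bonded home theater devices
        "c1": None,
    },
    "Dining Room": {"a1": ["110b"], "b1": [], "c1": "Dining + Kitchen"},
    "Kitchen": {"a1": ["110d"], "b1": [], "c1": "Dining + Kitchen"},
}
```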
  • Other example zone variables and identifiers are described below.
  • the memory may store variables or identifiers representing other associations of zones and zone groups, such as identifiers associated with areas, as shown in Figure 1M.
  • An area may involve a cluster of zone groups and/or zones not within a zone group.
• Figure 1M shows an Upper Area 109a including Zones A-D and I, and a Lower Area 109b including Zones E-I.
• an area may be used to invoke a cluster of zone groups and/or zones that share one or more zones and/or zone groups of another cluster. In another aspect, this differs from a zone group, which does not share a zone with another zone group. Further examples of techniques for implementing areas may be found, for example, in U.S. Patent 10,712,997, filed 21 August 2017, and titled “Room Association Based on Name”, and U.S. Patent 8,483,853, filed 11 September 2007, and titled “Controlling and manipulating groupings in a multi-zone media system”.
  • the media playback system 100 may not implement areas, in which case the system may not store variables associated with areas.
  • FIG. 2A is a front isometric view of a playback device 210 configured in accordance with aspects of the disclosed technology.
  • Figure 2B is a front isometric view of the playback device 210 without a grille 216e.
  • Figure 2C is an exploded view of the playback device 210.
• the playback device 210 comprises a housing 216 that includes an upper portion 216a, a right or first side portion 216b, a lower portion 216c, a left or second side portion 216d, the grille 216e, and a rear portion 216f.
  • a plurality of fasteners 216g (for example, one or more screws, rivets, clips) attaches a frame 216h to the housing 216.
  • a cavity 216j (Figure 2C) in the housing 216 is configured to receive the frame 216h and electronics 212.
  • the frame 216h is configured to carry a plurality of transducers 214 (identified individually in Figure 2B as transducers 214a-f).
• the electronics 212 (for example, the electronics 112 of Figure 1C) are configured to receive audio content from an audio source and send electrical signals corresponding to the audio content to the transducers 214 for playback.
• the transducers 214 are configured to receive the electrical signals from the electronics 212, and further configured to convert the received electrical signals into audible sound during playback.
• the transducers 214a-c (for example, tweeters) can be configured to output sound at higher frequencies (for example, sound waves having a frequency greater than about 2 kHz).
• the transducers 214d-f can be configured to output sound at frequencies lower than the transducers 214a-c (for example, sound waves having a frequency lower than about 2 kHz).
  • the playback device 210 includes a number of transducers different than those illustrated in Figures 2A through 2C.
  • a filter is axially aligned with the transducer 214b.
  • the filter can be configured to desirably attenuate a predetermined range of frequencies that the transducer 214b outputs to improve sound quality and a perceived sound stage output collectively by the transducers 214.
  • the playback device 210 omits the filter.
  • the playback device 210 includes one or more additional filters aligned with the transducers 214b and/or at least another of the transducers 214.
  • Figures 3A and 3B are front and right isometric side views, respectively, of an NMD 320 configured in accordance with embodiments of the disclosed technology.
  • Figure 3C is an exploded view of the NMD 320.
  • Figure 3D is an enlarged view of a portion of Figure 3B including a user interface 313 of the NMD 320.
  • the NMD 320 includes a housing 316 comprising an upper portion 316a, a lower portion 316b and an intermediate portion 316c (for example, a grille).
• a plurality of ports, holes or apertures 316d in the upper portion 316a allow sound to pass through to one or more microphones 315 (Figure 3C) positioned within the housing 316.
  • the one or more microphones 315 are configured to receive sound via the apertures 316d and produce electrical signals based on the received sound.
• a frame 316e (Figure 3C) of the housing 316 surrounds cavities 316f and 316g configured to house, respectively, a first transducer 314a (for example, a tweeter) and a second transducer 314b (for example, a mid-woofer, a midrange speaker, a woofer).
• the NMD 320 includes a single transducer, or more than two (for example, three, five, six) transducers. In certain embodiments, the NMD 320 omits the transducers 314a and 314b altogether.
• Electronics 312 (Figure 3C) includes components configured to drive the transducers 314a and 314b, and further configured to analyze audio data corresponding to the electrical signals produced by the one or more microphones 315.
• the electronics 312 comprises many or all of the components of the electronics 112 described above with respect to Figure 1C.
• the electronics 312 includes components described above with respect to Figure 1F such as, for example, the one or more processors 112a, the memory 112b, the software components 112c, the network interface 112d, and so forth.
  • the electronics 312 includes additional suitable components (for example, proximity or other sensors).
  • the user interface 313 includes a plurality of control surfaces (for example, buttons, knobs, capacitive surfaces) including a first control surface 313a (for example, a previous control), a second control surface 313b (for example, a next control), and a third control surface 313c (for example, a play and/or pause control) that can be adjusted by a user 323.
• a fourth control surface 313d is configured to receive touch input corresponding to activation and deactivation of the one or more microphones 315.
  • a first indicator 313e (for example, one or more light emitting diodes (LEDs) or another suitable illuminator) can be configured to illuminate only when the one or more microphones 315 are activated.
  • a second indicator 313f (for example, one or more LEDs) can be configured to remain solid during normal operation and to blink or otherwise change from solid to indicate a detection of voice activity.
  • the user interface 313 includes additional or fewer control surfaces and illuminators.
  • the user interface 313 includes the first indicator 313e, omitting the second indicator 313f.
  • the NMD 320 comprises a playback device and a control device, and the user interface 313 comprises the user interface of the control device.
  • the NMD 320 is configured to receive voice commands from one or more adjacent users via the one or more microphones 315.
  • the one or more microphones 315 can acquire, capture, or record sound in a vicinity (for example, a region within 10 m or less of the NMD 320) and transmit electrical signals corresponding to the recorded sound to the electronics 312.
  • the electronics 312 can process the electrical signals and can analyze the resulting audio data to determine a presence of one or more voice commands (for example, one or more activation words).
• the NMD 320 is configured to transmit a portion of the recorded audio data to another device and/or a remote server (for example, one or more of the computing devices 106 of Figure 1B) for further analysis.
  • the remote server can analyze the audio data, determine an appropriate action based on the voice command, and transmit a message to the NMD 320 to perform the appropriate action. For instance, a user may speak “Sonos, play Michael Jackson”.
• the NMD 320 can, via the one or more microphones 315, record the user’s voice utterance, determine the presence of a voice command, and transmit the audio data having the voice command to a remote server (for example, one or more of the remote computing devices 106 of Figure 1B, one or more servers of a VAS and/or another suitable service).
  • the remote server can analyze the audio data and determine an action corresponding to the command.
  • the remote server can then transmit a command to the NMD 320 to perform the determined action (for example, play back audio content related to Michael Jackson).
  • the NMD 320 can receive the command and play back the audio content related to Michael Jackson from a media content source.
• suitable content sources can include a device or storage communicatively coupled to the NMD 320 via a LAN (for example, the network 104 of Figure 1B), a remote server (for example, one or more of the remote computing devices 106 of Figure 1B), and so forth.
  • the NMD 320 determines and/or performs one or more actions corresponding to the one or more voice commands without intervention or involvement of an external device, computer, or server.
  • FIG. 3E is a functional block diagram showing additional features of the NMD 320 in accordance with aspects of the disclosure.
• the NMD 320 includes components configured to facilitate voice command capture including voice activity detector component(s) 312k, beam former components 312l, acoustic echo cancellation (AEC) and/or self-sound suppression components 312m, activation word detector components 312n, and voice/speech conversion components 312o (for example, voice-to-text and text-to-voice).
  • the foregoing components 312k-312o are shown as separate components. In some embodiments, however, one or more of the components 312k-312o are subcomponents of the processors 112a.
• the beamforming and self-sound suppression components 312l and 312m are configured to detect an audio signal and determine aspects of voice input represented in the detected audio signal, such as the direction, amplitude, frequency spectrum, and so forth.
• the voice activity detector components 312k are operably coupled with the beamforming and AEC components 312l and 312m and are configured to determine a direction and/or directions from which voice activity is likely to have occurred in the detected audio signal.
• Potential speech directions can be identified by monitoring metrics which distinguish speech from other sounds. Such metrics can include, for example, energy within the speech band relative to background noise and entropy within the speech band, which is a measure of spectral structure. As those of ordinary skill in the art will appreciate, speech typically has a lower entropy than most common background noise.
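The energy and entropy metrics mentioned above can be computed per audio frame roughly as follows. This is a minimal sketch: the 300-3400 Hz speech band is the conventional telephony band used here as an assumption, and any decision thresholds are application-specific.

```python
import numpy as np

def speech_band_metrics(frame: np.ndarray, sample_rate: int = 16000):
    """Energy and spectral entropy within an assumed speech band (~300-3400 Hz).

    Lower entropy suggests the structured spectrum of speech; higher entropy
    suggests broadband background noise.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = spectrum[(freqs >= 300) & (freqs <= 3400)]
    energy = float(band.sum())
    p = band / band.sum() if band.sum() > 0 else band  # normalize to a distribution
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return energy, entropy
```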
  • the activation word detector components 312n are configured to monitor and analyze received audio to determine if any activation words (for example, wake words) are present in the received audio.
  • the activation word detector components 312n may analyze the received audio using an activation word detection algorithm. If the activation word detector 312n detects an activation word, the NMD 320 may process voice input contained in the received audio.
  • Example activation word detection algorithms accept audio as input and provide an indication of whether an activation word is present in the audio.
  • Many first- and third-party activation word detection algorithms are known and commercially available. For instance, operators of a voice service may make their algorithm available for use in third-party devices. Alternatively, an algorithm may be trained to detect certain activation words.
  • the activation word detector 312n runs multiple activation word detection algorithms on the received audio simultaneously (or substantially simultaneously).
• different voice services (for example, AMAZON’S ALEXA, APPLE’S SIRI, or MICROSOFT’S CORTANA) each use a different activation word for invoking their respective voice service.
  • the activation word detector 312n may run the received audio through the activation word detection algorithm for each supported voice service in parallel.
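A minimal sketch of running one detection algorithm per supported voice service in parallel might look like the following. The detector callables are stand-ins, since the actual vendor detection algorithms have their own APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def run_detectors_in_parallel(audio_frame, detectors):
    """Run every supported service's activation word detector on the same audio.

    'detectors' maps a service name to a callable that accepts raw audio and
    returns True if that service's activation word is present.
    """
    with ThreadPoolExecutor(max_workers=max(1, len(detectors))) as pool:
        futures = {name: pool.submit(detect, audio_frame)
                   for name, detect in detectors.items()}
        return {name: future.result() for name, future in futures.items()}

# Usage (detector functions are hypothetical stand-ins):
# results = run_detectors_in_parallel(frame, {"alexa": alexa_detect,
#                                             "siri": siri_detect})
```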
  • the speech/text conversion components 312o may facilitate processing by converting speech in the voice input to text.
  • the electronics 312 can include voice recognition software that is trained to a particular user or a particular set of users associated with a household. Such voice recognition software may implement voice-processing algorithms that are tuned to specific voice profile(s). Tuning to specific voice profiles may require less computationally intensive algorithms than traditional voice activity services, which typically sample from a broad base of users and diverse requests that are not targeted to media playback systems.
  • FIG. 3F is a schematic diagram of an example voice input 328 captured by the NMD 320 in accordance with aspects of the disclosure.
  • the voice input 328 can include an activation word portion 328a and a voice utterance portion 328b.
  • the activation word 328a can be a known activation word, such as “Alexa”, which is associated with AMAZON’S ALEXA. In other embodiments, however, the voice input 328 may not include an activation word.
  • a network microphone device may output an audible and/or visible response upon detection of the activation word portion 328a.
  • an NMD may output an audible and/or visible response after processing a voice input and/or a series of voice inputs.
  • the voice utterance portion 328b may include, for example, one or more spoken commands (identified individually as a first command 328c and a second command 328e) and one or more spoken keywords (identified individually as a first keyword 328d and a second keyword 328f).
  • the first command 328c can be a command to play music, such as a specific song, album, playlist, and so forth.
• the keywords may be one or more words identifying one or more zones in which the music is to be played, such as the living room and the dining room shown in Figure 1A.
  • the voice utterance portion 328b can include other information, such as detected pauses (for example, periods of non-speech) between words spoken by a user, as shown in Figure 3F.
• the pauses may demarcate the locations of separate commands, keywords, or other information spoken by the user within the voice utterance portion 328b.
  • the media playback system 100 is configured to temporarily reduce the volume of audio content that it is playing while detecting the activation word portion 328a.
  • the media playback system 100 may restore the volume after processing the voice input 328, as shown in Figure 3F.
  • Such a process can be referred to as ducking, examples of which are disclosed in U.S. Patent 10,499,146, which is incorporated by reference herein in its entirety.
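Ducking can be sketched as a lower-then-restore operation around voice capture. The PlaybackDevice class, its methods, and the volume floor of 15 below are illustrative assumptions, not part of any actual product API.

```python
# Minimal ducking sketch: lower volume while listening, then restore it.
class PlaybackDevice:
    def __init__(self, volume: int = 50):
        self.volume = volume

    def duck(self, floor: int = 15):
        """Reduce volume while capturing voice input; return the prior level."""
        previous = self.volume
        self.volume = min(self.volume, floor)
        return previous

device = PlaybackDevice(volume=40)
restore_to = device.duck()      # activation word detected: volume drops to 15
# ... capture and process the voice utterance portion ...
device.volume = restore_to      # voice input processed: volume restored to 40
```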
• FIGS 4A through 4D are schematic diagrams of a control device 430 (for example, the control device 130a of Figure 1H, a smartphone, a tablet, a dedicated control device, an IoT device, and/or another suitable device) showing corresponding user interface displays in various states of operation.
• a first user interface display 431a (Figure 4A) includes a display name 433a (that is, “Rooms”).
  • a selected group region 433b displays audio content information (for example, artist name, track name, album art) of audio content played back in the selected group and/or zone.
• Group regions 433c and 433d display the corresponding group and/or zone name, and audio content information for audio content played back or next in a playback queue of the respective group or zone.
  • An audio content region 433e includes information related to audio content in the selected group and/or zone (that is, the group and/or zone indicated in the selected group region 433b).
  • a lower display region 433f is configured to receive touch input to display one or more other user interface displays.
• the control device 430 can be configured to output a second user interface display 431b (Figure 4B) comprising a plurality of music services 433g (for example, Spotify, Radio by Tunein, Apple Music, Pandora, Amazon, TV, local music, line-in) through which the user can browse and from which the user can select media content for play back via one or more playback devices (for example, one of the playback devices 110 of Figure 1A).
• the control device 430 can be configured to output a third user interface display 431c (Figure 4C).
  • a first media content region 433h can include graphical representations (for example, album art) corresponding to individual albums, stations, or playlists.
• a second media content region 433i can include graphical representations (for example, album art) corresponding to individual songs, tracks, or other media content. If the user selects a graphical representation 433j (Figure 4C), the control device 430 can be configured to begin play back of audio content corresponding to the graphical representation 433j and output a fourth user interface display 431d that includes an enlarged version of the graphical representation 433j, media content information 433k (for example, track name, artist, album), transport controls 433m (for example, play, previous, next, pause, volume), and indication 433n of the currently selected group and/or zone name.
  • FIG. 5 is a schematic diagram of a control device 530 (for example, a laptop computer, a desktop computer).
  • the control device 530 includes transducers 534, a microphone 535, and a camera 536.
  • a user interface 531 includes a transport control region 533a, a playback status region 533c, a playback zone region 533b, a playback queue region 533d, and a media content source region 533e.
• the transport control region 533a comprises one or more controls for controlling media playback including, for example, volume, previous, play/pause, next, repeat, shuffle, track position, crossfade, equalization, and so forth.
  • the audio content source region 533e includes a listing of one or more media content sources from which a user can select media items for play back and/or adding to a playback queue.
• the playback zone region 533b can include representations of playback zones within the media playback system 100 (Figures 1A and 1B).
  • the graphical representations of playback zones may be selectable to bring up additional selectable icons to manage or configure the playback zones in the media playback system, such as a creation of bonded zones, creation of zone groups, separation of zone groups, renaming of zone groups, and so forth.
  • a “group” icon is provided within each of the graphical representations of playback zones.
  • the “group” icon provided within a graphical representation of a particular zone may be selectable to bring up options to select one or more other zones in the media playback system to be grouped with the particular zone.
  • playback devices in the zones that have been grouped with the particular zone can be configured to play audio content in synchrony with the playback device(s) in the particular zone.
  • a “group” icon may be provided within a graphical representation of a zone group.
  • the “group” icon may be selectable to bring up options to deselect one or more zones in the zone group to be removed from the zone group.
  • the control device 530 includes other interactions and implementations for grouping and ungrouping zones via the user interface 531.
  • the representations of playback zones in the playback zone region 533b can be dynamically updated as playback zone or zone group configurations are modified.
  • the playback status region 533c includes graphical representations of audio content that is presently being played, previously played, or scheduled to play next in the selected playback zone or zone group.
  • the selected playback zone or zone group may be visually distinguished on the user interface, such as within the playback zone region 533b and/or the playback queue region 533d.
• the graphical representations may include track title, artist name, album name, album year, track length, and other relevant information that may be useful for the user to know when controlling the media playback system 100 via the user interface 531.
  • the playback queue region 533d includes graphical representations of audio content in a playback queue associated with the selected playback zone or zone group.
  • each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group.
  • each audio item in the playback queue may comprise a uniform resource identifier (URI), a uniform resource locator (URL) or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source, possibly for playback by the playback device.
  • a playlist can be added to a playback queue, in which information corresponding to each audio item in the playlist may be added to the playback queue.
  • audio items in a playback queue may be saved as a playlist.
  • a playback queue may be empty, or populated but “not in use” when the playback zone or zone group is playing continuously streaming audio content, such as Internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations.
  • a playback queue can include Internet radio and/or other streaming audio content items and be “in use” when the playback zone or zone group is playing those items.
  • playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or a combination of audio items from both the first and second playback queues.
  • the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped.
  • the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty, or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped.
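The queue policies described above reduce to a small piece of selection logic. This sketch uses assumed argument names and returns the queue for a newly established zone group; an empty queue is an equally valid policy not shown here.

```python
def merged_queue(first_queue, second_queue, added_zone=None):
    """Return the playback queue for a newly established zone group.

    The group inherits the queue of the zone that was joined: if the second
    zone was added to the first, the first queue survives, and vice versa.
    With no direction given, this sketch combines both queues.
    """
    if added_zone == "second":      # second zone added to the first zone
        return list(first_queue)
    if added_zone == "first":       # first zone added to the second zone
        return list(second_queue)
    return list(first_queue) + list(second_queue)
```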
• Figure 6 is a message flow diagram illustrating data exchanges between devices of the media playback system 100 (Figures 1A through 1M).
  • the media playback system 100 receives an indication of selected media content (for example, one or more songs, albums, playlists, podcasts, videos, stations) via the control device 130a.
• the selected media content can comprise, for example, media items stored locally on one or more devices (for example, the audio source 105 of Figure 1C) connected to the media playback system and/or media items stored on one or more media service servers (one or more of the remote computing devices 106 of Figure 1B).
• the control device 130a transmits a message 651a to the playback device 110a (Figures 1A through 1C) to add the selected media content to a playback queue on the playback device 110a.
  • the playback device 110a receives the message 651a and adds the selected media content to the playback queue for play back.
  • the control device 130a receives input corresponding to a command to play back the selected media content.
• the control device 130a transmits a message 651b to the playback device 110a causing the playback device 110a to play back the selected media content.
• the playback device 110a transmits a message 651c to the computing device 106a requesting the selected media content.
• the computing device 106a, in response to receiving the message 651c, transmits a message 651d comprising data (for example, audio data, video data, a URL, a URI) corresponding to the requested media content.
• the playback device 110a receives the message 651d with the data corresponding to the requested media content and plays back the associated media content.
  • the playback device 110a optionally causes one or more other devices to play back the selected media content.
• the playback device 110a is one of a bonded zone of two or more players (Figure 1M).
  • the playback device 110a can receive the selected media content and transmit all or a portion of the media content to other devices in the bonded zone.
  • the playback device 110a is a coordinator of a group and is configured to transmit and receive timing information from one or more other devices in the group.
  • the other one or more devices in the group can receive the selected media content from the computing device 106a, and begin playback of the selected media content in response to a message from the playback device 110a such that all of the devices in the group play back the selected media content in synchrony.
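One common way to realize "begin playback in response to a message from the coordinator" is for the coordinator to share a future start time with every group member. The sketch below makes that idea concrete; the play_at() method and the lead time are assumptions, and a real implementation would also exchange the clock-offset (timing) information mentioned above, which is elided here.

```python
import time

def schedule_synchronous_start(group_members, lead_time_s: float = 0.5):
    """Coordinator sketch: tell each member to start at a shared future time."""
    start_at = time.time() + lead_time_s
    for member in group_members:
        member.play_at(start_at)   # play_at() is an assumed device method
```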
  • the BLUETOOTH LE Audio specification improves the way that time-bounded data can be transferred between audio devices.
  • the BLUETOOTH LE Audio specification supports bidirectional communication to a finite number of audio sink devices in accordance with the “Direct LE Audio” profile.
  • the BLUETOOTH LE Audio specification also supports unidirectional broadcast communication to a potentially unlimited number of audio sink devices in accordance with the “Broadcast Audio” profile.
  • the “Multi-Stream Audio” profile enables transmission of multiple, independent, synchronized audio streams between an audio source device and one or more audio sink devices.
  • a preferred audio communication profile for a given application will depend on an assessment of a user’s intended action, and perhaps also on the user’s operating environment.
  • the profile is optionally defined in a data structure that can be used to initiate communication using a pre-established standard with known options set to particular values.
  • the user’s intended action may be represented by, for example, a user-generated command.
  • Disclosed herein are techniques that allow an audio device to intelligently choose an audio communication profile that is well-suited for a particular situation.
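Such a profile data structure might be sketched as follows: a pre-established standard plus known options pinned to particular values. The field names and the max_sinks placeholder value are assumptions, not values taken from the BLUETOOTH LE Audio specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CommunicationProfile:
    standard: str             # e.g., "BLUETOOTH LE Audio" or "WI-FI"
    mode: str                 # e.g., "direct", "broadcast", "multi-stream"
    bidirectional: bool
    max_sinks: Optional[int]  # None models "potentially unlimited"

# Placeholder option values for illustration only.
DIRECT_LE_AUDIO = CommunicationProfile("BLUETOOTH LE Audio", "direct", True, 8)
BROADCAST_AUDIO = CommunicationProfile("BLUETOOTH LE Audio", "broadcast", False, None)
```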
  • FIG. 7 is a flowchart illustrating an example method 700 for selecting a communication technique for transmitting audio content between two playback devices.
  • Method 700 includes a number of phases and sub-processes, the sequence of which may vary from one implementation to another. In some cases different operations may be performed in an overlapping fashion, particularly where the different overlapping operations are performed by different components. However, when considered in the aggregate, these phases and sub-processes are capable of selecting an audio content communication technique for a given application.
  • Method 700 commences when a command is received to group a first playback device with a second playback device. See reference numeral 701 in Figure 7.
  • the command is received at a control device, such as one of the control devices 130a-130c illustrated in Figure 1A.
• the command is received at a playback device, such as one of the playback devices 110a-110r illustrated in Figures 1A through 1E and 1G.
  • the command may be received when a user performs a “press-and-hold” input on a user interface of the first or second playback device.
  • the command may be received at any computing device capable of processing such command as disclosed herein, and/or controlling one or more playback devices as a result of such processing.
  • the third row of decision matrix 900 represents a situation where the received command specifies that a finite number of playback devices are to be grouped in a synchrony group in which each of the grouped devices receives and plays back all channels of multichannel audio content.
  • audio content can be communicated to the playback devices using the BLUETOOTH Direct LE Audio profile.
  • Figure 10B is a schematic diagram that illustrates formation of a synchrony group of playback devices.
  • control device 130a receives a command 1021 specifying that the user wishes to form a synchrony group with two (or more) sets of headphones.
  • control device 130a (for example, the tablet) makes a selection 1022 to communicate audio content to the playback devices 110 (for example, the friends’ sets of headphones) using the BLUETOOTH Direct LE Audio profile.
  • maintaining synchronous playback amongst a relatively smaller and finite group of playback devices benefits from the more robust bidirectional communication path provided by the Direct LE Audio profile.
  • the Direct LE Audio profile may be considered preferred in this application because, for example, (a) the number of paired devices is relatively small, that is, within the capabilities of the Direct LE Audio profile; and (b) the received command reflects an application wherein the number of paired devices is unlikely to increase beyond a maximum supported by the Direct LE Audio profile.
  • Figure 10D is a schematic diagram that illustrates connecting a BLUETOOTH-only playback device 1044 to a synchrony group 1043 that receives audio content via WI-FI transmission.
  • control device 130a receives a command 1041 specifying that the user wishes to connect playback device 1044 (such as a set of BLUETOOTH headphones) to existing synchrony group 1043.
  • synchrony group 1043 includes at least one BLUETOOTH enabled WI-FI playback device 1043'.
  • control device 130a makes a selection 1042 to cause playback device 1044 to receive audio content from BLUETOOTH-enabled WI-FI playback device 1043' using the BLUETOOTH Direct LE Audio profile.
  • This can be implemented by sending command and control instructions from control device 130a to BLUETOOTH-only playback device 1044.
  • the user can cause his/her BLUETOOTH-only playback device 1044 to synchronously join such group without concern for the particular communication techniques that will be used to implement such command.
  • Such command can be generated, for example, with a “press-and-hold” input provided at playback device 1044.
  • receipt of a command provided via a playback device user interface causes the playback device itself to select an appropriate communication technique.
  • This allows the user to join an existing synchrony group without knowledge or concern for the particular communication technique used to accomplish such joinder. It also enables use of BLUETOOTH-only playback device 1044 in a way that it seamlessly connects to a WI-FI based system.
• where the received command specifies that a large, or potentially unknown, number of playback devices are to be grouped in a synchrony group, audio content can be communicated to the playback devices using the Broadcast Audio BLUETOOTH profile.
  • FIG. 10C is a schematic diagram that illustrates formation of a synchrony group of playback devices that receives audio content in accordance with the Broadcast Audio BLUETOOTH profile.
  • control device 130a receives a command 1031 specifying that the user wishes to create a large synchrony group with a quantity of playback devices 110 that is larger than that supported by the Direct LE Audio profile. This may occur, for example, where audio content is delivered to a playback device embedded in each seat in an aircraft. Or it may occur where a group of friends is having a party at the beach, and it is unknown how many attendees or associated playback devices will ultimately join the party. It may also occur where, for example, the bandwidth required for individual point-to-point data streams exceeds available bandwidth, even if the number of data streams is nominally supported by the Direct LE Audio profile.
• Upon receiving command 1031, control device 130a makes a selection 1032 to communicate audio content to the playback devices 110 using the Broadcast Audio BLUETOOTH profile.
  • the Broadcast Audio profile might also be used where the user initially creates a relatively small synchrony group but wishes to accommodate an unknown or indefinite number of additional playback devices to be subsequently added to the synchrony group.
  • the Broadcast Audio profile can be selected whenever the user creates a synchrony group, regardless of the number of devices to be included in the group. This eliminates any need for future switching of communication techniques and leverages the fact that the demand for tightly synchronized audio playback is typically lower in the context of a synchrony group than a bonded pair.
  • the received command may indicate a required minimum audio playback quality that must be provided at one or more of the playback devices. More specifically, the received command may indicate that the user wishes to form a group of playback devices — either a bonded group or a synchrony group — capable of providing audio playback with a specified audio quality that cannot be provided by the BLUETOOTH LE Audio specification. Alternatively, the received command may identify a playback device that does not support BLUETOOTH communication.
  • the selected communication technique may use WI-FI technology.
  • the selected communication technique does not necessarily use the BLUETOOTH communication protocol.
  • the BLUETOOTH communication protocol is one example of a PAN protocol
  • the WI-FI communication protocol is one example of a LAN protocol.
• if the received command indicates a user preference to reduce power consumption, this may suggest selecting BLUETOOTH communications instead of WI-FI communications for a given application. More generally, in some cases the received command may indicate a preference to operate in an energy conservation mode, such as for portable playback applications, and the resulting selection of a communication technique may be based at least in part on such preference.
  • the selected communication technique may depend on the type of audio content that is to be communicated. For example, when one of the playback devices is used as a satellite speaker in a home theater system, the data packets transmitted to the satellite are relatively smaller than the data packets transmitted in a music-only application. In particular, a home theater system employs smaller data packets to maintain lip synchrony with corresponding video playback. In some cases it may be preferred to use the Direct LE Audio profile for communication in a home theater application because of the tighter synchronization such profile provides as compared to the Broadcast Audio profile.
  • the selected communication technique may dictate whether audio content is transmitted from a sound bar using a front haul network (such as when communicating audio content to a bonded group of satellites) or a back haul network (such as when communicating audio content to a playback device that is not part of a bonded group).
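Pulling the preceding factors together, the selection step might be sketched as a single function. The thresholds, the ordering of checks, and the return labels are assumptions made for illustration; this does not reproduce decision matrix 900.

```python
def select_communication_technique(group_type, device_count,
                                   all_support_bluetooth=True,
                                   requires_high_quality=False,
                                   max_direct_sinks=8):
    """Map a grouping command's attributes to a communication technique."""
    if requires_high_quality or not all_support_bluetooth:
        return "WI-FI"                      # quality/capability rules out LE Audio
    if group_type == "bonded":
        return "BLUETOOTH Direct LE Audio"  # tight channel synchronization
    if device_count is None or device_count > max_direct_sinks:
        return "BLUETOOTH Broadcast Audio"  # large or unknown synchrony group
    return "BLUETOOTH Direct LE Audio"      # small, finite synchrony group

# Examples: a stereo pair selects Direct LE Audio; a beach party of unknown
# size (device_count=None) selects Broadcast Audio.
```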
  • the communication technique can be used to communicate audio content to the first and second playback devices. See reference numeral 703 in Figure 7.
  • playback optionally begins.
  • the entity that selects the preferred communication technique can act as a group coordinator that sources audio content, sends the audio content over an established connection via the selected communication technique, and initiates synchronous playback.
  • the entity that selects the preferred communication technique optionally delegates additional functionality to another entity, such as one of the first and/or second playback devices.
  • the communication technique may be selected based only on user identification of one or more playback devices and/or a group type (bonded or synchrony).
  • the user need not be familiar with the various benefits or drawbacks of different communication techniques for a given application, as the framework disclosed herein can select a preferred technique based on an evaluation of the user’s indicated intent (for example, an intent to establish a bonded group) and without further user involvement.
  • playback may be initiated without the user even knowing which communication technique is being used. This simplifies and streamlines the user experience, eliminating any need for familiarity with various communication techniques, while simultaneously providing the user with an audio playback experience that leverages the benefits of available technologies.
• the techniques disclosed herein can be implemented without regard to whether or not the devices which are to receive audio content are already connected to an existing network, such as a LAN. For example, a user may start audio playback using a BLUETOOTH LE Audio connection to a first playback device, then simply press-and-hold the play/pause button (or other similar user input) on another playback device to automatically have the second playback device join the first playback device for synchronous playback of the audio content.
  • Messages transmitted using the BLUETOOTH LE Audio profile can be used to negotiate when a playback device is to function as a soft access point (AP) (sometimes referred to as a “software AP”); create a network for other playback devices to join; and/or pass other control, volume, battery status, and configuration messages. These messages can also be used to establish communication using a selected protocol based on the techniques disclosed herein.
  • FIG. 8 is a flowchart illustrating another example method 800 for synchronous playback of audio content at two or more playback devices, wherein the audio content is communicated between at least two playback devices using a communication technique that is selected based on user input.
  • Method 800 includes a number of phases and sub-processes, the sequence of which may vary from one implementation to another. In some cases different operations may be performed in an overlapping fashion, particularly where the different overlapping operations are performed by different components. However, when considered in the aggregate, these phases and subprocesses are capable of selecting and modifying an audio content communication technique for a given application.
  • Method 800 commences when a first playback device 881 receives a command 820 to group first playback device 881 with a second playback device 882.
  • command 820 includes one or more device identifiers 821 and a group type identifier 822.
  • device identifiers 821 may identify one or more of first playback device 881 or second playback device 882.
  • Group type identifier 822 may specify, for example, a bonded group (in which first playback device 881 and second playback device 882 each receive and play back different channels of multichannel audio content) or a synchrony group (in which first playback device 881 and second playback device 882 each receive and play back all channels of multichannel audio content).
  • Command 820 may identify master bedroom speakers and specify a bonded pair (see, for example, command 1011 illustrated in Figure 10A), may identify two sets of headphones and specify a finite synchrony group (see, for example, command 1021 illustrated in Figure 10B), may identify several portable playback devices and specify a finite or infinite synchrony group (see, for example, command 1031 in Figure 10C), or may identify a new playback device that is to be added to an existing synchrony group (see, for example, command 1041 in Figure 10D).
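Command 820 might be represented as a small record carrying the device identifiers 821 and the group type identifier 822. The class and field names below are assumptions; such a record could feed a selection function like the sketch shown earlier.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GroupCommand:
    device_ids: List[str]             # device identifiers 821
    group_type: Optional[str] = None  # group type 822: "bonded" or "synchrony"

stereo_pair = GroupCommand(["bedroom_left", "bedroom_right"], group_type="bonded")
party_group = GroupCommand(["headphones_1", "headphones_2"], group_type="synchrony")
```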
  • first playback device 881 selects a technique for transmitting audio content 830 to second playback device 882. See reference numeral 802 in Figure 8. As described above in connection with reference numeral 702 in Figure 7, the selection can be based on, for example, (a) capabilities of one or more devices identified by device identifier 821 and/or (b) group type 822. In some cases the selection is made with reference to a decision matrix (such as decision matrix 900 illustrated in Figure 9) that is saved at first playback device 881 or another networked location.
  • Example communication techniques include, but are not limited to, communication using the BLUETOOTH Direct LE Audio profile, communication using the BLUETOOTH Broadcast Audio profile, communication using the BLUETOOTH A2DP, and communication using WI-FI.
• first playback device 881 uses the selected communication technique to send audio content 830 to second playback device 882. See reference numeral 803 in Figure 8. Second playback device 882 in turn receives audio content 830 from first playback device 881. See reference numeral 804 in Figure 8. At this point, first playback device 881 and second playback device 882 together provide synchronous playback of audio content 830 at the respective playback devices. See reference numeral 805 in Figure 8.
  • first playback device 881 receives a subsequent command 840 to extend playback to one or more additional playback devices 883. See reference numeral 806 in Figure 8.
  • the subsequent command 840 includes one or more device identifiers 841 associated with the corresponding one or more additional playback devices 883.
  • the subsequent command 840 optionally specifies a group type 842 associated with an extended playback group that includes the one or more additional playback devices 883.
  • the subsequent command 840 omits any group type designation, in which case the group type may be automatically selected based on, for example, a total quantity of playback devices in the extended playback group.
• After receiving subsequent command 840, first playback device 881 reassesses the previously selected communication technique. See reference numeral 807 in Figure 8. This reassessment may result in a newly selected communication technique that is different than the previously selected communication technique. Such reassessment may be performed in similar fashion to that described above with respect to the initial selection made after receiving command 820, although the subsequent reassessment may result in identification of a different preferred communication technique due to, for example, a different number of playback devices that are to be grouped, or different capabilities of the playback devices that are to be grouped.
  • the reassessment may result in a determination that communication pursuant to the Broadcast Audio profile is preferred. Or, if one or more of additional playback devices 883 do not support the previously selected communication technique, then the reassessment may result in a determination that an older or more widely supported communication technique, such as communication pursuant to the A2DP, is preferred.
  • first playback device 881 uses the newly selected communication technique to send audio content 850 to second playback device 882 and the one or more additional playback devices 883. See reference numeral 808 in Figure 8. Second playback device 882 in turn receives audio content 850 from first playback device 881. See reference numeral 809 in Figure 8. And the one or more additional playback devices 883 receive audio content 850 from first playback device 881. See reference numeral 810 in Figure 8. At this point, first playback device 881, second playback device 882, and the one or more additional playback devices 883 together provide synchronous playback of audio content 850 at the respective playback devices. See reference numeral 811 in Figure 8.
  • the capabilities of the playback devices that affect the communication technique selection include not only the capabilities of the receiving playback devices, but also the capabilities of the sending playback device and/or another playback device that is not presently involved in distribution of the audio content. For example, consider a user who is playing back audio content on a phone, but wishes to play back the audio content using a home theater system, either in addition to or instead of the phone. If the user inputs a press-and-hold command on a satellite playback device of the home theater system, a primary playback device of the home theater system can be configured to respond to such a command by receiving, decoding, and transmitting audio content to the satellite playback device, even though the primary playback device does not actually participate in the playback of any audio content.
  • the subsequently-received command 840 specifies that first playback device 881 is to be excluded from subsequent audio playback, such that first playback device 881 functions as an audio source, but does not actually provide playback of audio content 850.
  • an alternative implementation may involve audio content being communicated to first playback device 881 using the initially identified communication technique (for example, using a BLUETOOTH LE Audio profile), while audio content is concurrently communicated to the one or more additional playback devices 883 using the second communication technique (for example, using WI-FI or a different BLUETOOTH LE Audio profile).
  • control device 130a (such as a smartphone) sends commands to a primary playback device 1101 that subsequently communicates the audio content to one or more secondary playback devices which form an “off-LAN” synchrony group 1102.
  • Control device 130a can communicate with primary playback device 1101 using the A2DP or the Direct LE Audio profile, while primary playback device 1101 can communicate with the secondary playback devices in synchrony group 1102 using the Broadcast Audio profile.
  • control device 130a (such as a smartphone) sends commands to primary playback device 1101 which forms a bonded stereo pair with a secondary playback device 1103.
  • control device 130a can communicate with primary playback device 1101 using the A2DP or the Direct LE Audio profile, while primary playback device 1101 can communicate with secondary device 1103 using the Direct LE Audio profile.
  • primary playback device 1101 is configured to receive and play back a Left Channel of multichannel audio content
  • secondary playback device 1103 is configured to receive and play back a Right Channel of the multichannel audio content.
  • Other configurations can be used in other implementations depending on specific user requirements as reflected in a received command.
  • primary playback device 1101 acts as both an audio sink (it receives audio content from control device 130a) and an audio source (it transmits the audio content to the one or more secondary playback devices).
  • the playback device 1101 can be seen as an intermediary that effectively creates a bonded pair with secondary playback device 1103 via two distinct BLUETOOTH connections.
  • the selected communication technique for transmitting audio content to the one or more secondary playback devices 1103 optionally depends on the type of control device 130a and the stream quality that control device 130a is able to provide.
  • the selected communication technique for transmitting audio content to the one or more secondary playback devices 1103 additionally or alternatively may depend on remaining available bandwidth, given the bandwidth consumed by the link between control device 130a and primary playback device 1101. For example, if the A2DP is used to transmit audio content to primary playback device 1101, this will consume relatively more power and bandwidth than a BLUETOOTH LE connection. This, in turn, may affect what communication technique is selected for communications with the one or more secondary playback devices 1103. In particular, if the A2DP is used to transmit audio content to primary playback device 1101, then it may be desired to select a communication technique associated with secondary playback device 1103 that consumes a reduced amount of bandwidth and/or power, such as a BLUETOOTH LE Audio communication profile.
  • the techniques disclosed herein allow a playback device to intelligently choose an audio communication technique that is well-suited for a particular implementation. This is accomplished by assessing a user’s intended action, choosing a communication technique capable of handling the user request, and establishing a connection in accordance with the chosen technique.
  • audio content may be communicated using the BLUETOOTH Direct LE Audio profile, the BLUETOOTH Broadcast Audio profile, the BLUETOOTH A2DP, or WI-FI.
  • User knowledge of the particular audio content communication technique is not required, and in some implementations the selected communication technique may be modified without notifying the user.
  • choosing a communication profile based on an evaluation of the user’s intended action, and/or based on a prediction or assessment of how the connected devices will be used allows the user to reap the benefits of the most appropriate profile for a given application, thus enhancing user experience.
  • references herein to “embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one example embodiment of an invention.
  • the appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
  • the embodiments described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other embodiments.
  • At least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
  • a first playback device comprises one or more processors.
  • the first playback device further comprises one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network.
  • the first playback device further comprises at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors such that the first playback device is configured to receive a command specifying that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using at least one communication protocol.
  • the program instructions are executable by the one or more processors such that the first playback device is further configured to, based on a group type for the group, and further based on the at least one communication protocol, select a technique for communicating with the second playback device.
  • the program instructions are executable by the one or more processors such that the first playback device is further configured to transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected technique.
  • the program instructions are executable by the one or more processors such that the first playback device is further configured to play back the audio content in synchrony with playback of the audio content by the second playback device.
  • (Feature 2) The first playback device of Feature 1, wherein the command identifies the second playback device and specifies the group type for the group.
  • (Feature 3) The first playback device of Feature 1 or 2, wherein (a) the group type is a synchronous playback group in which the first and second playback devices each play back all channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) the selected technique for communicating with the second playback device comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
  • (Feature 4) The first playback device of Feature 1 or 2, wherein (a) the group type is a bonded playback group in which the first and second playback devices each play back different channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) the selected technique for communicating with the second playback device comprises transmitting packets of the audio content that are addressed to the second playback device.
  • (Feature 6) The first playback device of Feature 5, wherein the selected technique is a one-to-one communication technique when the quantity is below a threshold quantity associated with the at least one communication protocol.
  • (Feature 7) The first playback device of Feature 5, wherein the selected technique is a broadcast communication technique when the quantity is above a threshold quantity associated with the at least one communication protocol.
  • (Feature 8) The first playback device of any preceding Feature, wherein (a) the command further specifies a proximity between the first and second playback devices; and (b) the technique for communicating is selected further based on the proximity.
  • (Feature 12) The first playback device of any preceding Feature, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that (a) associates a first combination of a first communication protocol and a first characterizing feature of the second playback device with a first technique for communicating with the second playback device; and (b) associates a second combination of a second communication protocol and a second characterizing feature of the second playback device with a second technique for communicating with the second playback device.
  • (Feature 13) The first playback device of any one of Features 1 to 10, wherein (a) the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected technique for communicating with the second playback device; and (b) selecting the technique for communicating with the second playback device comprises looking up the characterizing feature and the group type in the data structure.
  • (Feature 14) The first playback device of Feature 1 or 2, wherein the group type is selected from (a) a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or (b) a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
  • a first playback device comprises one or more processors.
  • the first playback device further comprises one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network.
  • the first playback device further comprises at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors such that the first playback device is configured to receive a command specifying that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using a BLUETOOTH communication protocol.
  • the program instructions are executable by the one or more processors such that the first playback device is further configured to, based on a group type for the group, select a profile for communicating with the second playback device using the BLUETOOTH communication protocol.
  • the program instructions are executable by the one or more processors such that the first playback device is further configured to transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected profile of the BLUETOOTH communication protocol.
  • the program instructions are executable by the one or more processors such that the first playback device is further configured to play back the audio content in synchrony with playback of the audio content by the second playback device.
  • (Feature 17) The first playback device of Feature 15 or 16, wherein (a) the group type is a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content; and (b) the selected profile for communicating with the second playback device comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
  • (Feature 18) The first playback device of Feature 15 or 16, wherein (a) the group type is a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; and (b) the selected profile for communicating with the second playback device comprises transmitting packets of the audio content that are addressed to the second playback device.
  • (Feature 20) The first playback device of Feature 19, wherein the selected profile provides a one-to-one communication technique when the quantity is below a threshold quantity associated with the BLUETOOTH communication protocol.
  • (Feature 25) The first playback device of any one of Features 15 to 24, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that (a) associates a first combination of a first communication protocol and a first characterizing feature of the second playback device with a first profile for communicating with the second playback device; and (b) associates a second combination of a second communication protocol and a second characterizing feature of the second playback device with a second profile for communicating with the second playback device.
  • (Feature 26) The first playback device of any one of Features 15 to 23, wherein (a) the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected profile for communicating with the second playback device; and (b) selecting the profile for communicating with the second playback device comprises looking up the characterizing feature and the group type in the data structure.
  • (Feature 27) The first playback device of Feature 15 or 16, wherein the group type is selected from (a) a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or (b) a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
  • a method comprises receiving, at a first playback device, a command that specifies that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using at least one communication protocol.
  • the method further comprises, based on a group type for the group, and further based on the at least one communication protocol, selecting a technique for communicating with the second playback device.
  • the method further comprises transmitting audio content to the second playback device using the selected technique.
  • the method further comprises playing back the audio content in synchrony with playback of the audio content by the second playback device.
  • (Feature 34) The method of Feature 32 or 33, wherein (a) the group type is a synchronous playback group in which the first and second playback devices each play back all channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) transmitting the audio content to the second playback device using the selected technique comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
  • (Feature 35) The method of Feature 32 or 33, wherein (a) the group type is a bonded playback group in which the first and second playback devices each play back different channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) transmitting the audio content to the second playback device using the selected technique comprises transmitting packets of the audio content that are addressed to the second playback device.
  • (Feature 37) The method of Feature 36, wherein the selected technique is a one-to-one communication technique when the quantity is below a threshold quantity associated with the at least one communication protocol.
  • (Feature 38) The method of Feature 36, wherein the selected technique is a broadcast communication technique when the quantity is above a threshold quantity associated with the at least one communication protocol.
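To make the decision-matrix selection referenced above concrete, the following Python sketch shows one way such a lookup could be organized. It is illustrative only: the table entries, capability names, and threshold logic are assumptions made for explanation, not values taken from the disclosure (compare decision matrix 900 in Figure 9).

```python
# Hypothetical sketch of a decision-matrix lookup for selecting an audio
# communication technique, loosely in the spirit of decision matrix 900.
# Table contents and capability names are illustrative assumptions.

DECISION_MATRIX = {
    # (group type, all members support LE Audio): selected technique
    ("bonded",    True):  "DIRECT_LE_AUDIO",  # addressed, bidirectional streams
    ("bonded",    False): "A2DP",             # widely supported fallback
    ("synchrony", True):  "BROADCAST_AUDIO",  # unaddressed one-to-many streams
    ("synchrony", False): "WIFI",             # e.g. LAN-based synchrony
}

def select_technique(group_type: str, member_capabilities: list) -> str:
    """Pick a technique supported by every device in the prospective group.

    Raises KeyError for group types outside this toy matrix.
    """
    everyone_supports_le = all("LE_AUDIO" in caps for caps in member_capabilities)
    return DECISION_MATRIX[(group_type, everyone_supports_le)]

# Example: a bonded stereo pair in which both devices support LE Audio.
assert select_technique("bonded", [{"LE_AUDIO", "A2DP"}, {"LE_AUDIO"}]) == "DIRECT_LE_AUDIO"
```

A fuller matrix could also key on, for example, group size, proximity, or the bandwidth already consumed by an upstream A2DP link, as the features above contemplate.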

Abstract

A method comprises receiving, at a first playback device, a command that specifies that the first playback device will form part of a group that comprises a second playback device. The second playback device is capable of communicating with the first playback device using at least one communication protocol. The command identifies the second playback device and specifies a group type for the group. The method further comprises, based on the group type and the at least one communication protocol, selecting a technique for communicating with the second playback device. The method further comprises transmitting audio content to the second playback device using the selected technique. The method further comprises playing back the audio content in synchrony with playback of the audio content by the second playback device.

Description

WIRELESS COMMUNICATION PROFILE MANAGEMENT
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application 63/583,930 (filed 20 September 2023), the entire disclosure of which is hereby incorporated by reference herein.
FIELD OF THE DISCLOSURE
[0002] The present disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.
BACKGROUND
[0003] Options for accessing and listening to digital audio in an out-loud setting were limited until 2002, when Sonos, Inc. began development of a new type of playback system. Sonos then filed one of its first patent applications in 2003, entitled “Method for Synchronizing Audio Playback between Multiple Networked Devices”, and began offering its first media playback systems for sale in 2005. The SONOS Wireless Home Sound System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a controller (for example, smartphone, tablet, computer, voice input device), one can play what she wants in any room having a networked playback device. Media content (for example, songs, podcasts, video sound) can be streamed to playback devices such that each room with a playback device can play back corresponding different media content. In addition, rooms can be grouped together for synchronous playback of the same media content, and/or the same media content can be heard in all rooms synchronously.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings, as listed below. A person skilled in the relevant art will understand that the features shown in the drawings are for purposes of illustration, and variations, including different and/or additional features and arrangements thereof, are possible.
[0005] Figure 1A is a partial cutaway view of an environment having a media playback system configured in accordance with aspects of the disclosed technology.
[0006] Figure 1B is a schematic diagram of the media playback system of Figure 1A and one or more networks.
[0007] Figure 1C is a block diagram of a playback device.
[0008] Figure 1D is a block diagram of a playback device.
[0009] Figure 1E is a block diagram of a bonded playback device.
[0010] Figure 1F is a block diagram of a network microphone device.
[0011] Figure 1G is a block diagram of a playback device.
[0012] Figure 1H is a partial schematic diagram of a control device.
[0013] Figures 1I through 1L are schematic diagrams of corresponding media playback system zones.
[0014] Figure 1M is a schematic diagram of media playback system areas.
[0015] Figure 2A is a front isometric view of a playback device configured in accordance with aspects of the disclosed technology.
[0016] Figure 2B is a front isometric view of the playback device of Figure 2A without a grille.
[0017] Figure 2C is an exploded view of the playback device of Figure 2A.
[0018] Figure 3A is a front view of a network microphone device configured in accordance with aspects of the disclosed technology.
[0019] Figure 3B is a side isometric view of the network microphone device of Figure 3A.
[0020] Figure 3C is an exploded view of the network microphone device of Figures 3A and 3B.
[0021] Figure 3D is an enlarged view of a portion of Figure 3B.
[0022] Figure 3E is a block diagram of the network microphone device of Figures 3A through 3D.
[0023] Figure 3F is a schematic diagram of an example voice input.
[0024] Figures 4A through 4D are schematic diagrams of a control device in various stages of operation in accordance with aspects of the disclosed technology.
[0025] Figure 5 is a front view of a control device.
[0026] Figure 6 is a message flow diagram of a media playback system.
[0027] Figure 7 is a flowchart illustrating an example method for selecting a communication technique for transmitting audio content between two playback devices.
[0028] Figure 8 is a flowchart illustrating an example method for synchronous playback of audio content at two or more playback devices, wherein the audio content is communicated between at least two playback devices using a communication technique that is selected based on user input.
[0029] Figure 9 illustrates an example decision matrix that can be used to select an audio content communication technique.
[0030] Figure 10A is a schematic diagram that illustrates formation of a bonded pair of playback devices.
[0031] Figure 10B is a schematic diagram that illustrates formation of a synchrony pair of playback devices.
[0032] Figure 10C is a schematic diagram that illustrates formation of a synchrony group of playback devices that receives audio content in accordance with the Broadcast Audio BLUETOOTH profile.
[0033] Figure 10D is a schematic diagram that illustrates connecting a BLUETOOTH-only device to a synchrony group of playback devices that receive audio content via WI-FI transmission.
[0034] Figure 11A is a schematic diagram that illustrates a control device sending commands to a primary playback device that subsequently communicates the audio content to one or more secondary playback devices which form an “off-LAN” synchrony group.
[0035] Figure 11B is a schematic diagram that illustrates a control device sending commands to a primary playback device which forms a bonded stereo pair with a secondary playback device.
[0036] The drawings are for the purpose of illustrating example embodiments, but those of ordinary skill in the art will understand that the technology disclosed herein is not limited to the arrangements and/or instrumentality shown in the drawings.
DETAILED DESCRIPTION
I. Overview
[0037] Sonos has a long history of creating innovative wireless audio products that provide an intuitive, convenient, and straightforward user experience. For example, critics and end users alike have praised Sonos for developing wireless multizone speaker systems that allow users to easily extend audio playback across multiple wireless playback devices. These audio systems can dynamically adapt to the requirements of a given implementation, thereby providing a consistent user experience notwithstanding changing conditions. For instance, such systems can be deployed without regard to the network resources which may — or may not — be available in a given operating environment. Thus, an audio source may seamlessly transition from communicating with a playback device via a first communication technique (for example, via a local area network WI-FI connection) to communicating with the playback device via a second communication technique (for example, via a BLUETOOTH audio connection) when the first communication technique becomes unstable or is otherwise no longer reliably available. These audio systems are also adaptable to changing user demands, thus enabling, for example, a user to start audio playback on a first playback device and later cause a second playback device to join in synchronous playback simply by pressing and holding a play/pause button on the second playback device.
[0038] As part of this ongoing innovation, Sonos has identified shortcomings of existing wireless audio communication protocols and has further identified functionality provided by developments in wireless networking technology that can be leveraged to address these shortcomings to further enhance the user experience. For example, while the widely adopted BLUETOOTH Advanced Audio Distribution Profile (A2DP), also referred to as “BLUETOOTH Classic”, has been successful in delivering a quality audio listening experience, this profile suffers from certain shortcomings, such as being limited to one or more point-to-point arrangements between audio source devices and audio sink devices. In particular, A2DP is unable to ensure that multiple audio sink devices render their audio streams at exactly the same time such that playback is synchronized across the multiple devices. The more recently developed BLUETOOTH Low Energy (LE) Audio specification provides features that can be used to address certain shortcomings of A2DP. One such BLUETOOTH LE Audio feature is “BLUETOOTH LE Isochronous Channels”, which provides an improved way of transferring time-bounded data between devices. BLUETOOTH LE Isochronous Channels has, in turn, enabled bidirectional point-to-point audio transmission to a limited number of devices (referred to herein as communication via the “Direct LE Audio” BLUETOOTH profile) and unidirectional audio transmission to a larger number of recipients (referred to herein as communication via the “Broadcast Audio” BLUETOOTH profile). Yet another feature provided by the BLUETOOTH LE Audio specification is “Multi-Stream Audio”, which enables transmission of multiple, independent, synchronized audio streams between an audio source device and one or more audio sink devices. Multi-Stream Audio can be used to, for example, send independent audio streams to truly wireless left/right earbuds.
[0039] As used herein, a “profile” can be understood as defining the rules for how to use a wireless communication technology, such as BLUETOOTH, for a particular application. A particular communication profile may dictate data packet content, such as where packets transmitted according to a point-to-point profile may include address information, while packets transmitted according to a broadcast profile may not include unique address information for particular recipients. Sonos has leveraged the Direct LE Audio and Broadcast Audio profiles to improve user experience. These profiles are briefly described in turn.
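As a concrete picture of the addressing distinction just described, the short Python sketch below models a packet that either carries a recipient address (point-to-point profile) or omits it (broadcast profile). The types and field names are assumptions made for illustration; actual BLUETOOTH packet formats are defined by the relevant specifications, not by this sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioPacket:
    # Simplified illustration; not an actual BLUETOOTH packet layout.
    payload: bytes
    recipient_address: Optional[str] = None  # set for point-to-point, None for broadcast

def is_broadcast(packet: AudioPacket) -> bool:
    """A broadcast-profile packet carries no unique recipient address."""
    return packet.recipient_address is None

directed = AudioPacket(payload=b"\x00\x01", recipient_address="F4:5C:89:AB:CD:EF")
broadcast = AudioPacket(payload=b"\x00\x01")
assert not is_broadcast(directed) and is_broadcast(broadcast)
```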
[0040] The Direct LE Audio BLUETOOTH profile defines a communication technique that provides the ability to transmit multiple, independent, bidirectional, synchronized audio streams between a single central device (for example, an audio source such as a smartphone) and a finite number of peripheral devices (for example, one or more audio sinks such as one or more wireless headsets). The Direct LE Audio profile is therefore sometimes referred to as a one-to-one communication technique. To support communication using the Direct LE Audio profile, a connected isochronous group (CIG) is created by the audio source, which can include multiple connected isochronous streams (CISs). Each CIS is a point-to-point data stream between the central device and the peripheral device that provides bidirectional communication with acknowledgement. As a point-to-point data stream, data packets transmitted in accordance with the Direct LE Audio profile will be specifically addressed to a particular recipient device. Bidirectional communication enables a playback device to send audio control information to a control device or audio source device. It also provides robust user interface control at both the audio source device and the audio sink device. Bidirectional communication also provides an improved user experience for assisted listening devices, headsets, and hands-free telephony devices that include microphone input and control features. This improves the performance of truly wireless earbuds, thereby providing a better stereo imaging experience, more seamless voice control services, and smoother switching between audio sources.
[0041] The Broadcast Audio BLUETOOTH profile, marketed under the trademark AURACAST, defines a communication technique that enables an audio source device to broadcast an audio stream to an unlimited number of BLUETOOTH audio sink devices. Each of these audio streams is referred to as a Broadcast Isochronous Stream (BIS); multiple streams can be grouped into a Broadcast Isochronous Group (BIG). As a broadcast data stream, data packets transmitted in accordance with the Broadcast Audio profile are not individually addressed to any particular recipient device. These audio broadcasts can be open (in which case any in-range audio sink device may participate) or closed (in which case only audio sink devices with the correct passkey can participate). This allows, for example, an individual to share his/her audio stream, such as from a phone or tablet, to nearby users’ headphones. Any authorized device within range of the broadcaster can receive and render the broadcasted audio stream. On a larger scale, location-based sharing allows a large public venue to broadcast multiple audio streams, thus allowing any number of listeners to configure their headphones to receive, for example, public address announcements in a particular language.
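The stream groupings described in the two preceding paragraphs can be pictured with a small data model: a CIG of addressed, bidirectional CISs for the Direct LE Audio profile, and a BIG of unaddressed BISs, optionally gated by a passkey, for the Broadcast Audio profile. The Python sketch below is a mental model only; the class and field names are assumptions, not structures defined by the specifications.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConnectedIsochronousStream:       # CIS: point-to-point and bidirectional
    sink_address: str                   # each CIS is addressed to one sink
    acknowledged: bool = True           # CIS traffic includes acknowledgement

@dataclass
class ConnectedIsochronousGroup:        # CIG: created by the audio source
    streams: List[ConnectedIsochronousStream] = field(default_factory=list)

@dataclass
class BroadcastIsochronousStream:       # BIS: unaddressed, one-to-many
    description: str

@dataclass
class BroadcastIsochronousGroup:        # BIG: groups multiple BISs
    streams: List[BroadcastIsochronousStream] = field(default_factory=list)
    passkey: Optional[str] = None       # None models an open broadcast

def may_join(big: BroadcastIsochronousGroup, offered: Optional[str] = None) -> bool:
    """Any in-range sink may join an open broadcast; a closed one needs the passkey."""
    return big.passkey is None or big.passkey == offered

# Example: independent left/right CISs to a pair of truly wireless earbuds.
cig = ConnectedIsochronousGroup(streams=[
    ConnectedIsochronousStream("left-earbud"),
    ConnectedIsochronousStream("right-earbud"),
])
```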
[0042] While the Direct LE Audio and Broadcast Audio profiles provide enhanced functionality vis-a-vis A2DP, one or the other might be more appropriate for a given application. And in some cases it may still be desirable to instead communicate using Multi-Stream Audio, an A2DP audio stream, or even a completely different communication protocol such as WI-FI. For example, the A2DP has broad compatibility across devices but is limited in the number of audio sink devices and has relatively higher power consumption. The Direct LE Audio profile supports bidirectional communication with multiple audio sink devices, although the number of such devices is finite. The Broadcast Audio profile supports an unlimited number of audio sink devices but does not provide a rich bidirectional communication path. In addition, a Broadcast Audio data stream will typically transmit at a relatively high (or maximum) allotted power to reach as many potential recipients as possible. Broadcast streams will therefore have a larger range — but higher power consumption — as compared to data streams that are compliant with the Direct LE Audio profile. Given these different benefits and tradeoffs, the inventors have developed techniques for managing and selecting amongst different communication techniques in the context of a wireless audio playback application implemented in a dynamically changing operating environment. Regardless of the particular selected communication technique, encryption mechanisms are optionally implemented to ensure that audio streams are not playable at unauthorized playback devices.
[0043] Disclosed herein are techniques that allow a playback device to intelligently choose an audio communication technique that is well-suited for a particular implementation. This may involve assessing a user’s intended action, choosing a communication technique capable of handling the user request, and establishing a connection in accordance with the chosen technique. For example, in a situation where a user wishes to create a bonded stereo pair in which two playback devices comprise left and right audio channels, a connection using the Direct LE Audio profile might be used, thus providing the benefit of a richer bidirectional communication path where there are only a limited number of audio sink devices. On the other hand, where the user wishes to establish a synchrony group, a decision between communicating using the Direct LE Audio profile and the Broadcast Audio profile might be made based on either the group size or a prediction of future group size. In some cases the profile selection may take into consideration profiles supported by one or more of the connected devices. More generally, choosing a communication profile based on an evaluation of the user’s intended action, and/or based on a prediction or assessment of how the connected devices will be used, allows the user to reap the benefits of the most appropriate profile for a given application, thus further enhancing user experience.
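Reading the two preceding paragraphs as an algorithm, one plausible (and deliberately simplified) selection rule is sketched below. The size threshold, profile names, and fallback order are assumptions made for illustration, not parameters disclosed by the specification.

```python
def choose_profile(group_type: str,
                   group_size: int,
                   predicted_size: int = 0,
                   supported: tuple = ("DIRECT_LE_AUDIO", "BROADCAST_AUDIO", "A2DP")) -> str:
    """Hypothetical profile selection based on intended action and group size."""
    size = max(group_size, predicted_size)
    if group_type == "bonded_pair" and "DIRECT_LE_AUDIO" in supported:
        return "DIRECT_LE_AUDIO"      # rich bidirectional path for few sinks
    if group_type == "synchrony":
        # Illustrative threshold: small groups favor Direct LE Audio, while
        # large (or expected-to-grow) groups favor Broadcast Audio.
        if size <= 4 and "DIRECT_LE_AUDIO" in supported:
            return "DIRECT_LE_AUDIO"
        if "BROADCAST_AUDIO" in supported:
            return "BROADCAST_AUDIO"
    return "A2DP"                     # broadly compatible fallback
```

Constraining the choice to the `supported` tuple reflects the note above that profile selection may take into consideration the profiles supported by the connected devices.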
[0044] In some embodiments, for example, a first playback device comprises one or more processors. The first playback device further comprises one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network. The first playback device further comprises at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors. The first playback device is configured to receive a command specifying that the first playback device will form part of a group that comprises a second playback device. The second playback device is capable of communicating with the first playback device using at least one communication protocol. The command identifies the second playback device and specifies a group type for the group. The first playback device is further configured to, based on the group type and the at least one communication protocol, select a technique for communicating with the second playback device. The first playback device is further configured to transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected technique. The first playback device is further configured to play back the audio content in synchrony with playback of the audio content by the second playback device.
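The paragraph above describes a pipeline: receive a grouping command, select a technique from the group type and the available protocols, transmit the audio, and play it back in synchrony. The sketch below paraphrases that flow; the method names, the selector callable, and the interface registry are assumptions for explanation, not the claimed implementation.

```python
class FirstPlaybackDevice:
    """Illustrative flow only; internals are assumed for explanation."""

    def __init__(self, selector, interfaces):
        self.selector = selector      # e.g. a decision-matrix lookup function
        self.interfaces = interfaces  # communication interfaces keyed by technique

    def handle_group_command(self, command, audio_content):
        # The command identifies the second playback device and a group type.
        technique = self.selector(command.group_type, command.protocols)
        # Transmit the audio content using the selected technique.
        self.interfaces[technique].send(command.second_device, audio_content)
        # Play back locally in synchrony with the second device's playback.
        self.play_in_sync(audio_content)

    def play_in_sync(self, audio_content):
        ...  # clock/timing synchronization is outside the scope of this sketch
```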
[0045] While some examples described herein may refer to functions performed by given actors such as “users”, “listeners”, and/or other entities, it should be understood that such references are for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves.
[0046] In the Figures, identical reference numbers identify generally similar, and/or identical, elements. To facilitate the discussion of any particular element, the most significant digit or digits of a reference number refers to the Figure in which that element is first introduced. For example, element 110a is first introduced and discussed with reference to Figure 1A. Many of the details, dimensions, angles, and other features shown in the Figures are merely illustrative of particular embodiments of the disclosed technology. Accordingly, other embodiments can have other details, dimensions, angles, and features without departing from the spirit or scope of the disclosure. In addition, those of ordinary skill in the art will appreciate that further embodiments of the various disclosed technologies can be practiced without several of the details described below.
II. Suitable Operating Environment
[0047] Figure 1A is a partial cutaway view of a media playback system 100 distributed in an environment 101 (for example, a house). The media playback system 100 comprises one or more playback devices 110 (identified individually as playback devices 110a-n), one or more network microphone devices 120 (“NMDs”) (identified individually as NMDs 120a-c), and one or more control devices 130 (identified individually as control devices 130a and 130b).
[0048] As used herein the term “playback device” can generally refer to a network device configured to receive, process, and output data of a media playback system. For example, a playback device can be a network device that receives and processes audio content. In some embodiments, a playback device includes one or more transducers or speakers powered by one or more amplifiers. In other embodiments, however, a playback device includes one of (or neither of) the speaker and the amplifier. For instance, a playback device can comprise one or more amplifiers configured to drive one or more speakers external to the playback device via a corresponding wire or cable.
[0049] Moreover, as used herein the term “NMD” (that is, a “network microphone device”) can generally refer to a network device that is configured for audio detection. In some embodiments, an NMD is a stand-alone device configured primarily for audio detection. In other embodiments, an NMD is incorporated into a playback device (or vice versa).
[0050] The term “control device” can generally refer to a network device configured to perform functions relevant to facilitating user access, control, and/or configuration of the media playback system 100.
[0051] Each of the playback devices 110 is configured to receive audio signals or data from one or more media sources (for example, one or more remote servers, one or more local devices, and so forth) and play back the received audio signals or data as sound. The one or more NMDs 120 are configured to receive spoken word commands, and the one or more control devices 130 are configured to receive user input. In response to the received spoken word commands and/or user input, the media playback system 100 can play back audio via one or more of the playback devices 110. In certain embodiments, the playback devices 110 are configured to commence playback of media content in response to a trigger. For instance, one or more of the playback devices 110 can be configured to play back a morning playlist upon detection of an associated trigger condition (for example, presence of a user in a kitchen, detection of a coffee machine operation, and so forth). In some embodiments, for example, the media playback system 100 is configured to play back audio from a first playback device (for example, the playback device 110a) in synchrony with a second playback device (for example, the playback device 110b). Interactions between the playback devices 110, NMDs 120, and/or control devices 130 of the media playback system 100 configured in accordance with the various embodiments of the disclosure are described in greater detail below with respect to Figures 1B through 6.
[0052] In the illustrated embodiment of Figure 1A, the environment 101 comprises a household having several rooms, spaces, and/or playback zones, including (clockwise from upper left) a master bathroom 101a, a master bedroom 101b, a second bedroom 101c, a family room or den 101d, an office 101e, a living room 101f, a dining room 101g, a kitchen 101h, and an outdoor patio 101i. While certain embodiments and examples are described below in the context of a home environment, the technologies described herein may be implemented in other types of environments. In some embodiments, for example, the media playback system 100 can be implemented in one or more commercial settings (for example, a restaurant, mall, airport, hotel, a retail or other store), one or more vehicles (for example, a sports utility vehicle, bus, car, a ship, a boat, an airplane, and so forth), multiple environments (for example, a combination of home and vehicle environments), and/or another suitable environment where multi-zone audio may be desirable.
[0053] The media playback system 100 can comprise one or more playback zones, some of which may correspond to the rooms in the environment 101. The media playback system 100 can be established with one or more playback zones, after which additional zones may be added, or removed, to form, for example, the configuration shown in Figure 1A. Each zone may be given a name according to a different room or space such as the office 101e, master bathroom 101a, master bedroom 101b, the second bedroom 101c, kitchen 101h, dining room 101g, living room 101f, and/or the balcony 101i. In some aspects, a single playback zone may include multiple rooms or spaces. In certain aspects, a single room or space may include multiple playback zones.
[0054] In the illustrated embodiment of Figure 1A, the second bedroom 101c, the office 101e, the living room 101f, the dining room 101g, the kitchen 101h, and the outdoor patio 101i each include one playback device 110, and the master bathroom 101a, master bedroom 101b, and the den 101d each include a plurality of playback devices 110. In the master bedroom 101b, the playback devices 110l and 110m may be configured, for example, to play back audio content in synchrony as individual ones of playback devices 110, as a bonded playback zone, as a consolidated playback device, and/or any combination thereof. Similarly, in the den 101d, the playback devices 110h-k can be configured, for instance, to play back audio content in synchrony as individual ones of playback devices 110, as one or more bonded playback devices, and/or as one or more consolidated playback devices. Additional details regarding bonded and consolidated playback devices are described below with respect to Figures 1B, 1E, and 1I through 1M.
[0055] In some aspects, one or more of the playback zones in the environment 101 may each be playing different audio content. For instance, a user may be grilling on the patio 101i and listening to hip hop music being played by the playback device 110c while another user is preparing food in the kitchen 101h and listening to classical music played by the playback device 110b. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office 101e listening to the playback device 110f playing back the same hip hop music being played back by playback device 110c on the patio 101i. In some aspects, the playback devices 110c and 110f play back the hip hop music in synchrony such that the user perceives that the audio content is being played seamlessly (or at least substantially seamlessly) while moving between different playback zones. Additional details regarding audio playback synchronization among playback devices and/or zones can be found, for example, in U.S. Patent 8,234,395 entitled “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices”, which is incorporated herein by reference in its entirety.
a. Suitable Media Playback System
[0056] Figure 1B is a schematic diagram of the media playback system 100 and a cloud network 102. For ease of illustration, certain devices of the media playback system 100 and the cloud network 102 are omitted from Figure 1B. One or more communication links 103 (referred to hereinafter as “the links 103”) communicatively couple the media playback system 100 and the cloud network 102.
[0057] The links 103 can comprise, for example, one or more wired networks, one or more wireless networks, one or more wide area networks (WAN), one or more local area networks (LAN), one or more personal area networks (PAN), one or more telecommunication networks (for example, one or more Global System for Mobiles (GSM) networks, Code Division Multiple Access (CDMA) networks, Long-Term Evolution (LTE) networks, 5G communication networks, and/or other suitable data transmission protocol networks), and so forth. The cloud network 102 is configured to deliver media content (for example, audio content, video content, photographs, social media content, and so forth) to the media playback system 100 in response to a request transmitted from the media playback system 100 via the links 103. In some embodiments, the cloud network 102 is further configured to receive data (for example, voice input data) from the media playback system 100 and correspondingly transmit commands and/or media content to the media playback system 100.
[0058] The cloud network 102 comprises computing devices 106 (identified separately as a first computing device 106a, a second computing device 106b, and a third computing device 106c). The computing devices 106 can comprise individual computers or servers, such as, for example, a media streaming service server storing audio and/or other media content, a voice service server, a social media server, a media playback system control server, and so forth. In some embodiments, one or more of the computing devices 106 comprise modules of a single computer or server. In certain embodiments, one or more of the computing devices 106 comprise one or more modules, computers, and/or servers. Moreover, while the cloud network 102 is described above in the context of a single cloud network, in some embodiments the cloud network 102 comprises a plurality of cloud networks comprising communicatively coupled computing devices. Furthermore, while the cloud network 102 is shown in Figure 1B as having three of the computing devices 106, in some embodiments, the cloud network 102 comprises fewer (or more) than three computing devices 106.
[0059] The media playback system 100 is configured to receive media content from the networks 102 via the links 103. The received media content can comprise, for example, a Uniform Resource Identifier (URI) and/or a Uniform Resource Locator (URL). For instance, in some examples, the media playback system 100 can stream, download, or otherwise obtain data from a URI or a URL corresponding to the received media content. A network 104 communicatively couples the links 103 and at least a portion of the devices (for example, one or more of the playback devices 110, NMDs 120, and/or control devices 130) of the media playback system 100. The network 104 can include, for example, a wireless network (for example, a WI-FI network, a BLUETOOTH network, a Z-WAVE network, a ZIGBEE network, and/or other suitable wireless communication protocol network) and/or a wired network (for example, a network comprising Ethernet, Universal Serial Bus (USB), and/or another suitable wired communication). As those of ordinary skill in the art will appreciate, as used herein, “WI-FI” can refer to several different communication protocols including, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax, 802.11ay, 802.15, and so forth, transmitted at 2.4 Gigahertz (GHz), 5 GHz, and/or another suitable frequency.
[0060] In some embodiments, the network 104 comprises a dedicated communication network that the media playback system 100 uses to transmit messages between individual devices and/or to transmit media content to and from media content sources (for example, one or more of the computing devices 106). In certain embodiments, the network 104 is configured to be accessible only to devices in the media playback system 100, thereby reducing interference and competition with other household devices. In other embodiments, however, the network 104 comprises an existing household or commercial facility communication network (for example, a household or commercial facility WI-FI network). In some embodiments, the links 103 and the network 104 comprise one or more of the same networks. In some aspects, for example, the links 103 and the network 104 comprise a telecommunication network (for example, an LTE network, a 5G network, and so forth). Moreover, in some embodiments, the media playback system 100 is implemented without the network 104, and devices comprising the media playback system 100 can communicate with each other, for example, via one or more direct connections, PANs, telecommunication networks, and/or other suitable communication links. The network 104 may be referred to herein as a “local communication network” to differentiate the network 104 from the cloud network 102 that couples the media playback system 100 to remote devices, such as cloud servers that host cloud services.
[0061] In some embodiments, audio content sources may be regularly added or removed from the media playback system 100. In some embodiments, for example, the media playback system 100 performs an indexing of media items when one or more media content sources are updated, added to, and/or removed from the media playback system 100. The media playback system 100 can scan identifiable media items in some or all folders and/or directories accessible to the playback devices 110, and generate or update a media content database comprising metadata (for example, title, artist, album, track length, and so forth) and other associated information (for example, URIs, URLs, and so forth) for each identifiable media item found. In some embodiments, for example, the media content database is stored on one or more of the playback devices 110, network microphone devices 120, and/or control devices 130.
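The indexing behavior described above can be pictured with a small sketch: walk accessible directories, identify media items, and record a metadata entry for each. Everything below (the function name, extension list, and metadata fields) is a hypothetical illustration, not the system's actual indexing logic.

```python
import os

def index_media(root_dirs, extensions=(".mp3", ".flac", ".wav")):
    """Build a minimal media-content database mapping path -> metadata."""
    database = {}
    for root in root_dirs:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower().endswith(extensions):
                    path = os.path.join(dirpath, name)
                    # A real indexer would extract title/artist/album/track
                    # length from the file's tags; placeholders are used here.
                    database[path] = {"title": name, "uri": "file://" + path}
    return database
```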
[0062] In the illustrated embodiment of Figure 1B, the playback devices 110l and 110m comprise a group 107a. The playback devices 110l and 110m can be positioned in different rooms and be grouped together in the group 107a on a temporary or permanent basis based on user input received at the control device 130a and/or another control device 130 in the media playback system 100. When arranged in the group 107a, the playback devices 110l and 110m can be configured to play back the same or similar audio content in synchrony from one or more audio content sources. In certain embodiments, for example, the group 107a comprises a bonded zone in which the playback devices 110l and 110m comprise left audio and right audio channels, respectively, of multi-channel audio content, thereby producing or enhancing a stereo effect of the audio content. In some embodiments, the group 107a includes additional playback devices 110. In other embodiments, however, the media playback system 100 omits the group 107a and/or other grouped arrangements of the playback devices 110. Additional details regarding groups and other arrangements of playback devices are described in further detail below with respect to Figures 1I through 1M.
[0063] The media playback system 100 includes the NMDs 120a and 120b, each comprising one or more microphones configured to receive voice utterances from a user. In the illustrated embodiment of Figure 1B, the NMD 120a is a standalone device and the NMD 120b is integrated into the playback device 110n. The NMD 120a, for example, is configured to receive voice input 121 from a user 123. In some embodiments, the NMD 120a transmits data associated with the received voice input 121 to a voice assistant service (VAS) configured to (i) process the received voice input data and (ii) facilitate one or more operations on behalf of the media playback system 100.
[0064] In some aspects, for example, the computing device 106c comprises one or more modules and/or servers of a VAS (for example, a VAS operated by one or more of SONOS, AMAZON, GOOGLE, APPLE, MICROSOFT, and so forth). The computing device 106c can receive the voice input data from the NMD 120a via the network 104 and the links 103.
[0065] In response to receiving the voice input data, the computing device 106c processes the voice input data (that is, “Play Hey Jude by The Beatles”), and determines that the processed voice input includes a command to play a song (for example, “Hey Jude”). In some embodiments, after processing the voice input, the computing device 106c accordingly transmits commands to the media playback system 100 to play back “Hey Jude” by the Beatles from a suitable media service (for example, via one or more of the computing devices 106) on one or more of the playback devices 110. In other embodiments, the computing device 106c may be configured to interface with media services on behalf of the media playback system 100. In such embodiments, after processing the voice input, instead of the computing device 106c transmitting commands to the media playback system 100 causing the media playback system 100 to retrieve the requested media from a suitable media service, the computing device 106c itself causes a suitable media service to provide the requested media to the media playback system 100 in accordance with the user’s voice utterance.
b. Suitable Playback Devices
[0066] Figure 1C is a block diagram of the playback device 110a comprising an input/output 111. The input/output 111 can include an analog I/O 111a (for example, one or more wires, cables, and/or other suitable communication links configured to carry analog signals) and/or a digital I/O 111b (for example, one or more wires, cables, or other suitable communication links configured to carry digital signals). In some embodiments, the analog I/O 111a is an audio line-in input connection comprising, for example, an auto-detecting 3.5 mm audio line-in connection. In some embodiments, the digital I/O 111b comprises a Sony/Philips Digital Interface Format (S/PDIF) communication interface and/or cable and/or a Toshiba Link (TOSLINK) cable. In some embodiments, the digital I/O 111b comprises a High-Definition Multimedia Interface (HDMI) interface and/or cable. In some embodiments, the digital I/O 111b includes one or more wireless communication links comprising, for example, a radio frequency (RF), infrared, WI-FI, BLUETOOTH, or another suitable communication link. In certain embodiments, the analog I/O 111a and the digital I/O 111b comprise interfaces (for example, ports, plugs, jacks, and so forth) configured to receive connectors of cables transmitting analog and digital signals, respectively, without necessarily including cables.
[0067] The playback device 110a, for example, can receive media content (for example, audio content comprising music and/or other sounds) from a local audio source 105 via the input/output 111 (for example, a cable, a wire, a PAN, a BLUETOOTH connection, an ad hoc wired or wireless communication network, and/or another suitable communication link). The local audio source 105 can comprise, for example, a mobile device (for example, a smartphone, a tablet, a laptop computer, and so forth) or another suitable audio component (for example, a television, a desktop computer, an amplifier, a phonograph (such as an LP turntable), a Blu-ray player, a memory storing digital media files, and so forth). In some aspects, the local audio source 105 includes local music libraries on a smartphone, a computer, a network-attached storage (NAS), and/or another suitable device configured to store media files. In certain embodiments, one or more of the playback devices 110, NMDs 120, and/or control devices 130 comprise the local audio source 105. In other embodiments, however, the media playback system omits the local audio source 105 altogether. In some embodiments, the playback device 110a does not include an input/output 111 and receives all audio content via the network 104.
[0068] The playback device 110a further comprises electronics 112, a user interface 113 (for example, one or more buttons, knobs, dials, touch-sensitive surfaces, displays, touchscreens, and so forth), and one or more transducers 114 (referred to hereinafter as “the transducers 114”). The electronics 112 are configured to receive audio from an audio source (for example, the local audio source 105) via the input/output 111 or one or more of the computing devices 106a-c via the network 104 (Figure 1B), amplify the received audio, and output the amplified audio for playback via one or more of the transducers 114. In some embodiments, the playback device 110a optionally includes one or more microphones 115 (for example, a single microphone, a plurality of microphones, a microphone array) (hereinafter referred to as “the microphones 115”). In certain embodiments, for example, the playback device 110a having one or more of the optional microphones 115 can operate as an NMD configured to receive voice input from a user and correspondingly perform one or more operations based on the received voice input.
[0069] In the illustrated embodiment of Figure 1C, the electronics 112 comprise one or more processors 112a (referred to hereinafter as “the processors 112a”), memory 112b, software components 112c, a network interface 112d, one or more audio processing components 112g (referred to hereinafter as “the audio components 112g”), one or more audio amplifiers 112h (referred to hereinafter as “the amplifiers 112h”), and power 112i (for example, one or more power supplies, power cables, power receptacles, batteries, induction coils, Power-over-Ethernet (PoE) interfaces, and/or other suitable sources of electric power). In some embodiments, the electronics 112 optionally include one or more other components 112j (for example, one or more sensors, video displays, touchscreens, battery charging bases, and so forth).
[0070] The processors 112a can comprise clock-driven computing component(s) configured to process data, and the memory 112b can comprise a computer-readable medium (for example, a tangible, non-transitory computer-readable medium loaded with one or more of the software components 112c) configured to store instructions for performing various operations and/or functions. The processors 112a are configured to execute the instructions stored on the memory 112b to perform one or more of the operations. The operations can include, for example, causing the playback device 110a to retrieve audio data from an audio source (for example, one or more of the computing devices 106a-c (Figure 1B)) and/or from another one of the playback devices 110. In some embodiments, the operations further include causing the playback device 110a to send audio data to another one of the playback devices 110 and/or to another device (for example, one of the NMDs 120). Certain embodiments include operations causing the playback device 110a to pair with another of the one or more playback devices 110 to enable a multi-channel audio environment (for example, a stereo pair, a bonded zone, and so forth).
[0071] The processors 112a can be further configured to perform operations causing the playback device 110a to synchronize playback of audio content with another of the one or more playback devices 110. As those of ordinary skill in the art will appreciate, during synchronous playback of audio content on a plurality of playback devices, a listener will preferably be unable to perceive time-delay differences between playback of the audio content by the playback device 110a and by the one or more other playback devices 110. Additional details regarding audio playback synchronization among playback devices can be found, for example, in U.S. Patent 8,234,395, which was incorporated by reference above.
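The synchronization mechanics are detailed in U.S. Patent 8,234,395; purely as a hedged sketch of one generic approach (not necessarily the patented one), a coordinator can estimate each peer's clock offset and schedule playback to begin at a common future instant. The function name and offset-estimation assumption below are illustrative only.

```python
import time

def schedule_synchronized_start(clock_offsets, lead_time_s=0.5):
    """Return, per device, the local clock reading at which to start playback.

    clock_offsets maps device id -> (device clock - coordinator clock),
    as might be estimated by exchanging timestamped messages. Every device
    then begins at the same wall-clock instant, so a listener cannot
    perceive time-delay differences between players.
    """
    target = time.monotonic() + lead_time_s  # common start, shortly in the future
    return {dev: target + offset for dev, offset in clock_offsets.items()}

# Example: device "110m" runs 12 ms ahead of the coordinator's clock.
starts = schedule_synchronized_start({"110a": 0.0, "110m": 0.012})
print(starts)
```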
[0072] In some embodiments, the memory 112b is further configured to store data associated with the playback device 110a, such as one or more zones and/or zone groups of which the playback device 110a is a member, audio sources accessible to the playback device 110a, and/or a playback queue with which the playback device 110a (and/or another of the one or more playback devices) can be associated. The stored data can comprise one or more state variables that are periodically updated and used to describe a state of the playback device 110a. The memory 112b can also include data associated with a state of one or more of the other devices (for example, the playback devices 110, NMDs 120, control devices 130) of the media playback system 100. In some aspects, for example, the state data is shared during predetermined intervals of time (for example, every 5 seconds, every 10 seconds, every 60 seconds, and so forth) among at least a portion of the devices of the media playback system 100, so that one or more of the devices have the most recent data associated with the media playback system 100.
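As a minimal sketch of how such periodically shared state variables might be reconciled (assuming a simple last-writer-wins rule, which the text does not specify), consider the following; the field names and helper functions are hypothetical.

```python
import time

def build_state_snapshot(device_id, zone, group, queue_id):
    """Assemble the kind of state a playback device might share with peers."""
    return {
        "device": device_id,
        "zone": zone,               # zone membership
        "group": group,             # zone group membership, if any
        "playback_queue": queue_id,
        "updated_at": time.time(),  # used to keep only the freshest snapshot
    }

def merge_peer_state(local_view, snapshot):
    """Keep the most recent snapshot per device so every participant
    converges on the latest data for the whole system."""
    current = local_view.get(snapshot["device"])
    if current is None or snapshot["updated_at"] > current["updated_at"]:
        local_view[snapshot["device"]] = snapshot
    return local_view

view = {}
merge_peer_state(view, build_state_snapshot("110a", "Zone A", None, "queue-1"))
print(view)
```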
[0073] The network interface 112d is configured to facilitate a transmission of data between the playback device 110a and one or more other devices on a data network such as, for example, the links 103 and/or the network 104 (Figure 1B). The network interface 112d is configured to transmit and receive data corresponding to media content (for example, audio content, video content, text, photographs) and other signals (for example, non-transitory signals) comprising digital packet data including an Internet Protocol (IP)-based source address and/or an IP-based destination address. The network interface 112d can parse the digital packet data such that the electronics 112 properly receive and process the data destined for the playback device 110a.
[0074] In the illustrated embodiment of Figure 1C, the network interface 112d comprises one or more wireless interfaces 112e (referred to hereinafter as “the wireless interface 112e”). The wireless interface 112e (for example, a suitable interface comprising one or more antennae) can be configured to wirelessly communicate with one or more other devices (for example, one or more of the other playback devices 110, NMDs 120, and/or control devices 130) that are communicatively coupled to the network 104 (Figure 1B) in accordance with a suitable wireless communication protocol (for example, WI-FI, BLUETOOTH, LTE, and so forth). In some embodiments, the network interface 112d optionally includes a wired interface 112f (for example, an interface or receptacle configured to receive a network cable such as an Ethernet, a USB-A, USB-C, and/or Thunderbolt cable) configured to communicate over a wired connection with other devices in accordance with a suitable wired communication protocol. In certain embodiments, the network interface 112d includes the wired interface 112f and excludes the wireless interface 112e. In some embodiments, the electronics 112 exclude the network interface 112d altogether and transmit and receive media content and/or other data via another communication path (for example, the input/output 111).

[0075] The audio components 112g are configured to process and/or filter data comprising media content received by the electronics 112 (for example, via the input/output 111 and/or the network interface 112d) to produce output audio signals. In some embodiments, the audio processing components 112g comprise, for example, one or more digital-to-analog converters (DACs), audio preprocessing components, audio enhancement components, digital signal processors (DSPs), and/or other suitable audio processing components, modules, circuits, and so forth. In certain embodiments, one or more of the audio processing components 112g can comprise one or more subcomponents of the processors 112a. In some embodiments, the electronics 112 omit the audio processing components 112g. In some aspects, for example, the processors 112a execute instructions stored on the memory 112b to perform audio processing operations to produce the output audio signals.
[0076] The amplifiers 112h are configured to receive and amplify the audio output signals produced by the audio processing components 112g and/or the processors 112a. The amplifiers 112h can comprise electronic devices and/or components configured to amplify audio signals to levels sufficient for driving one or more of the transducers 114. In some embodiments, for example, the amplifiers 112h include one or more switching or class-D power amplifiers. In other embodiments, however, the amplifiers 112h include one or more other types of power amplifiers (for example, linear gain power amplifiers, class-A amplifiers, class-B amplifiers, class-AB amplifiers, class-C amplifiers, class-D amplifiers, class-E amplifiers, class-F amplifiers, class-G amplifiers, class-H amplifiers, and/or another suitable type of power amplifier). In certain embodiments, the amplifiers 112h comprise a suitable combination of two or more of the foregoing types of power amplifiers. Moreover, in some embodiments, individual ones of the amplifiers 112h correspond to individual ones of the transducers 114. In other embodiments, however, the electronics 112 include a single one of the amplifiers 112h configured to output amplified audio signals to a plurality of the transducers 114. In some other embodiments, the electronics 112 omit the amplifiers 112h.
[0077] The transducers 114 (for example, one or more speakers and/or speaker drivers) receive the amplified audio signals from the amplifiers 112h and render or output the amplified audio signals as sound (for example, audible sound waves having a frequency between about 20 hertz (Hz) and 20 kilohertz (kHz)). In some embodiments, the transducers 114 can comprise a single transducer. In other embodiments, however, the transducers 114 comprise a plurality of audio transducers. In some embodiments, the transducers 114 comprise more than one type of transducer. For example, the transducers 114 can include one or more low frequency transducers (for example, subwoofers, woofers), mid-range frequency transducers (for example, mid-range transducers, mid-woofers), and one or more high frequency transducers (for example, one or more tweeters). As used herein, “low frequency” can generally refer to audible frequencies below about 500 Hz, “mid-range frequency” can generally refer to audible frequencies between about 500 Hz and about 2 kHz, and “high frequency” can generally refer to audible frequencies above 2 kHz. In certain embodiments, however, one or more of the transducers 114 comprise transducers that do not adhere to the foregoing frequency ranges. For example, one of the transducers 114 may comprise a mid-woofer transducer configured to output sound at frequencies between about 200 Hz and about 5 kHz.
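The approximate frequency ranges above translate directly into a trivial classifier; the sketch below merely restates those thresholds in code and is not an implementation of any disclosed crossover network.

```python
def band_for_frequency(hz: float) -> str:
    """Classify an audible frequency using the approximate ranges above."""
    if hz < 500:
        return "low"        # e.g., handled by a woofer or subwoofer
    if hz <= 2000:
        return "mid-range"  # e.g., handled by a mid-range driver or mid-woofer
    return "high"           # e.g., handled by a tweeter

assert band_for_frequency(80) == "low"
assert band_for_frequency(1000) == "mid-range"
assert band_for_frequency(5000) == "high"
```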
[0078] By way of illustration, Sonos presently offers (or has offered) for sale certain playback devices including, for example, a “SONOS ONE”, “PLAY:1”, “PLAY:3”, “PLAY:5”, “PLAYBAR”, “PLAYBASE”, “CONNECT:AMP”, “CONNECT”, “AMP”, “PORT”, and “SUB”. Other suitable playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein. Additionally, one of ordinary skill in the art will appreciate that a playback device is not limited to the examples described herein or to Sonos product offerings. In some embodiments, for example, one or more playback devices 110 comprise wired or wireless headphones (for example, over-the-ear headphones, on-ear headphones, in-ear earphones, and so forth). In other embodiments, one or more of the playback devices 110 comprise a docking station and/or an interface configured to interact with a docking station for personal mobile media playback devices. In certain embodiments, a playback device may be integral to another device or component such as a television, an LP turntable, a lighting fixture, or some other device for indoor or outdoor use. In some embodiments, a playback device omits a user interface and/or one or more transducers. For example, Figure 1D is a block diagram of a playback device 110p comprising the input/output 111 and electronics 112 without the user interface 113 or transducers 114.
[0079] Figure 1E is a block diagram of a bonded playback device 110q comprising the playback device 110a (Figure 1C) sonically bonded with the playback device 110i (for example, a subwoofer) (Figure 1A). In the illustrated embodiment, the playback devices 110a and 110i are separate ones of the playback devices 110 housed in separate enclosures. In some embodiments, however, the bonded playback device 110q comprises a single enclosure housing both the playback devices 110a and 110i. The bonded playback device 110q can be configured to process and reproduce sound differently than an unbonded playback device (for example, the playback device 110a of Figure 1C) and/or paired or bonded playback devices (for example, the playback devices 110l and 110m of Figure 1B). In some embodiments, for example, the playback device 110a is a full-range playback device configured to render low frequency, mid-range frequency, and high frequency audio content, and the playback device 110i is a subwoofer configured to render low frequency audio content. In some aspects, the playback device 110a, when bonded with the playback device 110i, is configured to render only the mid-range and high frequency components of a particular audio content, while the playback device 110i renders the low frequency component of the particular audio content. In some embodiments, the bonded playback device 110q includes additional playback devices and/or another bonded playback device. Additional playback device embodiments are described in further detail below with respect to Figures 2A through 3D.

c. Suitable Network Microphone Devices (NMDs)
[0080] Figure 1F is a block diagram of the NMD 120a (Figures 1A and 1B). The NMD 120a includes one or more voice processing components 124 (hereinafter “the voice components 124”) and several components described with respect to the playback device 110a (Figure 1C) including the processors 112a, the memory 112b, and the microphones 115. The NMD 120a optionally comprises other components also included in the playback device 110a (Figure 1C), such as the user interface 113 and/or the transducers 114. In some embodiments, the NMD 120a is configured as a media playback device (for example, one or more of the playback devices 110), and further includes, for example, one or more of the audio components 112g (Figure 1C), the amplifiers 112h, and/or other playback device components. In certain embodiments, the NMD 120a comprises an Internet of Things (IoT) device such as, for example, a thermostat, alarm panel, fire and/or smoke detector, and so forth. In some embodiments, the NMD 120a comprises the microphones 115, the voice processing components 124, and only a portion of the components of the electronics 112 described above with respect to Figure 1C. In some aspects, for example, the NMD 120a includes the processor 112a and the memory 112b (Figure 1C), while omitting one or more other components of the electronics 112. In some embodiments, the NMD 120a includes additional components (for example, one or more sensors, cameras, thermometers, barometers, hygrometers, and so forth).
[0081] In some embodiments, an NMD can be integrated into a playback device. Figure 1G is a block diagram of a playback device 110r comprising an NMD 120d. The playback device 110r can comprise many or all of the components of the playback device 110a and further include the microphones 115 and voice processing components 124 (Figure 1F). The playback device 110r optionally includes an integrated control device 130c. The control device 130c can comprise, for example, a user interface (for example, the user interface 113 of Figure 1C) configured to receive user input (for example, touch input, voice input, and so forth) without a separate control device. In other embodiments, however, the playback device 110r receives commands from another control device (for example, the control device 130a of Figure 1B). Additional NMD embodiments are described in further detail below with respect to Figures 3A through 3F.
[0082] Referring again to Figure 1F, the microphones 115 are configured to acquire, capture, and/or receive sound from an environment (for example, the environment 101 of Figure 1A) and/or a room in which the NMD 120a is positioned. The received sound can include, for example, vocal utterances, audio played back by the NMD 120a and/or another playback device, background voices, ambient sounds, and so forth. The microphones 115 convert the received sound into electrical signals to produce microphone data. The voice processing components 124 receive and analyze the microphone data to determine whether a voice input is present in the microphone data. The voice input can comprise, for example, an activation word followed by an utterance including a user request. As those of ordinary skill in the art will appreciate, an activation word is a word or other audio cue signifying a user voice input. For instance, in querying the AMAZON VAS, a user might speak the activation word “Alexa”. Other examples include "Ok, Google" for invoking the GOOGLE VAS and "Hey, Siri" for invoking the APPLE VAS.
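As an illustrative sketch of splitting a transcribed voice input into its activation word and accompanying utterance (a simplification that assumes the activation word opens the transcript and ignores punctuation variants), consider:

```python
ACTIVATION_WORDS = ("alexa", "ok google", "hey siri")  # illustrative set only

def split_voice_input(transcript: str):
    """Split a transcript into (activation word, user request), if present."""
    lowered = transcript.lower()
    for word in ACTIVATION_WORDS:
        if lowered.startswith(word):
            request = transcript[len(word):].lstrip(" ,")
            return word, request
    return None, transcript  # no activation word detected

print(split_voice_input("Alexa, turn on the living room"))
# ('alexa', 'turn on the living room')
```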
[0083] After detecting the activation word, voice processing components 124 monitor the microphone data for an accompanying user request in the voice input. The user request may include, for example, a command to control a third-party device, such as a thermostat (for example, a NEST thermostat), an illumination device (for example, a PHILIPS HUE lighting device), or a media playback device (for example, a SONOS playback device). For example, a user might speak the activation word “Alexa” followed by the utterance “set the thermostat to 68 degrees” to set a temperature in a home (for example, the environment 101 of Figure 1A). The user might speak the same activation word followed by the utterance “turn on the living room” to turn on illumination devices in a living room area of the home. The user may similarly speak an activation word followed by a request to play a particular song, an album, or a playlist of music on a playback device in the home. Additional description regarding receiving and processing voice input data can be found in further detail below with respect to Figures 3A through 3F.

d. Suitable Control Devices
[0084] Figure 1H is a partial schematic diagram of the control device 130a (Figures 1A and 1B). As used herein, the term “control device” can be used interchangeably with “controller” or “control system”. Among other features, the control device 130a is configured to receive user input related to the media playback system 100 and, in response, cause one or more devices in the media playback system 100 to perform an action(s) or operation(s) corresponding to the user input. In the illustrated embodiment, the control device 130a comprises a smartphone (for example, an iPhone™, an Android phone, and so forth) on which media playback system controller application software is installed. In some embodiments, the control device 130a comprises, for example, a tablet (for example, an iPad™), a computer (for example, a laptop computer, a desktop computer, and so forth), and/or another suitable device (for example, a television, an automobile audio head unit, an IoT device, and so forth). In certain embodiments, the control device 130a comprises a dedicated controller for the media playback system 100. In other embodiments, as described above with respect to Figure 1G, the control device 130a is integrated into another device in the media playback system 100 (for example, one or more of the playback devices 110, NMDs 120, and/or other suitable devices configured to communicate over a network).
[0085] The control device 130a includes electronics 132, a user interface 133, one or more speakers 134, and one or more microphones 135. The electronics 132 comprise one or more processors 132a (referred to hereinafter as “the processors 132a”), a memory 132b, software components 132c, and a network interface 132d. The processor 132a can be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system 100. The memory 132b can comprise data storage that can be loaded with one or more of the software components executable by the processor 132a to perform those functions. The software components 132c can comprise applications and/or other executable software configured to facilitate control of the media playback system 100. The memory 132b can be configured to store, for example, the software components 132c, media playback system controller application software, and/or other data associated with the media playback system 100 and the user.
[0087] The network interface 132d is configured to facilitate network communications between the control device 130a and one or more other devices in the media playback system 100, and/or one or more remote devices. In some embodiments, the network interface 132d is configured to operate according to one or more suitable communication industry standards (for example, infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G, LTE, and so forth). The network interface 132d can be configured, for example, to transmit data to and/or receive data from the playback devices 110, the NMDs 120, other ones of the control devices 130, one of the computing devices 106 of Figure 1B, devices comprising one or more other media playback systems, and so forth. The transmitted and/or received data can include, for example, playback device control commands, state variables, playback zone and/or zone group configurations. For instance, based on user input received at the user interface 133, the network interface 132d can transmit a playback device control command (for example, volume control, audio playback control, audio content selection, and so forth) from the control device 130a to one or more of the playback devices 110. The network interface 132d can also transmit and/or receive configuration changes such as, for example, adding/removing one or more playback devices 110 to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or consolidated player, separating one or more playback devices from a bonded or consolidated player, among others. Additional description of zones and groups can be found below with respect to Figures 1I through 1M.
[0088] The user interface 133 is configured to receive user input and can facilitate control of the media playback system 100. The user interface 133 includes media content art 133a (for example, album art, lyrics, videos, and so forth), a playback status indicator 133b (for example, an elapsed and/or remaining time indicator), media content information region 133c, a playback control region 133d, and a zone indicator 133e. The media content information region 133c can include a display of relevant information (for example, title, artist, album, genre, release year, and so forth) about media content currently playing and/or media content in a queue or playlist. The playback control region 133d can include selectable (for example, via touch input and/or via a cursor or another suitable selector) icons to cause one or more playback devices in a selected playback zone or zone group to perform playback actions such as, for example, play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross fade mode, and so forth. The playback control region 133d may also include selectable icons to modify equalization settings, playback volume, and/or other suitable playback actions. In the illustrated embodiment, the user interface 133 comprises a display presented on a touch screen interface of a smartphone (for example, an iPhone™, an Android phone, and so forth). In some embodiments, however, user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.
[0089] The one or more speakers 134 (for example, one or more transducers) can be configured to output sound to the user of the control device 130a. In some embodiments, the one or more speakers comprise individual transducers configured to correspondingly output low frequencies, mid-range frequencies, and/or high frequencies. In some aspects, for example, the control device 130a is configured as a playback device (for example, one of the playback devices 110). Similarly, in some embodiments the control device 130a is configured as an NMD (for example, one of the NMDs 120), receiving voice commands and other sounds via the one or more microphones 135.
[0090] The one or more microphones 135 can comprise, for example, one or more condenser microphones, electret condenser microphones, dynamic microphones, and/or other suitable types of microphones or transducers. In some embodiments, two or more of the microphones 135 are arranged to capture location information of an audio source (for example, voice, audible sound, and so forth) and/or configured to facilitate filtering of background noise. Moreover, in certain embodiments, the control device 130a is configured to operate as a playback device and an NMD. In other embodiments, however, the control device 130a omits the one or more speakers 134 and/or the one or more microphones 135. For instance, the control device 130a may comprise a device (for example, a thermostat, an IoT device, a network device, and so forth) comprising a portion of the electronics 132 and the user interface 133 (for example, a touch screen) without any speakers or microphones. Additional control device embodiments are described in further detail below with respect to Figures 4A through 4D and 5.

e. Suitable Playback Device Configurations
[0091] Figures 1I through 1M show example configurations of playback devices in zones and zone groups. Referring first to Figure 1M, in one example, a single playback device may belong to a zone. For example, the playback device 110g in the second bedroom 101c (Figure 1A) may belong to Zone C. In some implementations described below, multiple playback devices may be “bonded” to form a “bonded pair”, which together form a single zone. For example, the playback device 110l (for example, a left playback device) can be bonded to the playback device 110m (for example, a right playback device) to form Zone B. Bonded playback devices may have different playback responsibilities (for example, channel responsibilities). In another implementation described below, multiple playback devices may be merged to form a single zone. For example, the playback device 110h (for example, a front playback device) may be merged with the playback device 110i (for example, a subwoofer), and the playback devices 110j and 110k (for example, left and right surround speakers, respectively) to form a single Zone D. In another example, the playback devices 110b and 110d can be merged to form a merged group or a zone group 108b. The merged playback devices 110b and 110d may not be specifically assigned different playback responsibilities. That is, the merged playback devices 110b and 110d may, aside from playing audio content in synchrony, each play audio content as they would if they were not merged.

[0092] Each zone in the media playback system 100 may be provided for control as a single user interface (UI) entity. For example, Zone A may be provided as a single entity named Master Bathroom. Zone B may be provided as a single entity named Master Bedroom. Zone C may be provided as a single entity named Second Bedroom.
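The bonded/merged distinction in paragraph [0091] can be captured in a small helper: bonded members carry explicit playback responsibilities, while merged members each render full-range audio. The sketch below is hypothetical; `form_zone` is not a disclosed API.

```python
def form_zone(members, mode, responsibilities=None):
    """Illustrative zone record for bonded versus merged playback devices."""
    if mode == "bonded":
        # Bonded devices (e.g., a stereo pair) each take a responsibility.
        if not responsibilities or set(responsibilities) != set(members):
            raise ValueError("bonded zones need a responsibility per member")
    elif mode == "merged":
        # Merged devices play full-range audio, constrained only to synchrony.
        responsibilities = {m: "FULL_RANGE" for m in members}
    else:
        raise ValueError("mode must be 'bonded' or 'merged'")
    return {"members": list(members), "mode": mode,
            "responsibilities": responsibilities}

zone_b = form_zone(["110l", "110m"], "bonded",
                   {"110l": "LEFT", "110m": "RIGHT"})
zone_a = form_zone(["110a", "110n"], "merged")
```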
[0093] Playback devices that are bonded may have different playback responsibilities, such as responsibilities for certain audio channels. For example, as shown in Figure 1I, the playback devices 110l and 110m may be bonded so as to produce or enhance a stereo effect of audio content. In this example, the playback device 110l may be configured to play a left channel audio component, while the playback device 110m may be configured to play a right channel audio component. In some implementations, such stereo bonding may be referred to as “pairing”.
[0094] Additionally, bonded playback devices may have additional and/or different respective speaker drivers. As shown in Figure 1J, the playback device 110h named Front may be bonded with the playback device 110i named SUB. The Front device 110h can be configured to render a range of mid to high frequencies and the SUB device 110i can be configured to render low frequencies. When unbonded, however, the Front device 110h can be configured to render a full range of frequencies. As another example, Figure 1K shows the Front and SUB devices 110h and 110i further bonded with Left and Right playback devices 110j and 110k, respectively. In some implementations, the Left and Right devices 110j and 110k can be configured to form surround or “satellite” channels of a home theater system. The bonded playback devices 110h, 110i, 110j, and 110k may form a single Zone D (Figure 1M).
[0095] Playback devices that are merged may not have assigned playback responsibilities, and may each render the full range of audio content that the respective playback device is capable of. Nevertheless, merged devices may be represented as a single UI entity (that is, a zone, as discussed above). For instance, the playback devices 110a and 110n in the master bathroom have the single UI entity of Zone A. In one embodiment, the playback devices 110a and 110n may each output, in synchrony, the full range of audio content that each respective playback device 110a and 110n is capable of rendering.
[0096] In some embodiments, an NMD is bonded or merged with another device so as to form a zone. For example, the NMD 120b may be bonded with the playback device 110e, which together form Zone F, named Living Room. In other embodiments, a stand-alone network microphone device may be in a zone by itself. In other embodiments, however, a stand-alone network microphone device may not be associated with a zone. Additional details regarding associating network microphone devices and playback devices as designated or default devices may be found, for example, in subsequently referenced U.S. Patent 10,499,146.
[0097] Zones of individual, bonded, and/or merged devices may be grouped to form a zone group. For example, referring to Figure 1M, Zone A may be grouped with Zone B to form a zone group 108a that includes the two zones. Similarly, Zone G may be grouped with Zone H to form the zone group 108b. As another example, Zone A may be grouped with one or more other Zones C-I. The Zones A-I may be grouped and ungrouped in numerous ways. For example, three, four, five, or more (for example, all) of the Zones A-I may be grouped. When grouped, the zones of individual and/or bonded playback devices may play back audio in synchrony with one another, as described in previously referenced U.S. Patent 8,234,395. Playback devices may be dynamically grouped and ungrouped to form new or different groups that synchronously play back audio content.
[0098] In various implementations, a zone group may be assigned a name that is the default name of a zone within the group or a combination of the names of the zones within the zone group. For example, the zone group 108b can be assigned a name such as “Dining + Kitchen”, as shown in Figure 1M. In some embodiments, a zone group may be given a unique name selected by a user.
[0099] Certain data may be stored in a memory of a playback device (for example, the memory 112b of Figure 1C) as one or more state variables that are periodically updated and used to describe the state of a playback zone, the playback device(s), and/or a zone group associated therewith. The memory may also include the data associated with the state of the other devices of the media system, and shared from time to time among the devices so that one or more of the devices have the most recent data associated with the system.
[0100] In some embodiments, the memory may store instances of various variable types associated with the states. Variable instances may be stored with identifiers (for example, tags) corresponding to type. For example, certain identifiers may be a first type “a1” to identify playback device(s) of a zone, a second type “b1” to identify playback device(s) that may be bonded in the zone, and a third type “c1” to identify a zone group to which the zone may belong. As a related example, identifiers associated with the second bedroom 101c may indicate that the playback device is the only playback device of the Zone C and not in a zone group. Identifiers associated with the den may indicate that the den is not grouped with other zones but includes bonded playback devices 110h-110k. Identifiers associated with the dining room may indicate that the dining room is part of the Dining + Kitchen zone group 108b and that devices 110b and 110d are grouped (Figure 1L). Identifiers associated with the kitchen may indicate the same or similar information by virtue of the kitchen being part of the Dining + Kitchen zone group 108b. Other example zone variables and identifiers are described below.
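Restating the identifier scheme above in code, a zone's state variables might be tagged by type as follows; the dictionary layout is purely illustrative and not a disclosed format.

```python
# Illustrative state variables keyed by the identifier types described above:
# "a1" = playback device(s) of the zone, "b1" = bonded device(s) in the zone,
# "c1" = zone group to which the zone belongs (None if ungrouped).
second_bedroom_state = {
    "a1": ["110g"],  # only playback device of Zone C
    "b1": [],        # no bonded devices
    "c1": None,      # not in a zone group
}

den_state = {
    "a1": ["110h", "110i", "110j", "110k"],
    "b1": ["110h", "110i", "110j", "110k"],  # home theater devices are bonded
    "c1": None,                              # den not grouped with other zones
}

dining_room_state = {
    "a1": ["110b"],
    "b1": [],
    "c1": "Dining + Kitchen",  # member of zone group 108b
}
```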
[0101] In yet another example, the memory may store variables or identifiers representing other associations of zones and zone groups, such as identifiers associated with areas, as shown in Figure 1M. An area may involve a cluster of zone groups and/or zones not within a zone group. For instance, Figure 1M shows an Upper Area 109a including Zones A-D and I, and a Lower Area 109b including Zones E-I. In one aspect, an area may be used to invoke a cluster of zone groups and/or zones that share one or more zones and/or zone groups of another cluster. In another aspect, this differs from a zone group, which does not share a zone with another zone group. Further examples of techniques for implementing areas may be found, for example, in U.S. Patent 10,712,997, filed 21 August 2017, and titled “Room Association Based on Name”, and U.S. Patent 8,483,853, filed 11 September 2007, and titled “Controlling and manipulating groupings in a multi-zone media system”. Each of these patents is incorporated herein by reference in its entirety. In some embodiments, the media playback system 100 may not implement areas, in which case the system may not store variables associated with areas.
III. Example Systems and Devices
[0102] Figure 2A is a front isometric view of a playback device 210 configured in accordance with aspects of the disclosed technology. Figure 2B is a front isometric view of the playback device 210 without a grille 216e. Figure 2C is an exploded view of the playback device 210. Referring to Figures 2A through 2C together, the playback device 210 comprises a housing 216 that includes an upper portion 216a, a right or first side portion 216b, a lower portion 216c, a left or second side portion 216d, the grille 216e, and a rear portion 216f. A plurality of fasteners 216g (for example, one or more screws, rivets, clips) attaches a frame 216h to the housing 216. A cavity 216j (Figure 2C) in the housing 216 is configured to receive the frame 216h and electronics 212. The frame 216h is configured to carry a plurality of transducers 214 (identified individually in Figure 2B as transducers 214a-f). The electronics 212 (for example, the electronics 112 of Figure 1C) are configured to receive audio content from an audio source and send electrical signals corresponding to the audio content to the transducers 214 for playback.
[0103] The transducers 214 are configured to receive the electrical signals from the electronics 112, and further configured to convert the received electrical signals into audible sound during playback. For instance, the transducers 214a-c (for example, tweeters) can be configured to output high frequency sound (for example, sound waves having a frequency greater than about 2 kHz). The transducers 214d-f (for example, mid-woofers, woofers, midrange speakers) can be configured to output sound at frequencies lower than the transducers 214a-c (for example, sound waves having a frequency lower than about 2 kHz). In some embodiments, the playback device 210 includes a number of transducers different than those illustrated in Figures 2A through 2C. For example, as described in further detail below with respect to Figures 3A through 3C, the playback device 210 can include fewer than six transducers (for example, one, two, three). In other embodiments, however, the playback device 210 includes more than six transducers (for example, nine, ten). Moreover, in some embodiments, all or a portion of the transducers 214 are configured to operate as a phased array to desirably adjust (for example, narrow or widen) a radiation pattern of the transducers 214, thereby altering a user’s perception of the sound emitted from the playback device 210.
[0104] In some examples, a filter is axially aligned with the transducer 214b. The filter can be configured to desirably attenuate a predetermined range of frequencies that the transducer 214b outputs to improve sound quality and a perceived sound stage output collectively by the transducers 214. In some embodiments, however, the playback device 210 omits the filter. In other embodiments, the playback device 210 includes one or more additional filters aligned with the transducer 214b and/or at least another of the transducers 214.
[0105] Figures 3A and 3B are front and right isometric side views, respectively, of an NMD 320 configured in accordance with embodiments of the disclosed technology. Figure 3C is an exploded view of the NMD 320. Figure 3D is an enlarged view of a portion of Figure 3B including a user interface 313 of the NMD 320. Referring first to Figures 3A through 3C, the NMD 320 includes a housing 316 comprising an upper portion 316a, a lower portion 316b and an intermediate portion 316c (for example, a grille). A plurality of ports, holes or apertures 316d in the upper portion 316a allow sound to pass through to one or more microphones 315 (Figure 3C) positioned within the housing 316. The one or more microphones 315 are configured to receive sound via the apertures 316d and produce electrical signals based on the received sound. In the illustrated embodiment, a frame 316e (Figure 3C) of the housing 316 surrounds cavities 316f and 316g configured to house, respectively, a first transducer 314a (for example, a tweeter) and a second transducer 314b (for example, a mid-woofer, a midrange speaker, a woofer). In other embodiments, however, the NMD 320 includes a single transducer, or more than two (for example, two, five, six) transducers. In certain embodiments, the NMD 320 omits the transducers 314a and 314b altogether.
[0106] Electronics 312 (Figure 3C) includes components configured to drive the transducers 314a and 314b, and further configured to analyze audio data corresponding to the electrical signals produced by the one or more microphones 315. In some embodiments, for example, the electronics 312 comprises many or all of the components of the electronics 112 described above with respect to Figure 1C. In certain embodiments, the electronics 312 includes components described above with respect to Figure 1F such as, for example, the one or more processors 112a, the memory 112b, the software components 112c, the network interface 112d, and so forth. In some embodiments, the electronics 312 includes additional suitable components (for example, proximity or other sensors).
[0107] Referring to Figure 3D, the user interface 313 includes a plurality of control surfaces (for example, buttons, knobs, capacitive surfaces) including a first control surface 313a (for example, a previous control), a second control surface 313b (for example, a next control), and a third control surface 313c (for example, a play and/or pause control) that can be adjusted by a user 323. A fourth control surface 313d is configured to receive touch input corresponding to activation and deactivation of the one or more microphones 315. A first indicator 313e (for example, one or more light emitting diodes (LEDs) or another suitable illuminator) can be configured to illuminate only when the one or more microphones 315 are activated. A second indicator 313f (for example, one or more LEDs) can be configured to remain solid during normal operation and to blink or otherwise change from solid to indicate a detection of voice activity. In some embodiments, the user interface 313 includes additional or fewer control surfaces and illuminators. In one embodiment, for example, the user interface 313 includes the first indicator 313e, omitting the second indicator 313f. Moreover, in certain embodiments, the NMD 320 comprises a playback device and a control device, and the user interface 313 comprises the user interface of the control device.
[0108] Referring to Figures 3A through 3D together, the NMD 320 is configured to receive voice commands from one or more adjacent users via the one or more microphones 315. As described above with respect to Figure 1 B, the one or more microphones 315 can acquire, capture, or record sound in a vicinity (for example, a region within 10 m or less of the NMD 320) and transmit electrical signals corresponding to the recorded sound to the electronics 312. The electronics 312 can process the electrical signals and can analyze the resulting audio data to determine a presence of one or more voice commands (for example, one or more activation words). In some embodiments, for example, after detection of one or more suitable voice commands, the NMD 320 is configured to transmit a portion of the recorded audio data to another device and/or a remote server (for example, one or more of the computing devices 106 of Figure 1 B) for further analysis. The remote server can analyze the audio data, determine an appropriate action based on the voice command, and transmit a message to the NMD 320 to perform the appropriate action. For instance, a user may speak “Sonos, play Michael Jackson”. The NMD 320 can, via the one or more microphones 315, record the user’s voice utterance, determine the presence of a voice command, and transmit the audio data having the voice command to a remote server (for example, one or more of the remote computing devices 106 of Figure 1 B, one or more servers of a VAS and/or another suitable service). The remote server can analyze the audio data and determine an action corresponding to the command. The remote server can then transmit a command to the NMD 320 to perform the determined action (for example, play back audio content related to Michael Jackson). The NMD 320 can receive the command and play back the audio content related to Michael Jackson from a media content source. As described above with respect to Figure 1 B, suitable content sources can include a device or storage communicatively coupled to the NMD 320 via a LAN (for example, the network 104 of Figure 1 B), a remote server (for example, one or more of the remote computing devices 106 of Figure 1 B), and so forth. In certain embodiments, however, the NMD 320 determines and/or performs one or more actions corresponding to the one or more voice commands without intervention or involvement of an external device, computer, or server.
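The record/detect/defer/act round trip described above can be summarized in a short sketch. The callables below are stand-ins for the device's detector, the remote VAS endpoint, and the playback engine; they are assumptions for illustration, not disclosed interfaces.

```python
def handle_recorded_audio(audio_frames, detect_activation, remote_analyze, play):
    """Record -> detect a voice command -> defer analysis to a remote
    server -> perform the action the server returns."""
    if not detect_activation(audio_frames):
        return  # no voice command present; nothing to transmit
    action = remote_analyze(audio_frames)  # e.g., {"command": "play", ...}
    if action.get("command") == "play":
        play(action["query"])

# Example with trivial stand-ins:
handle_recorded_audio(
    audio_frames=[b"..."],
    detect_activation=lambda frames: True,
    remote_analyze=lambda frames: {"command": "play",
                                   "query": "Michael Jackson"},
    play=lambda query: print(f"Playing audio content related to {query!r}"),
)
```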
[0109] Figure 3E is a functional block diagram showing additional features of the NMD 320 in accordance with aspects of the disclosure. The NMD 320 includes components configured to facilitate voice command capture including voice activity detector component(s) 312k, beam former components 312l, acoustic echo cancellation (AEC) and/or self-sound suppression components 312m, activation word detector components 312n, and voice/speech conversion components 312o (for example, voice-to-text and text-to-voice). In the illustrated embodiment of Figure 3E, the foregoing components 312k-312o are shown as separate components. In some embodiments, however, one or more of the components 312k-312o are subcomponents of the processors 112a.

[0110] The beamforming and self-sound suppression components 312l and 312m are configured to detect an audio signal and determine aspects of voice input represented in the detected audio signal, such as the direction, amplitude, frequency spectrum, and so forth. The voice activity detector components 312k are operably coupled with the beamforming and AEC components 312l and 312m and are configured to determine a direction and/or directions from which voice activity is likely to have occurred in the detected audio signal. Potential speech directions can be identified by monitoring metrics which distinguish speech from other sounds. Such metrics can include, for example, energy within the speech band relative to background noise and entropy within the speech band, which is a measure of spectral structure. As those of ordinary skill in the art will appreciate, speech typically has a lower entropy than most common background noise.
[0111] The activation word detector components 312n are configured to monitor and analyze received audio to determine if any activation words (for example, wake words) are present in the received audio. The activation word detector components 312n may analyze the received audio using an activation word detection algorithm. If the activation word detector 312n detects an activation word, the NMD 320 may process voice input contained in the received audio. Example activation word detection algorithms accept audio as input and provide an indication of whether an activation word is present in the audio. Many first- and third-party activation word detection algorithms are known and commercially available. For instance, operators of a voice service may make their algorithm available for use in third-party devices. Alternatively, an algorithm may be trained to detect certain activation words. In some embodiments, the activation word detector 312n runs multiple activation word detection algorithms on the received audio simultaneously (or substantially simultaneously). As noted above, different voice services (for example, AMAZON’S ALEXA, APPLE’S SIRI, or MICROSOFT’S CORTANA) can each use a different activation word for invoking their respective voice service. To support multiple services, the activation word detector 312n may run the received audio through the activation word detection algorithm for each supported voice service in parallel.

[0112] The speech/text conversion components 312o may facilitate processing by converting speech in the voice input to text. In some embodiments, the electronics 312 can include voice recognition software that is trained to a particular user or a particular set of users associated with a household. Such voice recognition software may implement voice-processing algorithms that are tuned to specific voice profile(s). Tuning to specific voice profiles may require less computationally intensive algorithms than traditional voice activity services, which typically sample from a broad base of users and diverse requests that are not targeted to media playback systems.
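To illustrate the parallel multi-service approach of paragraph [0111], one detection algorithm per supported voice service can be run concurrently over the same audio. The threading layout and detector callables below are assumptions for the sketch, not a disclosed design.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_activation_words(audio, detectors):
    """Run one activation word detector per supported voice service,
    roughly in parallel, and report which services matched."""
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        futures = {name: pool.submit(fn, audio)
                   for name, fn in detectors.items()}
        return [name for name, fut in futures.items() if fut.result()]

hits = detect_activation_words(
    b"raw-audio",
    {
        "ALEXA": lambda audio: True,     # stand-in detectors; real ones would
        "SIRI": lambda audio: False,     # wrap the first- or third-party
        "CORTANA": lambda audio: False,  # detection algorithms noted above
    },
)
print(hits)  # ['ALEXA']
```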
[0113] Figure 3F is a schematic diagram of an example voice input 328 captured by the NMD 320 in accordance with aspects of the disclosure. The voice input 328 can include an activation word portion 328a and a voice utterance portion 328b. In some embodiments, the activation word 328a can be a known activation word, such as “Alexa”, which is associated with AMAZON’S ALEXA. In other embodiments, however, the voice input 328 may not include an activation word. In some embodiments, a network microphone device may output an audible and/or visible response upon detection of the activation word portion 328a. In addition, or alternately, an NMD may output an audible and/or visible response after processing a voice input and/or a series of voice inputs.
[0114] The voice utterance portion 328b may include, for example, one or more spoken commands (identified individually as a first command 328c and a second command 328e) and one or more spoken keywords (identified individually as a first keyword 328d and a second keyword 328f). In one example, the first command 328c can be a command to play music, such as a specific song, album, playlist, and so forth. In this example, the keywords may be one or more words identifying one or more zones in which the music is to be played, such as the living room and the dining room shown in Figure 1A. In some examples, the voice utterance portion 328b can include other information, such as detected pauses (for example, periods of non-speech) between words spoken by a user, as shown in Figure 3F. The pauses may demarcate the locations of separate commands, keywords, or other information spoken by the user within the voice utterance portion 328b.

[0115] In some embodiments, the media playback system 100 is configured to temporarily reduce the volume of audio content that it is playing while detecting the activation word portion 328a. The media playback system 100 may restore the volume after processing the voice input 328, as shown in Figure 3F. Such a process can be referred to as ducking, examples of which are disclosed in U.S. Patent 10,499,146, which is incorporated by reference herein in its entirety.
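The ducking behavior of paragraph [0115] reduces, then restores, playback volume around a voice interaction. A minimal sketch follows, assuming a `set_volume` callable supplied by the playback engine (a hypothetical interface, not a disclosed one).

```python
class Ducker:
    """Lower playback volume while a voice input is captured, then restore."""

    def __init__(self, set_volume, duck_level=0.2):
        self._set_volume = set_volume  # applies a volume in the range 0.0-1.0
        self._duck_level = duck_level
        self._saved = None

    def duck(self, current_volume: float) -> None:
        # Called when the activation word portion is detected.
        self._saved = current_volume
        self._set_volume(min(current_volume, self._duck_level))

    def restore(self) -> None:
        # Called after the voice input has been processed.
        if self._saved is not None:
            self._set_volume(self._saved)
            self._saved = None

ducker = Ducker(set_volume=lambda v: print(f"volume -> {v:.1f}"))
ducker.duck(0.8)   # volume -> 0.2
ducker.restore()   # volume -> 0.8
```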
[0116] Figures 4A through 4D are schematic diagrams of a control device 430 (for example, the control device 130a of Figure 1H, a smartphone, a tablet, a dedicated control device, an IoT device, and/or another suitable device) showing corresponding user interface displays in various states of operation. A first user interface display 431a (Figure 4A) includes a display name 433a (that is, “Rooms”). A selected group region 433b displays audio content information (for example, artist name, track name, album art) of audio content played back in the selected group and/or zone. Group regions 433c and 433d display the corresponding group and/or zone name, and audio content information for audio content played back or next in a playback queue of the respective group or zone. An audio content region 433e includes information related to audio content in the selected group and/or zone (that is, the group and/or zone indicated in the selected group region 433b). A lower display region 433f is configured to receive touch input to display one or more other user interface displays. For example, if a user selects “Browse” in the lower display region 433f, the control device 430 can be configured to output a second user interface display 431b (Figure 4B) comprising a plurality of music services 433g (for example, Spotify, Radio by Tunein, Apple Music, Pandora, Amazon, TV, local music, line-in) through which the user can browse and from which the user can select media content for play back via one or more playback devices (for example, one of the playback devices 110 of Figure 1A). Alternatively, if the user selects “My Sonos” in the lower display region 433f, the control device 430 can be configured to output a third user interface display 431c (Figure 4C). A first media content region 433h can include graphical representations (for example, album art) corresponding to individual albums, stations, or playlists. A second media content region 433i can include graphical representations (for example, album art) corresponding to individual songs, tracks, or other media content. If the user selects a graphical representation 433j (Figure 4C), the control device 430 can be configured to begin play back of audio content corresponding to the graphical representation 433j and output a fourth user interface display 431d that includes an enlarged version of the graphical representation 433j, media content information 433k (for example, track name, artist, album), transport controls 433m (for example, play, previous, next, pause, volume), and indication 433n of the currently selected group and/or zone name.
[0117] Figure 5 is a schematic diagram of a control device 530 (for example, a laptop computer, a desktop computer). The control device 530 includes transducers 534, a microphone 535, and a camera 536. A user interface 531 includes a transport control region 533a, a playback status region 533c, a playback zone region 533b, a playback queue region 533d, and a media content source region 533e. The transport control region 533a comprises one or more controls for controlling media playback including, for example, volume, previous, play/pause, next, repeat, shuffle, track position, crossfade, equalization, and so forth. The media content source region 533e includes a listing of one or more media content sources from which a user can select media items for play back and/or adding to a playback queue.
[0118] The playback zone region 533b can include representations of playback zones within the media playback system 100 (Figures 1A and 1B). In some embodiments, the graphical representations of playback zones may be selectable to bring up additional selectable icons to manage or configure the playback zones in the media playback system, such as a creation of bonded zones, creation of zone groups, separation of zone groups, renaming of zone groups, and so forth. In the illustrated embodiment, a “group” icon is provided within each of the graphical representations of playback zones. The “group” icon provided within a graphical representation of a particular zone may be selectable to bring up options to select one or more other zones in the media playback system to be grouped with the particular zone. Once grouped, playback devices in the zones that have been grouped with the particular zone can be configured to play audio content in synchrony with the playback device(s) in the particular zone. Analogously, a “group” icon may be provided within a graphical representation of a zone group. In the illustrated embodiment, the “group” icon may be selectable to bring up options to deselect one or more zones in the zone group to be removed from the zone group. In some embodiments, the control device 530 includes other interactions and implementations for grouping and ungrouping zones via the user interface 531. In certain embodiments, the representations of playback zones in the playback zone region 533b can be dynamically updated as playback zone or zone group configurations are modified.
[0119] The playback status region 533c includes graphical representations of audio content that is presently being played, previously played, or scheduled to play next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the user interface, such as within the playback zone region 533b and/or the playback queue region 533d. The graphical representations may include track title, artist name, album name, album year, track length, and other relevant information that may be useful for the user to know when controlling the media playback system 100 via the user interface 531.
[0120] The playback queue region 533d includes graphical representations of audio content in a playback queue associated with the selected playback zone or zone group. In some embodiments, each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group. For instance, each audio item in the playback queue may comprise a uniform resource identifier (URI), a uniform resource locator (URL) or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source, possibly for playback by the playback device. In some embodiments, for example, a playlist can be added to a playback queue, in which information corresponding to each audio item in the playlist may be added to the playback queue. In some embodiments, audio items in a playback queue may be saved as a playlist. In certain embodiments, a playback queue may be empty, or populated but “not in use” when the playback zone or zone group is playing continuously streaming audio content, such as Internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations. In some embodiments, a playback queue can include Internet radio and/or other streaming audio content items and be “in use” when the playback zone or zone group is playing those items.
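By way of illustration only, the following minimal Python sketch models a playback queue of the kind described above, in which each item carries a URI and the queue can be populated but "not in use" while continuously streaming content plays. All class, field, and method names here are illustrative assumptions; the disclosure does not prescribe any particular API.

```python
# A minimal, hypothetical sketch of the playback queue described above.
# All names (AudioItem, PlaybackQueue, in_use) are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AudioItem:
    uri: str        # URI/URL used to find and/or retrieve the audio item
    title: str = ""

@dataclass
class PlaybackQueue:
    items: List[AudioItem] = field(default_factory=list)
    in_use: bool = True  # may be False while a continuous stream plays

    def add_playlist(self, playlist: List[AudioItem]) -> None:
        # Adding a playlist adds information corresponding to each of its items.
        self.items.extend(playlist)

    def save_as_playlist(self) -> List[AudioItem]:
        # Audio items in a playback queue may be saved back out as a playlist.
        return list(self.items)

# Example: a populated queue that is "not in use" while Internet radio plays.
queue = PlaybackQueue()
queue.add_playlist([AudioItem("http://example.com/track-1", "Track 1")])
queue.in_use = False
```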
[0121] When playback zones or zone groups are "grouped" or "ungrouped," playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or that contains a combination of audio items from both the first and second playback queues. Subsequently, if the established zone group is ungrouped, the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped. Similarly, the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped.
[0122] Figure 6 is a message flow diagram illustrating data exchanges between devices of the media playback system 100 (Figures 1A through 1M).
[0123] At step 650a, the media playback system 100 receives an indication of selected media content (for example, one or more songs, albums, playlists, podcasts, videos, stations) via the control device 130a. The selected media content can comprise, for example, media items stored locally on one or more devices (for example, the audio source 105 of Figure 1C) connected to the media playback system and/or media items stored on one or more media service servers (one or more of the remote computing devices 106 of Figure 1B). In response to receiving the indication of the selected media content, the control device 130a transmits a message 651a to the playback device 110a (Figures 1A through 1C) to add the selected media content to a playback queue on the playback device 110a.

[0124] At step 650b, the playback device 110a receives the message 651a and adds the selected media content to the playback queue for play back.
[0125] At step 650c, the control device 130a receives input corresponding to a command to play back the selected media content. In response to receiving the input corresponding to the command to play back the selected media content, the control device 130a transmits a message 651b to the playback device 110a causing the playback device 110a to play back the selected media content. In response to receiving the message 651b, the playback device 110a transmits a message 651c to the computing device 106a requesting the selected media content. The computing device 106a, in response to receiving the message 651c, transmits a message 651d comprising data (for example, audio data, video data, a URL, a URI) corresponding to the requested media content.
[0126] At step 650d, the playback device 110a receives the message 651d with the data corresponding to the requested media content and plays back the associated media content.
[0127] At step 650e, the playback device 110a optionally causes one or more other devices to play back the selected media content. In one example, the playback device 110a is one of a bonded zone of two or more players (Figure 1M). The playback device 110a can receive the selected media content and transmit all or a portion of the media content to other devices in the bonded zone. In another example, the playback device 110a is a coordinator of a group and is configured to transmit and receive timing information from one or more other devices in the group. The other one or more devices in the group can receive the selected media content from the computing device 106a, and begin playback of the selected media content in response to a message from the playback device 110a such that all of the devices in the group play back the selected media content in synchrony.
IV. Wireless Communication Profile Management
[0128]As disclosed above, the BLUETOOTH LE Audio specification improves the way that time-bounded data can be transferred between audio devices. The BLUETOOTH LE Audio specification supports bidirectional communication to a finite number of audio sink devices in accordance with the “Direct LE Audio” profile. The BLUETOOTH LE Audio specification also supports unidirectional broadcast communication to a potentially unlimited number of audio sink devices in accordance with the “Broadcast Audio” profile. The “Multi-Stream Audio” profile enables transmission of multiple, independent, synchronized audio streams between an audio source device and one or more audio sink devices. In general, a preferred audio communication profile for a given application will depend on an assessment of a user’s intended action, and perhaps also on the user’s operating environment. The profile is optionally defined in a data structure that can be used to initiate communication using a pre-established standard with known options set to particular values. The user’s intended action may be represented by, for example, a user-generated command. Disclosed herein are techniques that allow an audio device to intelligently choose an audio communication profile that is well-suited for a particular situation.
[0129] Figure 7 is a flowchart illustrating an example method 700 for selecting a communication technique for transmitting audio content between two playback devices. Method 700 includes a number of phases and sub-processes, the sequence of which may vary from one implementation to another. In some cases different operations may be performed in an overlapping fashion, particularly where the different overlapping operations are performed by different components. However, when considered in the aggregate, these phases and sub-processes are capable of selecting an audio content communication technique for a given application.
[0130] Method 700 commences when a command is received to group a first playback device with a second playback device. See reference numeral 701 in Figure 7. In some implementations the command is received at a control device, such as one of the control devices 130a-130c illustrated in Figure 1A. In other implementations the command is received at a playback device, such as one of the playback devices 110a-110r illustrated in Figures 1A through 1E and 1G. For example, the command may be received when a user performs a "press-and-hold" input on a user interface of the first or second playback device. In general, the command may be received at any computing device capable of processing such command as disclosed herein, and/or controlling one or more playback devices as a result of such processing. In some cases, the command may be the result of user input provided by a user interface, while in other cases the command may be automatically generated as a result of other preceding processing operations. The command includes one or more device identifiers, each of which identifies a corresponding playback device which is to receive and/or transmit audio content. The command optionally defines a group type for two or more playback devices. Example group types include a bonded group (in which first and second playback devices each play back different channels of multichannel audio content) and a synchrony group (in which first and second playback devices each play back the same set of one or more channels of multichannel audio content).
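As one illustration of the command structure just described, the following Python sketch models a grouping command that carries device identifiers, an optional group type, and an optional synchronization timing parameter. The field names and enumeration values are assumptions chosen for illustration only.

```python
# A hedged sketch of a grouping command as described above. Field names,
# GroupType values, and the timing parameter are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

class GroupType(Enum):
    BONDED = auto()     # members play different channels of multichannel audio
    SYNCHRONY = auto()  # members play the same set of one or more channels

@dataclass
class GroupCommand:
    device_ids: List[str]                      # devices to receive/transmit audio
    group_type: Optional[GroupType] = None     # optional group type designation
    max_sync_skew_ms: Optional[float] = None   # optional synchronization parameter

# Example: a press-and-hold on a bedroom speaker might produce a command like:
cmd = GroupCommand(["bedroom-left", "bedroom-right"], GroupType.BONDED)
```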
[0131] After receiving the command to group the first and second playback devices, a technique for communicating audio content to the first and second playback devices is selected. See reference numeral 702 in Figure 7. In some implementations, the selection is made by the control device or playback device which received the command, while in other implementations the command can be forwarded to another device for additional processing which results in selection of a communication technique.
[0132] The selection can be based on a group type specified in the command. In some cases the selection is additionally or alternatively based on a device type associated with the first and/or second playback device. In some cases the selection is additionally or alternatively based on the communication technique used to communicate audio content to playback devices in an existing synchrony group. Where a playback device is capable of receiving audio content according to first and second communication techniques (such as the WI-FI and BLUETOOTH communication protocols), the selection can be understood as a selection to transmit audio content to the playback device using either the first or the second communication technique. In some cases the selection can be based on a detected presence of a network resource, such as a known LAN (for example, as may be provided in a user’s home or office). The selection can be based on a synchronization timing parameter that should be satisfied, such as a maximum timing differential between playback devices. Such a synchronization timing parameter may be specified in the received command, or it may be preprogrammed as a default parameter. For example, a one-to-one communication technique (such as provided by the Direct LE Audio profile) tends to have tighter synchronization between two different playback devices than a broadcast communication technique (such as provided by the Broadcast Audio profile). More generally, the communication technique selection can be made with reference to a decision matrix, as will be described in turn.
[0133] Example communication techniques that may be selected include, but are not limited to, a communication technique associated with the BLUETOOTH Direct LE Audio profile, a communication technique associated with the BLUETOOTH Broadcast Audio profile, a communication technique associated with the BLUETOOTH A2DP, and a communication technique using WI-FI, such as might be implemented using a LAN. Other communication techniques may be selected in other implementations depending on, for example, information provided in the received command and/or information provided in the aforementioned decision matrix. For example, where the command identifies a device that cannot support communications in accordance with the BLUETOOTH LE Audio specification, a communication technique associated with the BLUETOOTH A2DP may be selected.
[0134] Figure 9 illustrates an example decision matrix 900 that can be used to select an audio content communication technique. In particular, decision matrix 900 associates various group types with various corresponding communication techniques. Thus, if the received command specifies a particular group type for the two or more playback devices, the communication technique associated with that group type, as specified in decision matrix 900, can be selected. In some cases the decision matrix can be preprogrammed and optionally subject to periodic updates.
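Because Figure 9 is not reproduced here, the following Python sketch merely paraphrases the five example rows discussed in the paragraphs that follow as a simple lookup structure; the actual contents of decision matrix 900 appear in Figure 9 and may differ.

```python
# A paraphrase of the example decision-matrix rows discussed in the text.
# Conditions and technique names follow the surrounding paragraphs; the
# authoritative contents of decision matrix 900 are shown in Figure 9.
DECISION_MATRIX = [
    # (condition on the received command/devices,      selected technique)
    ("bonded pair, LE Audio supported",                "BLUETOOTH Direct LE Audio profile"),
    ("bonded pair, a device supports only A2DP",       "BLUETOOTH A2DP"),
    ("synchrony group, finite/small device count",     "BLUETOOTH Direct LE Audio profile"),
    ("synchrony group, large or unknown device count", "BLUETOOTH Broadcast Audio profile"),
    ("quality beyond LE Audio, or no BLUETOOTH",       "WI-FI (for example, over a LAN)"),
]
```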
[0135] For example, with reference to the first row of decision matrix 900, where the received command specifies that first and second playback devices are to be grouped in a bonded pair, audio content can be communicated to the playback devices using the BLUETOOTH Direct LE Audio profile. This may occur where a user wishes to have two or more speakers each play different channels of multichannel audio content. Such a configuration is schematically illustrated in Figures 1E, 1I, and 10A. In particular, Figure 10A is a schematic diagram that illustrates formation of a bonded pair of playback devices. In this example, control device 130a receives a command 1011 specifying that the user wishes to connect master bedroom speakers in a bonded pair. Upon receiving command 1011, control device 130a makes a selection 1012 to communicate audio content to playback devices 110 using the BLUETOOTH Direct LE Audio profile. In this application, maintaining synchronous playback amongst a relatively smaller group of bonded playback devices benefits from the more robust bidirectional communication path, the relatively tighter synchronization, and the higher audio quality provided by the Direct LE Audio profile.
[0136] In some applications, the received command may specify that the first and second playback devices are to be grouped in a bonded pair, but one of the identified devices that is to be paired supports only the older BLUETOOTH A2DP instead of the preferred Direct LE Audio profile. In this case, and as reflected in the second row of decision matrix 900, the audio content can be communicated to the playback devices using the BLUETOOTH A2DP. The selected communication technique is thus at least partially based on the communication profiles that are supported (or not supported) by one or more of the devices identified in the received command.
[0137] The third row of decision matrix 900 represents a situation where the received command specifies that a finite number of playback devices are to be grouped in a synchrony group in which each of the grouped devices receives and plays back all channels of multichannel audio content. In this case, audio content can be communicated to the playback devices using the BLUETOOTH Direct LE Audio profile. Such a configuration is schematically illustrated in Figures 1B (group 107a), 1L, and 10B. In particular, Figure 10B is a schematic diagram that illustrates formation of a synchrony group of playback devices. In this example, control device 130a receives a command 1021 specifying that the user wishes to form a synchrony group with two (or more) sets of headphones. This might occur, for example, when a group of friends sitting together wish to watch video content on a shared screen, such as a tablet, but also wish to each use their own headphones to listen to the corresponding audio content. Upon receiving command 1021, control device 130a (for example, the tablet) makes a selection 1022 to communicate audio content to the playback devices 110 (for example, the friends' sets of headphones) using the BLUETOOTH Direct LE Audio profile. In this application, maintaining synchronous playback amongst a relatively smaller and finite group of playback devices benefits from the more robust bidirectional communication path provided by the Direct LE Audio profile. More specifically, the Direct LE Audio profile may be considered preferred in this application because, for example, (a) the number of paired devices is relatively small, that is, within the capabilities of the Direct LE Audio profile; and (b) the received command reflects an application wherein the number of paired devices is unlikely to increase beyond a maximum supported by the Direct LE Audio profile.
[0138] Another application where the selected communication technique uses the Direct LE Audio profile is illustrated in Figure 10D. In particular, Figure 10D is a schematic diagram that illustrates connecting a BLUETOOTH-only playback device 1044 to a synchrony group 1043 that receives audio content via WI-FI transmission. In this example, control device 130a receives a command 1041 specifying that the user wishes to connect playback device 1044 (such as a set of BLUETOOTH headphones) to existing synchrony group 1043. Even though the playback devices in synchrony group 1043 receive audio content from network 104 via WI-FI transmission, synchrony group 1043 includes at least one BLUETOOTH-enabled WI-FI playback device 1043'. Thus control device 130a makes a selection 1042 to cause playback device 1044 to receive audio content from BLUETOOTH-enabled WI-FI playback device 1043' using the BLUETOOTH Direct LE Audio profile. This can be implemented by sending command and control instructions from control device 130a to BLUETOOTH-only playback device 1044. Thus, by issuing a command that simply identifies the existing synchrony group 1043, the user can cause his/her BLUETOOTH-only playback device 1044 to synchronously join such group without concern for the particular communication techniques that will be used to implement such command. And such command can be generated, for example, with a "press-and-hold" input provided at playback device 1044. In particular, in certain implementations receipt of a command provided via a playback device user interface (such as the user interface 313 illustrated in Figures 3A through 3E) causes the playback device itself to select an appropriate communication technique. This allows the user to join an existing synchrony group without knowledge or concern for the particular communication technique used to accomplish such joinder. It also enables a BLUETOOTH-only playback device 1044 to seamlessly connect to a WI-FI based system.

[0139] In contrast, where the received command specifies that a large, or potentially unknown, number of playback devices are to be grouped in a synchrony group, audio content can be communicated to the playback devices using the Broadcast Audio BLUETOOTH profile. This situation is represented by the fourth row of decision matrix 900 illustrated in Figure 9, and is also schematically illustrated in Figure 10C. In particular, Figure 10C is a schematic diagram that illustrates formation of a synchrony group of playback devices that receives audio content in accordance with the Broadcast Audio BLUETOOTH profile. In this example, control device 130a receives a command 1031 specifying that the user wishes to create a large synchrony group with a quantity of playback devices 110 that is larger than that supported by the Direct LE Audio profile. This may occur, for example, where audio content is delivered to a playback device embedded in each seat in an aircraft. Or it may occur where a group of friends is having a party at the beach, and it is unknown how many attendees or associated playback devices will ultimately join the party. It may also occur where, for example, the bandwidth required for individual point-to-point data streams exceeds available bandwidth, even if the number of data streams is nominally supported by the Direct LE Audio profile.
[0140] Upon receiving command 1031, control device 130a makes a selection 1032 to communicate audio content to the playback devices 110 using the Broadcast Audio BLUETOOTH profile. The Broadcast Audio profile might also be used where the user initially creates a relatively small synchrony group but wishes to accommodate an unknown or indefinite number of additional playback devices to be subsequently added to the synchrony group. In an alternative implementation, the Broadcast Audio profile can be selected whenever the user creates a synchrony group, regardless of the number of devices to be included in the group. This eliminates any need for future switching of communication techniques and leverages the fact that the demand for tightly synchronized audio playback is typically lower in the context of a synchrony group than a bonded pair. In general, the benefits associated with the large number of audio sinks supported by the Broadcast Audio profile outweigh the need for a rich bidirectional communication path or the tighter audio synchronization associated with Direct LE Audio.

[0141] In some cases the received command may indicate a required minimum audio playback quality that must be provided at one or more of the playback devices. More specifically, the received command may indicate that the user wishes to form a group of playback devices (either a bonded group or a synchrony group) capable of providing audio playback with a specified audio quality that cannot be provided by the BLUETOOTH LE Audio specification. Alternatively, the received command may identify a playback device that does not support BLUETOOTH communication. In either case, as represented by the fifth row of decision matrix 900, the selected communication technique may use WI-FI technology. Thus it will be appreciated that the selected communication technique does not necessarily use the BLUETOOTH communication protocol. For example, the BLUETOOTH communication protocol is one example of a PAN protocol, while the WI-FI communication protocol is one example of a LAN protocol.
[0142] In contrast, because of the higher power consumption requirements associated with WI-FI communications (as compared to BLUETOOTH communications), if the received command indicates a user preference to reduce power consumption, this may suggest selecting BLUETOOTH communications instead of WI-FI communications for a given application. More generally, in some cases the received command may indicate a preference to operate in an energy conservation mode, such as for portable playback applications, and the resulting selection of a communication technique may be based at least in part on such preference.
[0143] In some cases the selected communication technique may depend on the type of audio content that is to be communicated. For example, when one of the playback devices is used as a satellite speaker in a home theater system, the data packets transmitted to the satellite are relatively smaller than the data packets transmitted in a music-only application. In particular, a home theater system employs smaller data packets to maintain lip synchrony with corresponding video playback. In some cases it may be preferred to use the Direct LE Audio profile for communication in a home theater application because of the tighter synchronization such profile provides as compared to the Broadcast Audio profile. In the context of a home theater implementation, the selected communication technique may dictate whether audio content is transmitted from a sound bar using a front haul network (such as when communicating audio content to a bonded group of satellites) or a back haul network (such as when communicating audio content to a playback device that is not part of a bonded group).
[0144] Referring again to Figure 7, once the communication technique is selected, it can be used to communicate audio content to the first and second playback devices. See reference numeral 703 in Figure 7. Once the audio content is communicated to the playback devices, playback optionally begins. For example, the entity that selects the preferred communication technique can act as a group coordinator that sources audio content, sends the audio content over an established connection via the selected communication technique, and initiates synchronous playback. In other embodiments, the entity that selects the preferred communication technique optionally delegates additional functionality to another entity, such as one of the first and/or second playback devices.
[0145] In certain implementations the communication technique may be selected based only on user identification of one or more playback devices and/or a group type (bonded or synchrony). In such case the user need not be familiar with the various benefits or drawbacks of different communication techniques for a given application, as the framework disclosed herein can select a preferred technique based on an evaluation of the user’s indicated intent (for example, an intent to establish a bonded group) and without further user involvement. Indeed, once the command is submitted, playback may be initiated without the user even knowing which communication technique is being used. This simplifies and streamlines the user experience, eliminating any need for familiarity with various communication techniques, while simultaneously providing the user with an audio playback experience that leverages the benefits of available technologies.
[0146] The techniques disclosed herein can be implemented without regard to whether or not the devices which are to receive audio content are already connected to an existing network, such as a LAN. For example, a user may start audio playback using a BLUETOOTH LE Audio connection to a first playback device, then simply press-and-hold the play/pause button (or other similar user input) on another playback device to automatically have the second playback device join the first playback device for synchronous playback of the audio content. Messages transmitted using the BLUETOOTH LE Audio profile can be used to negotiate when a playback device is to function as a soft access point (AP) (sometimes referred to as a "software AP"); create a network for other playback devices to join; and/or pass other control, volume, battery status, and configuration messages. These messages can also be used to establish communication using a selected protocol based on the techniques disclosed herein.
[0147] Additional details regarding formation of playback groups in the absence of an existing network connection are disclosed in U.S. Patent Application Publication US 2023/0409280 A1 (published 21 December 2023), the entirety of which is hereby incorporated by reference herein. Additional details regarding methods for seamlessly transitioning from a first communication technique (for example, via WI-FI over a LAN) to a second communication technique (for example, via a BLUETOOTH LE Audio connection) are disclosed in International Patent Application Publication WO 2023/039294 A2 (published 16 March 2023), the entirety of which is hereby incorporated by reference herein. The techniques disclosed in these patent applications can be used for, among other things, device identification, device and/or network setup, inter-device communication, and providing location and/or proximity information. These techniques can also be used to forward a command received via a user interface provided on a first playback device to a second playback device.
[0148] Figure 8 is a flowchart illustrating another example method 800 for synchronous playback of audio content at two or more playback devices, wherein the audio content is communicated between at least two playback devices using a communication technique that is selected based on user input. Method 800 includes a number of phases and sub-processes, the sequence of which may vary from one implementation to another. In some cases different operations may be performed in an overlapping fashion, particularly where the different overlapping operations are performed by different components. However, when considered in the aggregate, these phases and sub-processes are capable of selecting and modifying an audio content communication technique for a given application.

[0149] Method 800 commences when a first playback device 881 receives a command 820 to group first playback device 881 with a second playback device 882. See reference numeral 801 in Figure 8. In some implementations command 820 includes one or more device identifiers 821 and a group type identifier 822. For example, device identifiers 821 may identify one or more of first playback device 881 or second playback device 882. Group type identifier 822 may specify, for example, a bonded group (in which first playback device 881 and second playback device 882 each receive and play back different channels of multichannel audio content) or a synchrony group (in which first playback device 881 and second playback device 882 each receive and play back all channels of multichannel audio content). Command 820 may identify master bedroom speakers and specify a bonded pair (see, for example, command 1011 illustrated in Figure 10A), may identify two sets of headphones and specify a finite synchrony group (see, for example, command 1021 illustrated in Figure 10B), may identify several portable playback devices and specify a finite or indefinitely large synchrony group (see, for example, command 1031 in Figure 10C), or may identify a new playback device that is to be added to an existing synchrony group (see, for example, command 1041 in Figure 10D).
[0150] After receiving command 820, first playback device 881 selects a technique for transmitting audio content 830 to second playback device 882. See reference numeral 802 in Figure 8. As described above in connection with reference numeral 702 in Figure 7, the selection can be based on, for example, (a) capabilities of one or more devices identified by device identifier 821 and/or (b) group type 822. In some cases the selection is made with reference to a decision matrix (such as decision matrix 900 illustrated in Figure 9) that is saved at first playback device 881 or another networked location. Example communication techniques include, but are not limited to, communication using the BLUETOOTH Direct LE Audio profile, communication using the BLUETOOTH Broadcast Audio profile, communication using the BLUETOOTH A2DP, and communication using WI-FI.
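A minimal sketch of this selection step follows. The capability flags, the Direct LE Audio device limit, and the string labels are assumptions chosen purely for illustration; they are not values taken from the disclosure or from the BLUETOOTH specifications.

```python
# A hedged sketch of the selection step (reference numeral 802). The
# capability flags and the direct_le_limit threshold are hypothetical.
from typing import Dict, List

def select_technique(devices: List[Dict], group_type: str,
                     direct_le_limit: int = 2) -> str:
    if any(not d.get("bluetooth", True) for d in devices):
        return "WI-FI"                       # a device lacks BLUETOOTH support
    if any(not d.get("le_audio", True) for d in devices):
        return "BLUETOOTH A2DP"              # fall back to the older profile
    if group_type == "synchrony" and len(devices) > direct_le_limit:
        return "BLUETOOTH Broadcast Audio"   # large group favors broadcast
    return "BLUETOOTH Direct LE Audio"       # small bonded/synchrony group

# Example: two LE Audio headphones joining a finite synchrony group.
print(select_technique([{"le_audio": True}, {"le_audio": True}], "synchrony"))
```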
[0151] Once the selection is made, first playback device 881 uses the selected communication technique to send audio content 830 to second playback device 882. See reference numeral 803 in Figure 8. Second playback device 882 in turn receives audio content 830 from first playback device 881. See reference numeral 804 in Figure 8. At this point, first playback device 881 and second playback device 882 together provide synchronous playback of audio content 830 at the respective playback devices. See reference numeral 805 in Figure 8.
[0152] As further illustrated in Figure 8, a number of additional operations are optionally performed, as indicated by broken lines, after the synchronous playback has begun. For example, in some implementations first playback device 881 receives a subsequent command 840 to extend playback to one or more additional playback devices 883. See reference numeral 806 in Figure 8. The subsequent command 840 includes one or more device identifiers 841 associated with the corresponding one or more additional playback devices 883. The subsequent command 840 optionally specifies a group type 842 associated with an extended playback group that includes the one or more additional playback devices 883. In an alternative implementation, the subsequent command 840 omits any group type designation, in which case the group type may be automatically selected based on, for example, a total quantity of playback devices in the extended playback group.
[0153] After receiving subsequent command 840, first playback device 881 reassesses the previously selected communication technique. See reference numeral 807 in Figure 8. This reassessment may result in a newly selected communication technique that is different from the previously selected communication technique. Such reassessment may be performed in similar fashion to that described above with respect to the initial selection made after receiving command 820, although the subsequent reassessment may result in identification of a different preferred communication technique due to, for example, a different number of playback devices that are to be grouped, or different capabilities of the playback devices that are to be grouped. For instance, if the quantity of additional playback devices identified in subsequent command 840 makes it impossible to communicate audio content using the Direct LE Audio profile, the reassessment may result in a determination that communication pursuant to the Broadcast Audio profile is preferred. Or, if one or more of additional playback devices 883 do not support the previously selected communication technique, then the reassessment may result in a determination that an older or more widely supported communication technique, such as communication pursuant to the A2DP, is preferred.
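The reassessment can simply re-run the selection logic over the extended group, as in the sketch below, which assumes a selector function such as the hypothetical select_technique() shown earlier; both the signature and the behavior are illustrative assumptions.

```python
# A sketch of the reassessment step (reference numeral 807). The selector
# argument stands in for whatever selection logic (for example, the
# hypothetical select_technique() above) was used for the initial choice.
from typing import Callable, Dict, List

def reassess(current: str, devices: List[Dict], group_type: str,
             selector: Callable[[List[Dict], str], str]) -> str:
    new = selector(devices, group_type)
    if new != current:
        # For example, Direct LE Audio -> Broadcast Audio when the group
        # grows too large, or -> A2DP when an added device lacks LE Audio.
        return new
    return current
```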
[0154] Once the reassessment occurs, first playback device 881 uses the newly selected communication technique to send audio content 850 to second playback device 882 and the one or more additional playback devices 883. See reference numeral 808 in Figure 8. Second playback device 882 in turn receives audio content 850 from first playback device 881. See reference numeral 809 in Figure 8. And the one or more additional playback devices 883 receive audio content 850 from first playback device 881. See reference numeral 810 in Figure 8. At this point, first playback device 881, second playback device 882, and the one or more additional playback devices 883 together provide synchronous playback of audio content 850 at the respective playback devices. See reference numeral 811 in Figure 8.
[0155] In certain implementations the capabilities of the playback devices that affect the communication technique selection include not only the capabilities of the receiving playback devices, but also the capabilities of the sending playback device and/or another playback device that is not presently involved in distribution of the audio content. For example, consider a user who is playing back audio content on a phone, but wishes to play back the audio content using a home theater system, either in addition to or instead of the phone. If the user inputs a press-and-hold command on a satellite playback device of the home theater system, a primary playback device of the home theater system can be configured to respond to such command by receiving, decoding, and transmitting audio content to the satellite playback device, even though the primary playback device does not actually participate in the playback of any audio content.
[0156] In an alternative implementation, the subsequently received command 840 specifies that first playback device 881 is to be excluded from subsequent audio playback, such that first playback device 881 functions as an audio source, but does not actually provide playback of audio content 850. Likewise, it should be appreciated that when the reassessment identifies a second communication technique that is preferred vis-à-vis the one or more additional playback devices 883, an alternative implementation may involve audio content being communicated to first playback device 881 using the initially identified communication technique (for example, using a BLUETOOTH LE Audio profile), while audio content is concurrently communicated to the one or more additional playback devices 883 using the second communication technique (for example, using WI-FI or a different BLUETOOTH LE Audio profile).
[0157] While method 800 illustrates first playback device 881 receiving commands 820, 840, as indicated in method 700, such commands need not be received by a playback device that ultimately plays back the communicated audio content. For example, as illustrated in Figure 11A, in one alternative implementation control device 130a (such as a smartphone) sends commands to a primary playback device 1101 that subsequently communicates the audio content to one or more secondary playback devices which form an "off-LAN" synchrony group 1102. Control device 130a can communicate with primary playback device 1101 using the A2DP or the Direct LE Audio profile, while primary playback device 1101 can communicate with the secondary playback devices in synchrony group 1102 using the Broadcast Audio profile. Or, as illustrated in Figure 11B, in another alternative implementation control device 130a (such as a smartphone) sends commands to primary playback device 1101 which forms a bonded stereo pair with a secondary playback device 1103. In this case, control device 130a can communicate with primary playback device 1101 using the A2DP or the Direct LE Audio profile, while primary playback device 1101 can communicate with secondary device 1103 using the Direct LE Audio profile. As illustrated, primary playback device 1101 is configured to receive and play back a Left Channel of multichannel audio content, while secondary playback device 1103 is configured to receive and play back a Right Channel of the multichannel audio content. Other configurations can be used in other implementations depending on specific user requirements as reflected in a received command.
[0158] In the example implementations illustrated in Figures 11A and 11B, primary playback device 1101 acts as both an audio sink (it receives audio content from control device 130a) and an audio source (it transmits the audio content to the one or more secondary playback devices). In Figure 11B, the playback device 1101 can be seen as an intermediary that effectively creates a bonded pair with secondary playback device 1103 via two distinct BLUETOOTH connections. In this case, the selected communication technique for transmitting audio content to the one or more secondary playback devices 1103 optionally depends on the type of control device 130a and the stream quality that control device 130a is able to provide. The selected communication technique for transmitting audio content to the one or more secondary playback devices 1103 additionally or alternatively may depend on remaining available bandwidth, given the bandwidth consumed by the link between control device 130a and primary playback device 1101. For example, if the A2DP is used to transmit audio content to primary playback device 1101, this will consume relatively more power and bandwidth than a BLUETOOTH LE connection. This, in turn, may affect what communication technique is selected for communications with the one or more secondary playback devices 1103. In particular, if the A2DP is used to transmit audio content to primary playback device 1101, then it may be desired to select a communication technique associated with secondary playback device 1103 that consumes a reduced amount of bandwidth and/or power, such as a BLUETOOTH LE Audio communication profile.
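The upstream/downstream trade-off described above might be captured with a simple budget check, as in the sketch below. The cost fractions, threshold, and function name are illustrative placeholders rather than measured values or a prescribed algorithm.

```python
# An illustrative sketch of the upstream/downstream trade-off described
# above. The budget fractions are placeholders, not measured values.
UPSTREAM_COST = {"A2DP": 0.7, "Direct LE Audio": 0.3}  # hypothetical fractions

def pick_downstream_profile(upstream_profile: str, budget: float = 1.0) -> str:
    remaining = budget - UPSTREAM_COST.get(upstream_profile, 0.5)
    if remaining < 0.5:
        # Little headroom left: prefer a lower-bandwidth, lower-power
        # BLUETOOTH LE Audio profile for the secondary-device link.
        return "BLUETOOTH LE Audio profile"
    return "BLUETOOTH A2DP or LE Audio profile"

print(pick_downstream_profile("A2DP"))  # -> BLUETOOTH LE Audio profile
```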
[0159] As noted above, the techniques disclosed herein allow a playback device to intelligently choose an audio communication technique that is well-suited for a particular implementation. This is accomplished by assessing a user's intended action, choosing a communication technique capable of handling the user request, and establishing a connection in accordance with the chosen technique. For example, audio content may be communicated using the BLUETOOTH Direct LE Audio profile, the BLUETOOTH Broadcast Audio profile, the BLUETOOTH A2DP, or WI-FI. User knowledge of the particular audio content communication technique is not required, and in some implementations the selected communication technique may be modified without notifying the user. Regardless of the particular communication technique selected for a given application, choosing a communication profile based on an evaluation of the user's intended action, and/or based on a prediction or assessment of how the connected devices will be used, allows the user to reap the benefits of the most appropriate profile for a given application, thus enhancing user experience.
V. Conclusion
[0160] The above discussions relating to playback devices, controller devices, playback zone configurations, and media content sources provide only some examples of operating environments within which functions and methods described below may be implemented. Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein may also be applicable and suitable for implementation of the functions and methods.
[0161] The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only ways to implement such systems, methods, apparatus, and/or articles of manufacture.
[0162] Additionally, references herein to "embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one example embodiment of an invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As such, the embodiments described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other embodiments.
[0163] The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood to those skilled in the art that certain embodiments of the present disclosure can be practiced without certain, specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments.
[0164] When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
VI. Example Features
[0165] The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.
[0166] (Feature 1) A first playback device comprises one or more processors. The first playback device further comprises one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network. The first playback device further comprises at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors such that the first playback device is configured to receive a command specifying that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using at least one communication protocol. The program instructions are executable by the one or more processors such that the first playback device is further configured to, based on a group type for the group, and further based on the at least one communication protocol, select a technique for communicating with the second playback device. The program instructions are executable by the one or more processors such that the first playback device is further configured to transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected technique. The program instructions are executable by the one or more processors such that the first playback device is further configured to play back the audio content in synchrony with playback of the audio content by the second playback device.
[0167] (Feature 2) The first playback device of Feature 1, wherein the command identifies the second playback device and specifies the group type for the group.

[0168] (Feature 3) The first playback device of Feature 1 or 2, wherein (a) the group type is a synchronous playback group in which the first and second playback devices each play back all channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) the selected technique for communicating with the second playback device comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
[0169] (Feature 4) The first playback device of Feature 1 or 2, wherein (a) the group type is a bonded playback group in which the first and second playback devices each play back different channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) the selected technique for communicating with the second playback device comprises transmitting packets of the audio content that are addressed to the second playback device.
[0170] (Feature 5) The first playback device of any preceding Feature, wherein (a) the command further specifies a quantity of playback devices in the group; and (b) the technique for communicating is selected further based on the quantity of playback devices in the group.
[0171] (Feature 6) The first playback device of Feature 5, wherein the selected technique is a one-to-one communication technique when the quantity is below a threshold quantity associated with the at least one communication protocol.
[0172] (Feature 7) The first playback device of Feature 5, wherein the selected technique is a broadcast communication technique when the quantity is above a threshold quantity associated with the at least one communication protocol.
[0173] (Feature 8) The first playback device of any preceding Feature, wherein (a) the command further specifies a proximity between the first and second playback devices; and (b) the technique for communicating is selected further based on the proximity.
[0174] (Feature 9) The first playback device of any preceding Feature, wherein (a) the command further specifies an audio source from which the first playback device acquires the audio content; and (b) the technique for communicating is selected further based on the audio source.

[0175] (Feature 10) The first playback device of Feature 1 or 2, wherein the technique for communicating with the second playback device is (a) communicating via a WI-FI connection or (b) communicating via a BLUETOOTH connection.
[0176] (Feature 11) The first playback device of any preceding Feature, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected technique for communicating with the second playback device.
[0177] (Feature 12) The first playback device of any preceding Feature, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that (a) associates a first combination of a first communication protocol and a first characterizing feature of the second playback device with a first technique for communicating with the second playback device; and (b) associates a second combination of a second communication protocol and a second characterizing feature of the second playback device with a second technique for communicating with the second playback device.
[0178] (Feature 13) The first playback device of any one of Features 1 to 10, wherein (a) the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected technique for communicating with the second playback device; and (b) selecting the technique for communicating with the second playback device comprises looking up the characterizing feature and the group type in the data structure.
[0179] (Feature 14) The first playback device of Feature 1 or 2, wherein the group type is selected from (a) a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or (b) a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
[0180] (Feature 15) A first playback device comprises one or more processors. The first playback device further comprises one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network. The first playback device further comprises at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors such that the first playback device is configured to receive a command specifying that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using a BLUETOOTH communication protocol. The program instructions are executable by the one or more processors such that the first playback device is further configured to, based on a group type for the group, select a profile for communicating with the second playback device using the BLUETOOTH communication protocol. The program instructions are executable by the one or more processors such that the first playback device is further configured to transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected profile of the BLUETOOTH communication protocol. The program instructions are executable by the one or more processors such that the first playback device is further configured to play back the audio content in synchrony with playback of the audio content by the second playback device.
[0181] (Feature 16) The first playback device of Feature 15, wherein the command identifies the second playback device and specifies the group type for the group.
[0182] (Feature 17) The first playback device of Feature 15 or 16, wherein (a) the group type is a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content; and (b) the selected profile for communicating with the second playback device comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
[0183] (Feature 18) The first playback device of Feature 15 or 16, wherein (a) the group type is a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; and (b) the selected profile for communicating with the second playback device comprises transmitting packets of the audio content that are addressed to the second playback device.
[0184] (Feature 19) The first playback device of any one of Features 15 to 18, wherein (a) the command further specifies a quantity of playback devices in the group; and (b) the profile for communicating is selected further based on the quantity of playback devices in the group.
[0185] (Feature 20) The first playback device of Feature 19, wherein the selected profile provides a one-to-one communication technique when the quantity is below a threshold quantity associated with the BLUETOOTH communication protocol.
[0186] (Feature 21) The first playback device of Feature 19, wherein the selected profile provides a broadcast communication technique when the quantity is above a threshold quantity associated with the BLUETOOTH communication protocol.
[0187] (Feature 22) The first playback device of any one of Features 15 to 21, wherein (a) the command further specifies a proximity between the first and second playback devices; and (b) the profile for communicating is selected further based on the proximity.
[0188] (Feature 23) The first playback device of any one of Features 15 to 22, wherein (a) the command further specifies an audio source from which the first playback device acquires the audio content; and (b) the profile for communicating is selected further based on the audio source.
[0189] (Feature 24) The first playback device of any one of Features 15 to 23, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected profile for communicating with the second playback device.
[0190] (Feature 25) The first playback device of any one of Features 15 to 24, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that (a) associates a first combination of a first communication protocol and a first characterizing feature of the second playback device with a first profile for communicating with the second playback device; and (b) associates a second combination of a second communication protocol and a second characterizing feature of the second playback device with a second profile for communicating with the second playback device.
[0191] (Feature 26) The first playback device of any one of Features 15 to 23, wherein (a) the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected profile for communicating with the second playback device; and (b) selecting the profile for communicating with the second playback device comprises looking up the characterizing feature and the group type in the data structure.

[0192] (Feature 27) The first playback device of Feature 15 or 16, wherein the group type is selected from (a) a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or (b) a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
[0193] (Feature 28) The first playback device of any one of Features 15 to 27, wherein (a) the command further specifies one or more supported profiles for communicating with the second playback device; and (b) the selected profile is included in the one or more supported profiles.
[0194] (Feature 29) The first playback device of any one of Features 15 to 28, wherein the BLUETOOTH communication profile is LE Audio or A2DP.
[0195] (Feature 30) The first playback device of any one of Features 15 to 28, wherein the selected profile for communicating with the second playback device is a Direct LE Audio profile or a Broadcast Audio profile.
[0196] (Feature 31) The first playback device of any one of Features 15 to 30, further comprising a physical interface that is integrated into a housing of the first playback device and that is operably connected to the one or more processors, wherein the command is provided via the physical interface, the physical interface comprising a push button and a visual indicator.
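The housing-integrated interface of Feature 31 could be mocked, purely for exposition, with stand-in button and indicator objects; neither class reflects a real device API:

# Stand-in sketch of Feature 31: a push button that issues the grouping
# command and a visual indicator that acknowledges it. Both classes are
# mocks for exposition; no hardware API is implied.
class Indicator:
    def blink(self) -> None:
        print("indicator: blink")

class PushButton:
    def __init__(self, on_press) -> None:
        self._on_press = on_press

    def press(self) -> None:  # simulates a physical press
        self._on_press()

indicator = Indicator()
button = PushButton(on_press=lambda: (print("command: form group"),
                                      indicator.blink()))
button.press()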
[0197] (Feature 32) A method comprises receiving, at a first playback device, a command that specifies that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using at least one communication protocol. The method further comprises, based on a group type for the group, and further based on the at least one communication protocol, selecting a technique for communicating with the second playback device. The method further comprises transmitting audio content to the second playback device using the selected technique. The method further comprises playing back the audio content in synchrony with playback of the audio content by the second playback device.
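A high-level sketch of the ordering of steps recited in Feature 32 might look as follows; the command fields, helper names, and printed output are all assumptions made for illustration:

# Sketch of Feature 32's step ordering: receive a command, select a
# technique from the group type and protocol, transmit the audio, then
# play it back in synchrony. All field names and helpers are hypothetical.
from dataclasses import dataclass

@dataclass
class GroupCommand:
    group_type: str   # e.g., "bonded" or "synchronous"
    member_id: str    # identifies the second playback device
    protocol: str     # e.g., "bluetooth"

def select_technique(group_type: str, protocol: str) -> str:
    # Placeholder mirroring Features 34-35: broadcast for synchronous
    # BLUETOOTH groups, addressed unicast otherwise.
    if protocol == "bluetooth" and group_type == "synchronous":
        return "broadcast"
    return "unicast"

def transmit(frame: bytes, member_id: str, technique: str) -> None:
    print(f"send {frame!r} to {member_id} via {technique}")

def play_in_sync(frame: bytes) -> None:
    print(f"play {frame!r} locally, in synchrony with the group")

def handle_group_command(cmd: GroupCommand, frames: list) -> None:
    technique = select_technique(cmd.group_type, cmd.protocol)
    for frame in frames:
        transmit(frame, cmd.member_id, technique)
        play_in_sync(frame)

handle_group_command(GroupCommand("synchronous", "kitchen", "bluetooth"),
                     [b"frame-0", b"frame-1"])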
[0198] (Feature 33) The method of Feature 32, wherein the command identifies the second playback device and specifies the group type for the group.
[0199] (Feature 34) The method of Feature 32 or 33, wherein (a) the group type is a synchronous playback group in which the first and second playback devices each play back all channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) transmitting the audio content to the second playback device using the selected technique comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
[0200] (Feature 35) The method of Feature 32 or 33, wherein (a) the group type is a bonded playback group in which the first and second playback devices each play back different channels of multichannel audio content; (b) the at least one communication protocol is a BLUETOOTH communication protocol; and (c) transmitting the audio content to the second playback device using the selected technique comprises transmitting packets of the audio content that are addressed to the second playback device.
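The contrast between Features 34 and 35 (unaddressed broadcast packets versus packets addressed to one device) can be illustrated with plain UDP sockets standing in for BLUETOOTH links; the port number and addresses are arbitrary illustrative values:

# UDP stand-in for the two transmission styles of Features 34-35.
# BLUETOOTH is not involved here; sockets merely illustrate addressed
# unicast versus unaddressed broadcast.
import socket

def send_addressed(payload: bytes, device_addr: tuple) -> None:
    """Bonded group (Feature 35): packets addressed to one device."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, device_addr)

def send_unaddressed(payload: bytes, port: int = 50000) -> None:
    """Synchronous group (Feature 34): broadcast any member may receive."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", port))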
[0201] (Feature 36) The method of any one of Features 32 to 35, wherein (a) the command further specifies a quantity of playback devices in the group; and (b) the technique for communicating is selected based on the quantity of playback devices in the group.
[0202] (Feature 37) The method of Feature 36, wherein the selected technique is a one-to-one communication technique when the quantity is below a threshold quantity associated with the at least one communication protocol.
[0203] (Feature 38) The method of Feature 36, wherein the selected technique is a broadcast communication technique when the quantity is above a threshold quantity associated with the at least one communication protocol.
[0204] (Feature 39) The method of any one of Features 32 to 38, wherein (a) the command further specifies a proximity between the first and second playback devices; and (b) the technique for communicating is selected further based on the proximity.

[0205] (Feature 40) The method of any one of Features 32 to 39, wherein (a) the command further specifies an audio source from which the first playback device acquires the audio content; and (b) the technique for communicating is selected further based on the audio source.
[0206] (Feature 41) The method of Feature 32 or 33, wherein the technique for communicating with the second playback device is (a) communicating via a WI-FI connection or (b) communicating via a BLUETOOTH connection.
[0207] (Feature 42) The method of Feature 32 or 33, wherein the group type is selected from (a) a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or (b) a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
[0208] (Feature 43) The method of Feature 34 or 35, wherein the BLUETOOTH communication profile is LE Audio or A2DP.
[0209] (Feature 44) The method of any one of Features 32 to 42, wherein the selected technique for communicating with the second playback device is a Direct LE Audio profile or a Broadcast Audio profile.
[0210] (Feature 45) The method of any one of Features 32 to 44, wherein (a) the command is received via a physical interface that is integrated into a housing of the first playback device; and (b) the physical interface comprises a push button and a visual indicator.

Claims

1. A first playback device comprising:
    one or more processors;
    one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network; and
    at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors such that the first playback device is configured to:
        receive a command specifying that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using at least one communication protocol,
        based on a group type for the group, and further based on the at least one communication protocol, select a technique for communicating with the second playback device,
        transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected technique, and
        play back the audio content in synchrony with playback of the audio content by the second playback device.
2. The first playback device of Claim 1, wherein the command identifies the second playback device and specifies the group type for the group.
3. The first playback device of Claim 1 or 2, wherein: the group type is a synchronous playback group in which the first and second playback devices each play back all channels of multichannel audio content; the at least one communication protocol is a BLUETOOTH communication protocol; and the selected technique for communicating with the second playback device comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
4. The first playback device of Claim 1 or 2, wherein: the group type is a bonded playback group in which the first and second playback devices each play back different channels of multichannel audio content; the at least one communication protocol is a BLUETOOTH communication protocol; and the selected technique for communicating with the second playback device comprises transmitting packets of the audio content that are addressed to the second playback device.
5. The first playback device of any preceding claim, wherein: the command further specifies a quantity of playback devices in the group; and the technique for communicating is selected further based on the quantity of playback devices in the group.
6. The first playback device of Claim 5, wherein the selected technique is a one-to-one communication technique when the quantity is below a threshold quantity associated with the at least one communication protocol.
7. The first playback device of Claim 5, wherein the selected technique is a broadcast communication technique when the quantity is above a threshold quantity associated with the at least one communication protocol.
8. The first playback device of any preceding claim, wherein: the command further specifies a proximity between the first and second playback devices; and the technique for communicating is selected further based on the proximity.
9. The first playback device of any preceding claim, wherein: the command further specifies an audio source from which the first playback device acquires the audio content; and the technique for communicating is selected further based on the audio source.
10. The first playback device of Claim 1 or 2, wherein the technique for communicating with the second playback device is (a) communicating via a WI-FI connection or (b) communicating via a BLUETOOTH connection.
11. The first playback device of any preceding claim, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected technique for communicating with the second playback device.
12. The first playback device of any preceding claim, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that: associates a first combination of a first communication protocol and a first characterizing feature of the second playback device with a first technique for communicating with the second playback device; and associates a second combination of a second communication protocol and a second characterizing feature of the second playback device with a second technique for communicating with the second playback device.
13. The first playback device of any one of Claims 1 to 10, wherein: the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected technique for communicating with the second playback device; and selecting the technique for communicating with the second playback device comprises looking up the characterizing feature and the group type in the data structure.
14. The first playback device of Claim 1 or 2, wherein the group type is selected from: a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
15. A first playback device comprising:
    one or more processors;
    one or more communication interfaces operably connected to the one or more processors and configured to facilitate communication over at least one network; and
    at least one non-transitory computer-readable medium comprising program instructions that are executable by the one or more processors such that the first playback device is configured to:
        receive a command specifying that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using a BLUETOOTH communication protocol,
        based on a group type for the group, select a profile for communicating with the second playback device using the BLUETOOTH communication protocol,
        transmit audio content to the second playback device, via at least one of the one or more communication interfaces, using the selected profile of the BLUETOOTH communication protocol, and
        play back the audio content in synchrony with playback of the audio content by the second playback device.
16. The first playback device of Claim 15, wherein the command identifies the second playback device and specifies the group type for the group.
17. The first playback device of Claim 15 or 16, wherein: the group type is a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content; and the selected profile for communicating with the second playback device comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
18. The first playback device of Claim 15 or 16, wherein: the group type is a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; and the selected profile for communicating with the second playback device comprises transmitting packets of the audio content that are addressed to the second playback device.
19. The first playback device of any one of Claims 15 to 18, wherein: the command further specifies a quantity of playback devices in the group; and the profile for communicating is selected further based on the quantity of playback devices in the group.
20. The first playback device of Claim 19, wherein the selected profile provides a one-to-one communication technique when the quantity is below a threshold quantity associated with the BLUETOOTH communication protocol.
21. The first playback device of Claim 19, wherein the selected profile provides a broadcast communication technique when the quantity is above a threshold quantity associated with the BLUETOOTH communication protocol.
22. The first playback device of any one of Claims 15 to 21, wherein: the command further specifies a proximity between the first and second playback devices; and the profile for communicating is selected further based on the proximity.
23. The first playback device of any one of Claims 15 to 22, wherein: the command further specifies an audio source from which the first playback device acquires the audio content; and the profile for communicating is selected further based on the audio source.
24. The first playback device of any one of Claims 15 to 23, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected profile for communicating with the second playback device.
25. The first playback device of any one of Claims 15 to 24, wherein the at least one non-transitory computer readable medium has stored thereon a data structure that: associates a first combination of a first communication protocol and a first characterizing feature of the second playback device with a first profile for communicating with the second playback device; and associates a second combination of a second communication protocol and a second characterizing feature of the second playback device with a second profile for communicating with the second playback device.
26. The first playback device of any one of Claims 15 to 23, wherein: the at least one non-transitory computer readable medium has stored thereon a data structure that associates the group type and a characterizing feature of the second playback device with the selected profile for communicating with the second playback device; and selecting the profile for communicating with the second playback device comprises looking up the characterizing feature and the group type in the data structure.
27. The first playback device of Claim 15 or 16, wherein the group type is selected from: a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
28. The first playback device of any one of Claims 15 to 27, wherein: the command further specifies one or more supported profiles for communicating with the second playback device; and the selected profile is included in the one or more supported profiles.
29. The first playback device of any one of Claims 15 to 28, wherein the BLUETOOTH communication profile is LE Audio or A2DP.
30. The first playback device of any one of Claims 15 to 28, wherein the selected profile for communicating with the second playback device is a Direct LE Audio profile or a Broadcast Audio profile.
31. The first playback device of any one of Claims 15 to 30, further comprising a physical interface that is integrated into a housing of the first playback device and that is operably connected to the one or more processors, wherein the command is provided via the physical interface, the physical interface comprising a push button and a visual indicator.
32. A method comprising:
    receiving, at a first playback device, a command that specifies that the first playback device will form part of a group, the group comprising a second playback device, and the second playback device being capable of communicating with the first playback device using at least one communication protocol;
    based on a group type for the group, and further based on the at least one communication protocol, selecting a technique for communicating with the second playback device;
    transmitting audio content to the second playback device using the selected technique; and
    playing back the audio content in synchrony with playback of the audio content by the second playback device.
33. The method of Claim 32, wherein the command identifies the second playback device and specifies the group type for the group.
34. The method of Claim 32 or 33, wherein: the group type is a synchronous playback group in which the first and second playback devices each play back all channels of multichannel audio content; the at least one communication protocol is a BLUETOOTH communication protocol; and transmitting the audio content to the second playback device using the selected technique comprises broadcasting unaddressed packets of the audio content to a plurality of recipient playback devices, one of which is the second playback device.
35. The method of Claim 32 or 33, wherein: the group type is a bonded playback group in which the first and second playback devices each play back different channels of multichannel audio content; the at least one communication protocol is a BLUETOOTH communication protocol; and transmitting the audio content to the second playback device using the selected technique comprises transmitting packets of the audio content that are addressed to the second playback device.
36. The method of any one of Claims 32 to 35, wherein: the command further specifies a quantity of playback devices in the group; and the technique for communicating is selected based on the quantity of playback devices in the group.
37. The method of Claim 36, wherein the selected technique is a one-to-one communication technique when the quantity is below a threshold quantity associated with the at least one communication protocol.
38. The method of Claim 36, wherein the selected technique is a broadcast communication technique when the quantity is above a threshold quantity associated with the at least one communication protocol.
39. The method of any one of Claims 32 to 38, wherein: the command further specifies a proximity between the first and second playback devices; and the technique for communicating is selected further based on the proximity.
40. The method of any one of Claims 32 to 39, wherein: the command further specifies an audio source from which the first playback device acquires the audio content; and the technique for communicating is selected further based on the audio source.
41. The method of Claim 32 or 33, wherein the technique for communicating with the second playback device is (a) communicating via a WI-FI connection or (b) communicating via a BLUETOOTH connection.
42. The method of Claim 32 or 33, wherein the group type is selected from: a bonded playback group in which the first and second playback devices each receive different channels of multichannel audio content; or a synchronous playback group in which the first and second playback devices each receive all channels of multichannel audio content.
43. The method of Claim 34 or 35, wherein the BLUETOOTH communication profile is LE Audio or A2DP.
44. The method of any one of Claims 32 to 42, wherein the selected technique for communicating with the second playback device is a Direct LE Audio profile or a Broadcast Audio profile.
45. The method of any one of Claims 32 to 44, wherein: the command is received via a physical interface that is integrated into a housing of the first playback device; and the physical interface comprises a push button and a visual indicator.
PCT/US2024/047002 2023-09-20 2024-09-17 Wireless communication profile management Pending WO2025064375A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363583930P 2023-09-20 2023-09-20
US63/583,930 2023-09-20

Publications (1)

Publication Number Publication Date
WO2025064375A1 true WO2025064375A1 (en) 2025-03-27

Family

ID=92966785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/047002 Pending WO2025064375A1 (en) 2023-09-20 2024-09-17 Wireless communication profile management

Country Status (1)

Country Link
WO (1) WO2025064375A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8234395B2 (en) 2003-07-28 2012-07-31 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
US8483853B1 (en) 2006-09-12 2013-07-09 Sonos, Inc. Controlling and manipulating groupings in a multi-zone media system
US10499146B2 (en) 2016-02-22 2019-12-03 Sonos, Inc. Voice control of a media playback system
US10712997B2 (en) 2016-10-17 2020-07-14 Sonos, Inc. Room association based on name
US20220229628A1 (en) * 2019-06-03 2022-07-21 Intellectual Discovery Co., Ltd. Method, device and computer program for controlling audio data in wireless communication system, and recording medium therefor
WO2021050546A1 (en) * 2019-09-10 2021-03-18 Sonos, Inc. Synchronizing playback of audio information received from other networks
US20220066008A1 (en) * 2020-08-31 2022-03-03 Sonos, Inc. Ultrasonic Transmission for Presence Detection
US20220358187A1 (en) * 2021-05-10 2022-11-10 Sonos, Inc. Audio Encryption in a Media Playback System
WO2023039294A2 (en) 2021-09-13 2023-03-16 Sonos, Inc. Techniques for flexible control of playback devices
US20230409280A1 (en) 2022-06-16 2023-12-21 Sonos, Inc. Techniques for Off-Net Synchrony Group Formation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CORE SPECIFICATION WORKING GROUP: "Bluetooth Core Specification, V5.2", INTERNET CITATION, 31 December 2019 (2019-12-31), pages 1 - 3256, XP009545985, Retrieved from the Internet <URL:https://www.bluetooth.com/specifications/specs/core-specification> *
HUNN NICK: "Introducing Bluetooth LE Audio", BLUETOOTH GUIDE, 1 January 2022 (2022-01-01), pages 1 - 315, XP093118698, Retrieved from the Internet <URL:https://www.bluetooth.com/wp-content/uploads/2022/01/Introducing-Bluetooth-LE-Audio-book.pdf> [retrieved on 20240112] *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24783442

Country of ref document: EP

Kind code of ref document: A1