
WO2022039457A1 - Electronic device comprising a camera and microphones - Google Patents


Info

Publication number
WO2022039457A1
WO2022039457A1 · PCT/KR2021/010817 · KR2021010817W
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
audio data
user
input
microphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2021/010817
Other languages
English (en)
Korean (ko)
Inventor
전재웅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of WO2022039457A1 publication Critical patent/WO2022039457A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N5/00 Details of television systems
    • H04N5/92 Transformation of the television signal for recording, e.g. modulation, frequency changing; inverse transformation for playback
    • H04N5/9201 Transformation involving the multiplexing of an additional signal and the video signal
    • H04N5/9202 Transformation involving the multiplexing of an additional signal and the video signal, the additional signal being a sound signal
    • H04N5/278 Subtitling (studio circuitry and special effects)
    • H04N5/907 Television signal recording using static stores, e.g. storage tubes or semiconductor memories
    • H04N5/93 Regeneration of the television signal or of selected parts thereof
    • H04N5/9305 Regeneration involving the mixing of the reproduced video signal with a non-recorded signal, e.g. a text signal
    • H04M TELEPHONIC COMMUNICATION › H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/0264 Details of the structure or mounting of a camera module assembly

Definitions

  • Various embodiments disclosed in this document relate to a method and apparatus for separating an audio source for recording and an audio source for subtitles when photographing with a camera.
  • An electronic device including a camera and a microphone may capture a video through the camera and acquire, through the microphone, audio corresponding to the captured video.
  • Content in which various types of captions are added to a video captured by an electronic device may be produced.
  • A user may use a caption production program after recording a video, or may create captions at the same time as recording the video.
  • The user may add subtitles to a portion requiring subtitles while capturing the moving picture.
  • The electronic device may obtain a user's voice input corresponding to the caption.
  • The electronic device may store the audio source for capturing and the audio source for subtitles, corresponding to the user's voice input, as one audio source without distinguishing them.
  • Alternatively, the audio source for recording and the audio source for subtitles may be stored separately.
  • An electronic device that can easily recognize a user's voice input while shooting a video using a camera, and that uses this input to intuitively and easily create and add subtitles to the video, and related methods, are provided.
  • An electronic device according to an embodiment includes a plurality of microphones, a camera, and a processor operatively connected to the plurality of microphones and the camera. The processor records video data using the camera; while the video data is recorded, obtains first audio data corresponding to the video data using the plurality of microphones; while the video data is recorded, obtains a first user input associated with a user's voice input; during a time period designated by the first user input, separately obtains, using the plurality of microphones, the first audio data and second audio data corresponding to the user's voice input; and, in response to an event ending the recording of the video data, generates moving picture data based on the video data, the first audio data, and the second audio data.
  • The electronic device may separately store the audio source for captions and the audio source for capturing.
  • A user's voice input for subtitles can be easily recognized using a subtitle application or a subtitle production program, and subtitles can be added to a video intuitively and easily using this input.
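The dual-capture flow described above (scene audio always recorded; the user's voice routed to a separate caption buffer only during the designated time period; both combined on the recording-end event) can be sketched as follows. This is a minimal illustration with hypothetical names, not the patent's actual implementation; a real device would feed it frames from a platform camera/audio API and pass the caption buffer to a speech-to-text step.

```python
class DualAudioRecorder:
    """Keeps the recording audio (first audio data) and the caption
    voice audio (second audio data) in separate buffers while video
    is being captured. All names here are illustrative."""

    def __init__(self):
        self.recording_audio = []   # first audio data: scene sound
        self.caption_audio = []     # second audio data: user's voice
        self.caption_mode = False   # toggled by the first user input

    def start_caption_input(self):
        # First user input: the designated caption period begins.
        self.caption_mode = True

    def end_caption_input(self):
        # The designated caption period ends.
        self.caption_mode = False

    def on_audio_frame(self, scene_frame, voice_frame):
        # Scene sound is always stored as the first audio data.
        self.recording_audio.append(scene_frame)
        # During the designated period, the user's voice is stored
        # separately as the second audio data.
        if self.caption_mode:
            self.caption_audio.append(voice_frame)

    def finish(self, video_frames):
        # Recording-end event: combine the video with both audio
        # sources; caption_source can feed speech recognition.
        return {
            "video": video_frames,
            "audio_track": self.recording_audio,
            "caption_source": self.caption_audio,
        }
```

For example, if three frames are captured and the caption period covers only the second one, `audio_track` holds all three scene frames while `caption_source` holds only the voice frame from the designated period.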
  • FIG. 1 is a block diagram of an electronic device in a network environment according to an embodiment.
  • FIG. 2 is a block diagram of an electronic device and an external device according to an exemplary embodiment.
  • FIG. 3 is a flowchart illustrating an operation of separately storing audio data by an electronic device according to an exemplary embodiment.
  • FIG. 4 is a flowchart illustrating an operation of the electronic device 101 storing audio data separately or without distinction, according to an exemplary embodiment.
  • FIG. 5A illustrates an operation in which an electronic device acquires audio data for captions through a first microphone and a second microphone, according to an embodiment.
  • FIG. 5B illustrates a radius within which the electronic device acquires first audio data and second audio data separately or without distinction, according to an embodiment.
  • FIG. 5C illustrates an operation in which an electronic device acquires audio data for captions through a first microphone, a second microphone, and a third microphone, according to an exemplary embodiment.
  • FIG. 5D illustrates a UI displayed by the electronic device to acquire audio data for captions through a plurality of microphones, according to an exemplary embodiment.
  • FIG. 5E illustrates an operation in which the electronic device acquires audio data for captions through a third microphone, according to an exemplary embodiment.
  • FIG. 6 illustrates an operation in which an electronic device separates audio data for recording and audio data for captions, according to an embodiment.
  • FIG. 7 is a flowchart illustrating an operation in which an electronic device acquires audio data for subtitles through an external device, according to an exemplary embodiment.
  • FIG. 8A illustrates a UI when the electronic device acquires audio data for captions, according to an embodiment.
  • FIG. 8B illustrates a UI when the electronic device acquires audio data for subtitles through an external device, according to an exemplary embodiment.
  • FIG. 9 illustrates a UI for selecting a means for obtaining audio data for subtitles by an electronic device, according to an exemplary embodiment.
  • FIG. 10A illustrates a UI when an electronic device acquires audio data for subtitles through an external device connected through BT, according to an embodiment.
  • FIG. 10B illustrates a UI when the electronic device acquires audio data for subtitles through an external device connected through BT, according to an embodiment.
  • FIG. 10C illustrates a UI when the electronic device acquires audio data for subtitles through an external device connected through BT, according to an embodiment.
  • FIG. 11 illustrates sync control of an electronic device according to an embodiment.
  • FIG. 12 is a diagram illustrating a subtitle editing UI of an electronic device according to an exemplary embodiment.
  • FIG. 13 illustrates a UI for caption editing of an electronic device according to an exemplary embodiment.
  • FIG. 14 is a diagram illustrating a caption editing UI of an electronic device according to an exemplary embodiment.
  • FIG. 15 is a perspective view illustrating a front surface of an electronic device according to an exemplary embodiment.
  • FIG. 16 is a perspective view illustrating a rear surface of an electronic device according to an exemplary embodiment.
  • FIG. 1 is a block diagram of an electronic device 101 in a network environment 100 according to an embodiment.
  • Referring to FIG. 1, in the network environment 100, the electronic device 101 may communicate with the electronic device 102 through a first network 198 (eg, a short-range wireless communication network), or may communicate with the electronic device 104 or the server 108 through a second network 199 (eg, a long-distance wireless communication network). According to an embodiment, the electronic device 101 may communicate with the electronic device 104 through the server 108.
  • The electronic device 101 may include a processor 120, a memory 130, an input module 150, a sound output module 155, a display module 160, an audio module 170, a sensor module 176, an interface 177, a connection terminal 178, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module 196, or an antenna module 197.
  • In some embodiments, at least one of these components (eg, the connection terminal 178) may be omitted, or one or more other components may be added to the electronic device 101. In some embodiments, some of these components may be integrated into one component (eg, the display module 160).
  • The processor 120 may, for example, execute software (eg, a program 140) to control at least one other component (eg, a hardware or software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or operations. According to an embodiment, as at least part of the data processing or operations, the processor 120 may store commands or data received from another component (eg, the sensor module 176 or the communication module 190) in the volatile memory 132, process the commands or data stored in the volatile memory 132, and store the resulting data in the non-volatile memory 134.
  • According to an embodiment, the processor 120 may include a main processor 121 (eg, a central processing unit or an application processor) or an auxiliary processor 123 (eg, a graphics processing unit, a neural processing unit (NPU), an image signal processor, a sensor hub processor, or a communication processor) that can operate independently of, or together with, the main processor 121.
  • The auxiliary processor 123 may, for example, control at least some of the functions or states related to at least one of the components of the electronic device 101 (eg, the display module 160, the sensor module 176, or the communication module 190) on behalf of the main processor 121 while the main processor 121 is in an inactive (eg, sleep) state, or together with the main processor 121 while the main processor 121 is in an active (eg, application-executing) state. According to an embodiment, the auxiliary processor 123 (eg, an image signal processor or a communication processor) may be implemented as part of another functionally related component (eg, the camera module 180 or the communication module 190).
  • the auxiliary processor 123 may include a hardware structure specialized for processing an artificial intelligence model.
  • Artificial intelligence models can be created through machine learning. Such learning may be performed, for example, in the electronic device 101 itself on which artificial intelligence is performed, or may be performed through a separate server (eg, the server 108).
  • The learning algorithm may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but is not limited to the above examples.
  • the artificial intelligence model may include a plurality of artificial neural network layers.
  • Artificial neural networks include deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), restricted boltzmann machines (RBMs), deep belief networks (DBNs), bidirectional recurrent deep neural networks (BRDNNs), It may be one of deep Q-networks or a combination of two or more of the above, but is not limited to the above example.
  • The artificial intelligence model may include a software structure in addition to, or as an alternative to, the hardware structure.
  • the memory 130 may store various data used by at least one component of the electronic device 101 (eg, the processor 120 or the sensor module 176 ).
  • the data may include, for example, input data or output data for software (eg, the program 140 ) and instructions related thereto.
  • the memory 130 may include a volatile memory 132 or a non-volatile memory 134 .
  • the program 140 may be stored as software in the memory 130 , and may include, for example, an operating system 142 , middleware 144 , or an application 146 .
  • the input module 150 may receive a command or data to be used in a component (eg, the processor 120 ) of the electronic device 101 from the outside (eg, a user) of the electronic device 101 .
  • the input module 150 may include, for example, a microphone, a mouse, a keyboard, a key (eg, a button), or a digital pen (eg, a stylus pen).
  • the sound output module 155 may output a sound signal to the outside of the electronic device 101 .
  • the sound output module 155 may include, for example, a speaker or a receiver.
  • the speaker can be used for general purposes such as multimedia playback or recording playback.
  • the receiver may be used to receive an incoming call. According to one embodiment, the receiver may be implemented separately from or as part of the speaker.
  • the display module 160 may visually provide information to the outside (eg, a user) of the electronic device 101 .
  • The display module 160 may include, for example, a display, a hologram device, or a projector, and a control circuit for controlling the corresponding device.
  • the display module 160 may include a touch sensor configured to sense a touch or a pressure sensor configured to measure the intensity of a force generated by the touch.
  • The audio module 170 may convert a sound into an electrical signal or, conversely, convert an electrical signal into a sound. According to an embodiment, the audio module 170 may acquire a sound through the input module 150, or may output a sound through the sound output module 155 or an external electronic device (eg, the electronic device 102, such as a speaker or headphones) connected directly or wirelessly with the electronic device 101.
  • The sensor module 176 may detect an operating state (eg, power or temperature) of the electronic device 101 or an external environmental state (eg, a user state), and may generate an electrical signal or data value corresponding to the detected state.
  • The sensor module 176 may include, for example, a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • the interface 177 may support one or more designated protocols that may be used by the electronic device 101 to directly or wirelessly connect with an external electronic device (eg, the electronic device 102 ).
  • the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, an SD card interface, or an audio interface.
  • the connection terminal 178 may include a connector through which the electronic device 101 can be physically connected to an external electronic device (eg, the electronic device 102 ).
  • the connection terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (eg, a headphone connector).
  • the haptic module 179 may convert an electrical signal into a mechanical stimulus (eg, vibration or movement) or an electrical stimulus that the user can perceive through tactile or kinesthetic sense.
  • the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module 180 may capture still images and moving images. According to an embodiment, the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module 188 may manage power supplied to the electronic device 101 .
  • the power management module 188 may be implemented as, for example, at least a part of a power management integrated circuit (PMIC).
  • the battery 189 may supply power to at least one component of the electronic device 101 .
  • battery 189 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.
  • The communication module 190 may support establishment of a direct (eg, wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (eg, the electronic device 102, the electronic device 104, or the server 108), and communication through the established communication channel.
  • the communication module 190 may include one or more communication processors that operate independently of the processor 120 (eg, an application processor) and support direct (eg, wired) communication or wireless communication.
  • According to an embodiment, the communication module 190 may include a wireless communication module 192 (eg, a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (eg, a local area network (LAN) communication module or a power line communication module).
  • A corresponding communication module among these communication modules may communicate with the external electronic device 104 through the first network 198 (eg, a short-range communication network such as Bluetooth, wireless fidelity (WiFi) direct, or infrared data association (IrDA)) or the second network 199 (eg, a long-range communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (eg, a LAN or a WAN)).
  • The wireless communication module 192 may identify or authenticate the electronic device 101 within a communication network such as the first network 198 or the second network 199 using subscriber information (eg, an International Mobile Subscriber Identity (IMSI)) stored in the subscriber identification module 196.
  • The wireless communication module 192 may support a 5G network after a 4G network, and next-generation communication technology, for example, new radio (NR) access technology.
  • The NR access technology may support high-speed transmission of high-capacity data (enhanced mobile broadband (eMBB)), terminal power minimization and access by multiple terminals (massive machine type communications (mMTC)), or high reliability and low latency (ultra-reliable and low-latency communications (URLLC)).
  • the wireless communication module 192 may support a high frequency band (eg, mmWave band) to achieve a high data rate, for example.
  • The wireless communication module 192 may support various technologies for securing performance in a high-frequency band, for example, beamforming, massive multiple-input and multiple-output (MIMO), full-dimensional MIMO (FD-MIMO), an array antenna, analog beamforming, or a large-scale antenna.
  • the wireless communication module 192 may support various requirements specified in the electronic device 101 , an external electronic device (eg, the electronic device 104 ), or a network system (eg, the second network 199 ).
  • The wireless communication module 192 may support a peak data rate for realizing eMBB (eg, 20 Gbps or more), loss coverage for realizing mMTC (eg, 164 dB or less), or U-plane latency for realizing URLLC (eg, 0.5 ms or less for each of downlink (DL) and uplink (UL), or 1 ms or less round trip).
  • the antenna module 197 may transmit or receive a signal or power to the outside (eg, an external electronic device).
  • the antenna module 197 may include an antenna including a conductor formed on a substrate (eg, a PCB) or a radiator formed of a conductive pattern.
  • According to an embodiment, the antenna module 197 may include a plurality of antennas (eg, an array antenna). In this case, at least one antenna suitable for a communication method used in a communication network such as the first network 198 or the second network 199 may be selected from the plurality of antennas by, for example, the communication module 190. A signal or power may be transmitted or received between the communication module 190 and an external electronic device through the selected at least one antenna. According to some embodiments, other components (eg, a radio frequency integrated circuit (RFIC)) in addition to the radiator may further be formed as part of the antenna module 197.
  • the antenna module 197 may form a mmWave antenna module.
  • According to an embodiment, the mmWave antenna module may include a printed circuit board, an RFIC disposed on or adjacent to a first side (eg, the bottom side) of the printed circuit board and capable of supporting a designated high-frequency band (eg, the mmWave band), and a plurality of antennas (eg, an array antenna) disposed on or adjacent to a second side (eg, the top or a side) of the printed circuit board and capable of transmitting or receiving signals of the designated high-frequency band.
  • At least some of the above-described components may be connected to each other through a communication method between peripheral devices (eg, a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)) and may exchange signals (eg, commands or data) with each other.
  • the command or data may be transmitted or received between the electronic device 101 and the external electronic device 104 through the server 108 connected to the second network 199 .
  • Each of the external electronic devices 102 or 104 may be the same as or different from the electronic device 101 .
  • all or a part of operations executed in the electronic device 101 may be executed in one or more external electronic devices 102 , 104 , or 108 .
  • For example, when the electronic device 101 needs to perform a function or a service automatically or in response to a request from a user or another device, the electronic device 101, instead of executing the function or service by itself, or in addition to doing so, may request one or more external electronic devices to perform at least a part of the function or service.
  • One or more external electronic devices that have received the request may execute at least a part of the requested function or service, or an additional function or service related to the request, and transmit a result of the execution to the electronic device 101 .
  • The electronic device 101 may process the result as it is or additionally, and may provide it as at least a part of a response to the request.
  • To this end, for example, cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used.
  • the electronic device 101 may provide an ultra-low latency service using, for example, distributed computing or mobile edge computing.
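The offloading pattern above (execute locally when possible, otherwise request external devices to perform part of the function and post-process their results) can be sketched as follows. All class and function names here are illustrative assumptions, not APIs from the patent; a real device would issue the requests over its network stack.

```python
class Executor:
    """Illustrative stand-in for a device (local or external) that
    can run some set of tasks."""

    def __init__(self, supported, handler):
        self.supported = supported   # task names this device can run
        self.handler = handler       # callable producing a result list

    def can_run(self, task):
        return task in self.supported

    def run(self, task):
        return self.handler(task)


def execute_with_offload(task, local_executor, external_executors):
    """Run a task locally when possible; otherwise request external
    devices to execute it and post-process their partial results."""
    if local_executor.can_run(task):
        return local_executor.run(task)
    # Request at least part of the function from external devices.
    partial_results = [ext.run(task) for ext in external_executors]
    # The device may use the results as-is or process them further
    # before providing them as part of the response; here we simply
    # merge the partial results in order.
    merged = []
    for result in partial_results:
        merged.extend(result)
    return merged
```

For instance, a task the local executor supports is handled on-device, while an unsupported task is fanned out to the external executors and their results are merged.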
  • the external electronic device 104 may include an Internet of things (IoT) device.
  • Server 108 may be an intelligent server using machine learning and/or neural networks.
  • the external electronic device 104 or the server 108 may be included in the second network 199 .
  • the electronic device 101 may be applied to an intelligent service (eg, smart home, smart city, smart car, or health care) based on 5G communication technology and IoT-related technology.
  • the electronic device may have various types of devices.
  • the electronic device may include, for example, a portable communication device (eg, a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance device.
  • Terms such as "first" and "second" may be used simply to distinguish a given component from other components, and do not limit the components in other aspects (eg, importance or order). When one (eg, a first) component is referred to as being "coupled" or "connected" to another (eg, a second) component, with or without the terms "functionally" or "communicatively", it means that the one component can be connected to the other component directly (eg, by wire), wirelessly, or through a third component.
  • The term "module" used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as, for example, logic, logic block, component, or circuit.
  • a module may be an integrally formed part or a minimum unit or a part of the part that performs one or more functions.
  • the module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • Various embodiments of this document may be implemented as software (eg, the program 140) including one or more instructions stored in a storage medium readable by a machine (eg, the electronic device 101). For example, a processor (eg, the processor 120) of the device (eg, the electronic device 101) may call at least one of the one or more instructions stored in the storage medium and execute it.
  • the one or more instructions may include code generated by a compiler or code executable by an interpreter.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
  • Here, 'non-transitory' only means that the storage medium is a tangible device and does not include a signal (eg, an electromagnetic wave); this term does not distinguish between a case in which data is semi-permanently stored in the storage medium and a case in which it is temporarily stored.
  • the method according to various embodiments disclosed in this document may be provided by being included in a computer program product.
  • Computer program products may be traded between sellers and buyers as commodities.
  • The computer program product may be distributed in the form of a machine-readable storage medium (eg, a compact disc read only memory (CD-ROM)), or may be distributed online (eg, downloaded or uploaded) via an application store (eg, Play Store™) or directly between two user devices (eg, smartphones).
  • a part of the computer program product may be temporarily stored or temporarily generated in a machine-readable storage medium such as a memory of a server of a manufacturer, a server of an application store, or a relay server.
  • each component (e.g., module or program) of the above-described components may include a singular entity or a plurality of entities, and some of the plurality of entities may be separately disposed in another component.
  • one or more components or operations among the above-described corresponding components may be omitted, or one or more other components or operations may be added.
  • a plurality of components (e.g., modules or programs) may be integrated into a single component.
  • the integrated component may perform one or more functions of each component of the plurality of components identically or similarly to those performed by the corresponding component among the plurality of components prior to the integration.
  • operations performed by a module, program, or other component may be executed sequentially, in parallel, repeatedly, or heuristically; one or more of the operations may be executed in a different order or omitted; or one or more other operations may be added.
  • FIG. 2 is a block diagram of an electronic device 101 and an external device 102 according to an exemplary embodiment.
  • the electronic device 101 may include a processor 201 , a camera 203 , a microphone 205 , a communication circuit 207 , a display 209 , and a memory 211 .
  • a module included in the electronic device 101 may be a hardware module (eg, a circuit) included in the electronic device.
  • Components included in the electronic device 101 may not be limited to the components shown in FIG. 2 (e.g., the processor 201, the camera 203, the microphone 205, the communication circuit 207, the display 209, and the memory 211).
  • Components of the electronic device 101 illustrated in FIG. 2 may be replaced with other components, or additional components may be added to the electronic device 101 .
  • At least a portion of the contents of the electronic device 101 of FIG. 1 may be applied to the electronic device 101 of FIG. 2 .
  • at least a portion of the contents of the electronic device 1500 of FIGS. 15 and 16 may be applied to the electronic device 101 of FIG. 2 .
  • the processor 201 may execute instructions stored in the memory 211 to control operations of the components of the electronic device 101 (e.g., the camera 203, the microphone 205, the communication circuit 207, the display 209, and the memory 211).
  • the processor 201 may be electrically and/or operatively coupled to the camera 203 , the microphone 205 , the communication circuitry 207 , the display 209 , and the memory 211 .
  • the processor 201 may execute software to control at least one other component connected to the processor 201 (e.g., the camera 203, the microphone 205, the communication circuit 207, the display 209, and the memory 211).
  • the processor 201 may obtain a command from components included in the electronic device 101, interpret the obtained command, and process and/or operate on various data according to the interpreted command.
  • the camera 203 may acquire an image of the object by photographing the object.
  • the electronic device 101 may obtain video data corresponding to an object photographed through the camera 203 .
  • At least one camera 203 may be included in the electronic device 101 .
  • the electronic device 101 may include at least one front camera disposed on the front surface of the electronic device 101 and at least one rear camera disposed on the rear surface of the electronic device 101.
  • the microphone 205 may acquire audio data.
  • the electronic device 101 may include a plurality of microphones 205 .
  • the electronic device 101 may obtain audio data corresponding to the user's voice through a first microphone among the microphones 205 .
  • the electronic device 101 may obtain audio data corresponding to a sound generated from a subject (eg, a person or a landscape) through a second microphone among the plurality of microphones 205 .
  • the plurality of microphones 205 may be disposed on one surface of the electronic device 101 .
  • a first microphone of the plurality of microphones 205 may be disposed on a side surface of the electronic device 101
  • a second microphone of the plurality of microphones 205 may be disposed on a front surface of the electronic device 101
  • a third microphone among the plurality of microphones 205 may be disposed on the rear surface of the electronic device 101 .
  • a position in which the plurality of microphones 205 are disposed in the electronic device 101 is not limited to the above-described example.
  • the communication circuit 207 may support communication between the electronic device 101 (e.g., a smartphone) and the external device 102 (e.g., a wireless earphone) using wired communication or wireless communication (e.g., Bluetooth (BT), Bluetooth Low Energy (BLE), Wi-Fi).
  • the electronic device 101 may communicate with the external device 102 using short-range wireless communication (eg, BT) through the communication circuit 207 .
  • the electronic device 101 may transmit or receive data (eg, audio data) with the external device 102 connected using short-range wireless communication through the communication circuit 207 .
  • the display 209 may visually provide (or output) data.
  • the display 209 may visually provide (or output) data stored in the electronic device 101 or data acquired externally by the electronic device 101 .
  • the display 209 may display a content image corresponding to video data acquired through the camera 203 .
  • the display 209 may include at least one sensor (eg, a touch sensor, a pressure sensor).
  • the display 209 may detect a user's touch input using at least one of a touch sensor and a pressure sensor.
  • the display 209 may detect a user's touch input associated with the user's voice input.
  • the memory 211 may temporarily or non-temporarily store various data used by the components of the electronic device 101 (e.g., the camera 203, the microphone 205, the communication circuit 207, the display 209, and the memory 211). For example, the memory 211 may store video data acquired through the camera 203. As another example, the memory 211 may store audio data (e.g., first audio data and second audio data) acquired through the microphone 205.
  • the external device 102 may refer to an audio device connected to the electronic device 101 using short-range wireless communication (eg, BT) or wired communication.
  • the external device 102 may be a wireless earphone connected to the electronic device 101 using short-range wireless communication.
  • the external device 102 may be a wired earphone connected to the electronic device 101 using wired communication (eg, a cable).
  • the external device 102 may include a microphone 213 and a communication circuit 215 .
  • a module included in the external device 102 may be a hardware module (eg, a circuit) included in the external device 102 .
  • Components included in the external device 102 may not be limited to the components shown in FIG. 2 (eg, the microphone 213 and the communication circuit 215 ).
  • Components of the external device 102 shown in FIG. 2 may be replaced with other components, or additional components may be added to the external device 102 . For example, at least a portion of the contents of the electronic device 101 of FIG. 1 may be applied to the external device 102 of FIG. 2 .
  • the microphone 213 may acquire audio data.
  • the external device 102 may acquire audio data corresponding to the user's voice through the microphone 213 .
  • the external device 102 may acquire audio data corresponding to a sound generated by a subject (eg, a person or a landscape) through the microphone 213 .
  • At least a part of the description of the plurality of microphones 205 included in the electronic device 101 may be applied to the microphone 213 included in the external device 102 .
  • the communication circuit 215 may support communication between the electronic device 101 and the external device 102 using wired communication or wireless communication (eg, BT, BLE, Wi-Fi).
  • the external device 102 may transmit or receive data (eg, audio data) with the electronic device 101 connected using short-range wireless communication (eg, BT) through the communication circuit 215 .
  • the external device 102 may transmit audio data corresponding to the user's voice acquired through the microphone 213 to the electronic device 101 .
  • At least a portion of the description of the communication circuit 207 included in the electronic device 101 may be applied to the communication circuit 215 included in the external device 102 .
  • FIG. 3 is a flowchart 300 of an operation in which the electronic device 101 classifies and stores audio data according to an exemplary embodiment.
  • a series of operations described below may be simultaneously performed by the electronic device 101 or the external device 102 or performed in a different order, and some operations may be omitted or added.
  • while recording video data using the camera 203, the electronic device 101 may separately store first audio data corresponding to the video data and second audio data corresponding to a user's voice input.
  • the electronic device 101 may record (or acquire) video data using the camera 203 .
  • the electronic device 101 may obtain video data corresponding to an image of a subject (eg, a person or a landscape) by using the camera 203 .
  • the electronic device 101 may temporarily or non-temporarily store video data corresponding to an image of an object captured by the camera 203 in the memory 211 .
  • the electronic device 101 may obtain first audio data corresponding to the video data. For example, while recording (or acquiring) video data corresponding to the image of the subject captured by the camera 203, the electronic device 101 may acquire the first audio data corresponding to the video data through the plurality of microphones 205.
  • the first audio data may mean audio data corresponding to a sound generated from a subject (eg, a person or a landscape) photographed by the camera 203 .
  • the electronic device 101 may obtain a first user input associated with a user's voice input while video data is being recorded.
  • the electronic device 101 may acquire a first user input associated with a user's voice input while recording (or acquiring) video data corresponding to an image of a subject captured by the camera 203 .
  • the first user input may be obtained by at least one of a voice input obtained through the plurality of microphones 205, a touch input obtained through the display 209, a button input obtained through a button, or a gesture input obtained through at least one sensor (e.g., a motion sensor).
  • the first user input may be an input for the user to add a caption to video data captured by the camera 203 of the electronic device 101 .
  • for a time period designated by the first user input, the electronic device 101 may separately obtain, using the plurality of microphones 205, the first audio data and the second audio data corresponding to the user's voice input. For example, in response to obtaining (or detecting) the first user input, the electronic device 101 may obtain second audio data corresponding to the user's voice by using at least some of the plurality of microphones 205. The electronic device 101 may store the acquired second audio data.
  • the user's voice input obtained in response to the first user's input may be audio data for subtitle input.
  • the electronic device 101 may store audio data acquired through the plurality of microphones 205, and may perform post-processing on the stored audio data through beamforming or noise canceling.
  • the electronic device 101 may store audio data obtained through the plurality of microphones 205, and may distinguish audio data for subtitles from audio data for recording by performing beamforming or noise canceling on the stored audio data.
  • in response to acquiring (or detecting) the first user input, the electronic device 101 may obtain the second audio data corresponding to the user's voice input separately from the first audio data corresponding to the video data, using at least some of the plurality of microphones 205.
  • in response to detecting a user's touch input for caption input through the display 209, the electronic device 101 may, for a predetermined time period (e.g., 10 seconds) from when the touch input is detected, obtain the second audio data corresponding to the user's voice input for the caption input separately from the first audio data corresponding to the sound generated from the photographed subject, using at least some of the plurality of microphones 205.
  • the electronic device 101 may obtain the second audio data corresponding to the user's voice input separately from the first audio data corresponding to the video data by using at least some of the plurality of microphones 205.
  • the electronic device 101 may separately obtain the first audio data corresponding to the sound generated from the photographed subject and the second audio data corresponding to the user's voice input for the caption input, using at least some of the plurality of microphones 205.
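The time-window behavior described in the bullets above can be sketched as follows. This is a hypothetical illustration, not the patented implementation; the function names and the fixed 10-second window are assumptions taken from the example above.

```python
CAPTION_WINDOW_S = 10.0  # assumed caption window, per the 10-second example

def split_audio(samples, caption_marks, window_s=CAPTION_WINDOW_S):
    """Split a stream of (timestamp_s, sample) pairs into a recording track
    and a caption track: every sample goes to the recording track (first
    audio data), and samples within `window_s` seconds after any caption
    input timestamp also go to the caption track (second audio data)."""
    recording, caption = [], []
    for t, s in samples:
        recording.append(s)  # first audio data: always recorded
        if any(m <= t < m + window_s for m in caption_marks):
            caption.append(s)  # second audio data: voice during caption window
    return recording, caption
```

A caption touch at t = 4 s would thus route samples captured between 4 s and 14 s to both tracks.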
  • the electronic device 101 may generate moving picture data based on the video data, the first audio data, and the second audio data in response to the event of terminating the recording of the video data.
  • the moving picture data may be data in which video data and audio data are merged.
  • the electronic device 101 may obtain a user input for terminating the recording of video data.
  • the electronic device 101 may obtain a user's touch input for terminating video capturing through the camera 203 .
  • the user input may include at least one of a touch input, a button input, a voice input, and a gesture input.
  • the electronic device 101 may generate and store moving image data including video data, first audio data, and second audio data in response to obtaining a user input for terminating the recording of the video data.
  • the electronic device 101 may store the video data, the first audio data, and the second audio data separately in the memory 211 .
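The stop-event behavior above, keeping the video data, first audio data, and second audio data as separate tracks while producing combined moving picture data, might be sketched like this. A plain dict stands in for a real container format (e.g., MP4 muxing), and all names are hypothetical.

```python
def finalize_recording(video_frames, first_audio, second_audio):
    """On the recording-stop event, bundle the separately kept tracks into
    one 'moving picture data' container while preserving each track. Real
    muxing into a media container is outside the scope of this sketch."""
    return {
        "video": list(video_frames),
        "audio_recording": list(first_audio),  # first audio data
        "audio_caption": list(second_audio),   # second audio data
    }
```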
  • FIG. 4 is a flowchart 400 of an operation in which the electronic device 101 separately stores audio data or stores audio data without distinction, according to an exemplary embodiment.
  • the electronic device 101 may store first audio data and second audio data separately or without distinction.
  • the electronic device 101 may determine whether to distinguish the first audio data from the second audio data.
  • the first audio data may mean data related to a sound generated from a subject photographed by the camera 203
  • the second audio data may mean data related to a user's voice input for subtitle input
  • when the electronic device 101 determines to separately acquire and store the first audio data and the second audio data, the electronic device 101 may perform operation 403; otherwise, the electronic device 101 may perform operation 405.
  • in response to determining to separately acquire and store the first audio data and the second audio data, the electronic device 101 may store the first audio data and the second audio data separately. For example, when the electronic device 101 determines to separately acquire the first audio data and the second audio data, the electronic device 101 may, in response to acquiring a user input for a caption input, separately acquire and store the first audio data and the second audio data.
  • in response to determining to acquire and store the first audio data and the second audio data without distinction, the electronic device 101 may store the first audio data and the second audio data without distinguishing them. For example, when the electronic device 101 determines to acquire and store the first audio data and the second audio data without distinction, the first audio data and the second audio data may be stored as one integrated stream.
  • FIG. 5A illustrates an operation in which the electronic device 101 acquires audio data for captions through the first microphone 501 and the second microphone 502 according to an exemplary embodiment.
  • the electronic device 101 may obtain audio data for captions corresponding to a voice input of a user 500 through a first microphone 501 and a second microphone 502, and may obtain audio data for recording corresponding to the sound generated from the subject 510 through a third microphone 503.
  • the audio data for subtitles may correspond to second audio data described in this document, and the audio data for recording may correspond to first audio data described in this document.
  • the electronic device 101 may include the first microphone (1st Mic, 501) among the plurality of microphones 205 on one side (e.g., the lower side) (not shown) of the electronic device 101, the second microphone (2nd Mic, 502) on one surface (e.g., the front) (not shown) of the electronic device 101, and the third microphone (3rd Mic, 503) on one surface (e.g., the rear surface) of the electronic device 101.
  • while acquiring the video data corresponding to the photographed subject 510 through the camera 203 (e.g., a rear camera), the electronic device 101 may obtain the first audio data corresponding to the video data through the third microphone 503.
  • the electronic device 101 may acquire second audio data corresponding to the voice input of the user 500 for subtitle input through the first microphone 501 and the second microphone 502 .
  • the electronic device 101 may store the acquired first audio data and the second audio data separately.
  • the first microphone 501 and the second microphone 502 are microphones set to acquire the voice of the photographer (or user), and the third microphone 503 records the sound of the subject. It may be a microphone configured to do so.
  • in FIG. 5A, the first microphone 501 and the second microphone 502 are for subtitles, and the third microphone 503 is for recording.
  • the path having the greatest near-end effect is used as the subtitle audio input. Because the photographer is close to the electronic device 101, a difference in audio input levels occurs between the microphones. Using this input difference, the audio input signal can be separated for subtitles, while the third microphone 503 at the far end can be used as the recording source.
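The level-difference idea above can be illustrated with a rough sketch: because the photographer is close, sample blocks from the user-facing path are louder than the same moment captured at the far-end microphone. The 6 dB threshold and the function names are assumptions for illustration only.

```python
import math

def rms(samples):
    """Root-mean-square level of one block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_near_end_voice(near_mic_block, far_mic_block, threshold_db=6.0):
    """Return True when the near-end (user-facing) microphone block is
    louder than the far-end microphone block by more than `threshold_db`,
    suggesting the photographer's voice rather than the distant subject."""
    level_diff_db = 20 * math.log10(rms(near_mic_block) / rms(far_mic_block))
    return level_diff_db > threshold_db
```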
  • FIG. 5B illustrates a radius when the electronic device 101 acquires first audio data and second audio data with or without distinction, according to an embodiment.
  • referring to FIG. 5B, images related to the radius for acquiring audio data are shown for the case where the electronic device 101 acquires and stores the first audio data and the second audio data separately and for the case where it acquires and stores them without distinction.
  • when the electronic device 101 acquires and stores the first audio data and the second audio data without distinction, the electronic device 101 may use the plurality of microphones 205 to store, without distinction, the second audio data corresponding to the voice input of the user 500 existing within a first radius 530 and the first audio data corresponding to the sound generated by the subject 510.
  • when the electronic device 101 separately acquires and stores the first audio data and the second audio data, the electronic device 101 may acquire, using some of the plurality of microphones 205, the second audio data corresponding to the voice input of the user 500 existing within a second radius 531, and may acquire, using another part of the plurality of microphones 205, the first audio data corresponding to the sound generated from the subject 510 existing within a third radius 532.
  • separately acquiring and storing the first audio data and the second audio data may mean that the caption input function of the electronic device 101 is activated.
  • when the electronic device 101 separately acquires and stores the first audio data and the second audio data, or when the caption input function is activated, the electronic device 101 may obtain the second audio data corresponding to the voice input of the user 500 for the caption input by using some of the plurality of microphones 205.
  • when the caption input function of the electronic device 101 is activated, the electronic device 101 may obtain the second audio data corresponding to the voice input of the user 500 by using the first microphone 501 and the second microphone 502, which are beamformed toward the user 500, among the plurality of microphones 205.
  • when the electronic device 101 acquires the second audio data separately from the first audio data using the microphones beamformed toward the user 500 among the plurality of microphones 205, the electronic device 101 may obtain second audio data whose sound volume is reduced through the microphones that are not beamformed toward the user 500.
  • when the electronic device 101 acquires the second audio data using the first microphone 501 and the second microphone 502 beamformed toward the user 500, the electronic device 101 may obtain, through the third microphone 503 that is not beamformed toward the user 500, second audio data whose sound level is reduced by 12 dB.
  • FIG. 5C illustrates an operation in which the electronic device 101 acquires audio data for captions through the first microphone 501 , the second microphone 502 , and the third microphone 503 according to an exemplary embodiment.
  • the electronic device 101 may obtain audio data for subtitles corresponding to the user's voice input through the first microphone 501, the second microphone 502, and the third microphone 503.
  • all of the plurality of microphones 205 may be used to obtain second audio data corresponding to the voice input of the user 500 for the subtitle input.
  • Only the second audio data corresponding to the voice input of the user 500 may be acquired by using all of the plurality of microphones 205 included in the electronic device 101 .
  • the operation illustrated in FIG. 5C may be implemented through selection of the pop-up object 540 illustrated in FIG. 5D .
  • the name (eg, my voice) displayed on the pop-up object 540 is not limited to the above-described example, and a name of “subtitle only my voice” may be included.
  • FIG. 5D illustrates a UI displayed in order for the electronic device 101 to acquire audio data for captions through a plurality of microphones 205 according to an exemplary embodiment.
  • referring to FIG. 5D, a UI displayed on the display 209 is shown for the electronic device 101 to obtain audio data for subtitles corresponding to the voice input of the user 500 through the plurality of microphones 205 (e.g., the first microphone 501, the second microphone 502, and the third microphone 503).
  • the electronic device 101 may display a pop-up object 540 for obtaining only the user's 500 voice input through the display 209 .
  • the electronic device 101 may obtain the second audio data corresponding to the voice input of the user 500 for subtitle input by using the plurality of microphones 205.
  • FIG. 5E illustrates an operation in which the electronic device 101 acquires audio data for captions through the third microphone 503 according to an exemplary embodiment.
  • while acquiring video data corresponding to the photographed subject 510 through the camera 203 (e.g., a rear camera), the electronic device 101 may obtain the first audio data corresponding to the video data through the first microphone 501 and the second microphone 502.
  • the electronic device 101 may acquire second audio data corresponding to the voice input of the user 500 for subtitle input through the third microphone 503 .
  • the electronic device 101 may store the acquired first audio data and the second audio data separately.
  • the sound of the far-end subject 510, which has no time difference between audio inputs, and the sound of the near-end photographer, which has a time difference between audio inputs, may be simultaneously input to the first microphone 501 and the second microphone 502.
  • since the subject is at the far end, there is no signal difference between the audio inputs at the two microphone locations, so the subject's audio may be cancelled. For example, through the difference between the input signal of the first microphone 501 and the input signal of the second microphone 502, the sound of the distant subject may be removed, and the electronic device 101 may acquire only the sound of the photographer located at the near end through the third microphone 503.
  • the processor 201 of the electronic device 101 may provide, to the subtitle generating processor, audio data for subtitles processed as the third microphone 503 input + the first microphone 501 input − the second microphone 502 input.
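A minimal sketch of the combination named above (third microphone input + first microphone input − second microphone input): a far-end subject reaches the first and second microphones with nearly identical signals, so their difference cancels the subject, leaving mainly the near-end photographer's voice to reinforce the third-microphone pickup. This is an idealized illustration with sample-aligned lists standing in for real microphone streams.

```python
def subtitle_audio(mic1, mic2, mic3):
    """Combine microphone streams as mic3 + (mic1 - mic2). For a far-end
    subject, mic1 and mic2 carry (nearly) identical signals, so the
    difference term cancels the subject's sound; the near-end photographer's
    voice, captured by mic3, survives as the subtitle audio."""
    return [m3 + (m1 - m2) for m1, m2, m3 in zip(mic1, mic2, mic3)]
```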
  • the electronic device 101 may process both the sound of the photographer and the sound of the subject acquired through the microphone 205 as audio data for recording.
  • FIG. 6 illustrates an operation in which the electronic device 101 separates audio data 611 for recording and audio data 612 for captions, according to an embodiment.
  • the electronic device 101 may acquire first audio data 601 and second audio data 602 through the plurality of microphones 205 (e.g., the first microphone 501, the second microphone 502, and the third microphone 503).
  • the electronic device 101 may, through the processor 201, divide the first audio data 601 and the second audio data 602 into audio data 611 for recording and audio data 612 for subtitles and store them, using some information (e.g., an acquisition time difference) of the first audio data 601 and the second audio data 602.
  • the times at which the sound generated from the subject 510 located less than a threshold distance from the electronic device 101 is acquired through the plurality of microphones 205 may be different. For example, there may be a difference in time at which a sound generated from a subject 510 located at a short distance (eg, 30 cm) from the electronic device 101 is acquired through the plurality of microphones 205 .
  • the times for which the sound generated from the subject 510 located at or more than a threshold distance from the electronic device 101 is acquired through the plurality of microphones 205 may be substantially the same. For example, there may not be a difference in time at which the sound generated from the subject 510 located at a far distance (eg, 10m) from the electronic device 101 is acquired through the plurality of microphones 205 .
  • the processor 201 may perform audio cancellation by taking the difference between the audio signals of the audio data acquired through the plurality of microphones 205.
  • the electronic device 101 may acquire the sound generated from the subject 510 located at a threshold distance (e.g., 10 m) or more from the electronic device 101 through the first microphone 501 and the second microphone 502. The processor 201 may take the difference between the audio signals (e.g., the first audio data 601 and the second audio data 602) obtained through the first microphone 501 and the second microphone 502 from the subject 510 located at or beyond the threshold distance.
  • the processor 201 may process and/or store the difference signals as audio data 612 for subtitles.
  • the processor 201 may process and/or store the signals of the audio data acquired through the plurality of microphones 205 as audio data 611 for recording. For example, the processor 201 may omit the difference between the audio signals acquired through the second microphone 502 and the third microphone 503, and may process and/or store the audio signals omitting that difference as audio data 611 for recording.
  • the electronic device 101 may apply beamforming to the audio data input through the first microphone 501, the second microphone 502, and the third microphone 503 to separately store the audio data 611 for recording and the audio data 612 for subtitles.
  • the electronic device 101 may provide different delay times to audio data input through the first microphone 501 , the second microphone 502 , and the third microphone 503 .
  • the electronic device 101 may generate audio data in which the sound in a specific direction is emphasized by synthesizing the audio data to which the delay times have been assigned. For example, when the electronic device 101 applies delay times corresponding to the direction of the user and of the subject, respectively, to the audio data acquired through the plurality of microphones 205 and adds them, the audio data 611 for recording and the audio data 612 for subtitles may be separated.
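The delay-and-sum idea in the preceding bullets can be sketched as follows: each microphone signal is shifted by a per-microphone steering delay and the shifted signals are averaged, so sound arriving from the steered direction adds coherently while sound from other directions is attenuated. Integer-sample delays and the function name are simplifying assumptions.

```python
def delay_and_sum(mic_signals, delays):
    """Delay-and-sum beamformer sketch: shift each microphone signal by its
    (integer-sample) steering delay, then average across microphones.
    `delays[k]` is the hypothetical number of samples by which microphone k
    lags the steered wavefront."""
    length = len(mic_signals[0])
    out = [0.0] * length
    for sig, d in zip(mic_signals, delays):
        for i in range(length):
            j = i - d
            if 0 <= j < len(sig):
                out[i] += sig[j]
    return [v / len(mic_signals) for v in out]
```

Steering toward the user and toward the subject with two different delay sets would yield the subtitle-directed and recording-directed outputs, respectively.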
  • the operation may be performed through post-processing after the electronic device 101 stores audio data acquired through each microphone 205 .
  • the above operation may be performed by the electronic device 101 using at least two microphones 205 .
  • the electronic device 101 may also perform the aforementioned noise cancellation through the above operation.
  • FIG. 7 is a flowchart illustrating an operation in which the electronic device 101 acquires audio data 612 for subtitles through the external device 102 according to an embodiment.
  • the electronic device 101 acquires audio data 612 for subtitles using a plurality of microphones 205 included in the electronic device 101 or uses the external device 102 to obtain subtitles. Audio data 612 may be obtained.
  • the electronic device 101 may determine whether it is connected to the first external device 102 .
  • the first external device 102 may refer to an electronic device that can acquire audio and that is connected to the electronic device 101 to transmit or receive the acquired audio data.
  • the first external device 102 may be a wired earphone or a wireless earphone. At least a portion of the contents of the external device 102 of FIG. 1 may be applied to the first external device 102 of FIG. 7 .
  • the electronic device 101 may connect to the first external device 102 using wired communication or wireless communication.
  • the electronic device 101 may be connected to a wired earphone using a cable.
  • the electronic device 101 may be connected to the wireless earphone using short-range wireless communication (eg, BT).
  • when the electronic device 101 is connected to the first external device 102, the electronic device 101 may perform operation 703; when the electronic device 101 is not connected to the first external device 102, the electronic device 101 may perform operation 705.
  • the electronic device 101 may acquire second audio data through the first external device 102 , and may acquire first audio data through the electronic device 101 .
  • the first external device 102 may acquire the second audio data corresponding to the voice input of the user 500 for the caption input through the microphone 213 .
  • the electronic device 101 may receive the second audio data obtained by the first external device 102 through the microphone 213 from the first external device 102 .
  • the electronic device 101 may acquire first audio data corresponding to the sound generated by the subject 510 by using at least some of the plurality of microphones 205 .
  • the electronic device 101 may obtain first audio data through the first external device 102 and may obtain second audio data through the electronic device 101 .
  • the electronic device 101 may obtain first audio data and second audio data through the electronic device 101 .
  • the electronic device 101 may obtain second audio data corresponding to a voice input of the user 500 for subtitle input by using some of the plurality of microphones 205 .
  • the electronic device 101 may obtain first audio data corresponding to the sound generated by the subject 510 by using another part of the plurality of microphones 205 .
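The source-selection flow described above (operations 701 through 705 of FIG. 7) can be summarized as a small decision function (a sketch; the function and source names are illustrative, not an API of the disclosure):

```python
# Sketch of the FIG. 7 decision: where each audio stream comes from
# depends on whether an external device (e.g., an earphone) is
# connected. All names here are illustrative placeholders.

def route_audio_sources(external_connected: bool):
    """Return (source_of_first_audio, source_of_second_audio)."""
    if external_connected:
        # Operation 703: subtitle voice via the external microphone,
        # subject sound via the device's internal microphones.
        return ("internal_mics", "external_mic")
    # Operation 705: split the internal microphone array between the
    # subject (first audio) and the user's voice (second audio).
    return ("internal_mics_subset_a", "internal_mics_subset_b")

print(route_audio_sources(True))   # ('internal_mics', 'external_mic')
print(route_audio_sources(False))
```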
  • FIG. 8A illustrates a UI when the electronic device 101 acquires audio data 612 for captions according to an embodiment
  • FIG. 8B illustrates a UI when the electronic device 101 acquires the audio data 612 for subtitles through the external device 102 , according to an embodiment.
  • the electronic device 101 may display, on the display 209 , a first object 801 for obtaining the voice of the user 500 for subtitle input.
  • the first object 801 may not be limited to the shape and location illustrated in FIG. 8A .
  • the first object 801 may be displayed on at least one area of the display 209 .
  • the electronic device 101 may display the first object 801 for caption input on an area of the display 209 while the subject 510 is photographed through the camera 203 .
  • the electronic device 101 may display the first object 801 for caption input on one area of the display 209 in response to starting the photographing through the camera 203 .
  • the electronic device 101 may obtain an input of the user 500 for selecting the first object 801 .
  • the electronic device 101 may obtain the voice input of the user 500 for caption input by using at least some of the plurality of microphones 205 .
  • the electronic device 101 may display a first object 801 for obtaining a voice input of the user 500 for subtitle input, and a second object 802 for obtaining the voice input of the user 500 for subtitle input through the external device 102 .
  • the second object 802 may not be limited to the shape and location illustrated in FIG. 8B .
  • the second object 802 may be displayed on at least one area of the display 209 .
  • while the electronic device 101 captures the subject 510 through the camera 203 in a state of being connected to the external device 102 , the electronic device 101 may display the first object 801 and/or the second object 802 for caption input on an area of the display 209 .
  • the electronic device 101 may obtain an input of the user 500 for selecting the second object 802 .
  • the electronic device 101 may receive, from the external device 102 connected to the electronic device 101 , second audio data corresponding to the voice input of the user 500 for caption input acquired through the microphone 213 included in the external device 102 .
  • the electronic device 101 may obtain an input of the user 500 for selecting the first object 801 .
  • the electronic device 101 may obtain the voice input of the user 500 for caption input by using at least some of the plurality of microphones 205 .
  • FIG 9 illustrates a UI for the electronic device 101 to select a means for acquiring audio data 612 for subtitles according to an embodiment.
  • the electronic device 101 may display a plurality of icons (eg, a first icon 901 , a second icon 902 , and a third icon 903 ) through the display 209 .
  • the first icon 901 may be an icon for acquiring the second audio data (ie, the audio data 612 for subtitles corresponding to the voice input of the user 500 for subtitle input) using the external device 102 (eg, a wired earphone) connected to the electronic device 101 by wire.
  • the second icon 902 may be an icon for acquiring second audio data corresponding to a voice input of the user 500 for caption input using the external device 102 (eg, a wireless earphone) wirelessly connected (eg, via BT) to the electronic device 101 .
  • the third icon 903 may be an icon for the electronic device 101 to obtain second audio data corresponding to the user's voice input using the plurality of microphones 205 included in the electronic device 101 .
  • on the premise that the electronic device 101 obtains the first audio data corresponding to the video data using at least some of the microphones, a means by which the electronic device 101 acquires and stores the second audio data corresponding to the voice input of the user 500 for caption input separately from the first audio data will be described.
  • the electronic device 101 may obtain an input of the user 500 selecting the first icon 901 .
  • in response to obtaining the input of the user 500 selecting the first icon 901 , the electronic device 101 may receive, from the external device 102 connected by wire, the second audio data corresponding to the voice input of the user 500 for caption input.
  • the electronic device 101 may store the second audio data received from the wired external device 102 .
  • the electronic device 101 may obtain an input of the user 500 selecting the second icon 902 .
  • the electronic device 101 may receive, from the wirelessly connected external device 102 , the second audio data corresponding to the voice input of the user 500 for caption input.
  • the electronic device 101 may store the second audio data received from the wirelessly connected external device 102 .
  • 10A, 10B, and 10C illustrate a UI when the electronic device 101 acquires audio data 612 for subtitles through the external device 102 connected through BT, according to an embodiment.
  • an operation will be described in which, while the electronic device 101 records video data through the camera 203 and acquires first audio data corresponding to the video data, the electronic device 101 separately acquires, through the wirelessly connected external device 102 , second audio data corresponding to the voice input of the user 500 for subtitle input.
  • in response to the electronic device 101 obtaining an input of the user 500 for selecting the second icon 902 , the electronic device 101 may display, on one area of the display 209 , a first display 1001 indicating a standby state for caption input through the wirelessly connected external device 102 .
  • the electronic device 101 may display the second display 1002 on one area of the display 209 in response to obtaining an input of the user 500 for selecting the first display 1001 .
  • the second display 1002 may be a display in which a visual effect (eg, highlight) of an arbitrary color (eg, blue) is applied to the first display 1001 .
  • the second display 1002 on which the visual effect is processed may indicate a state in which the electronic device 101 is connected to the external device 102 through short-range wireless communication (eg, BT).
  • the electronic device 101 may display the third display 1003 on one area of the display 209 in response to obtaining an input of the user 500 for selecting the second display 1002 .
  • the third display 1003 may be a display in which a visual effect (eg, blinking) is applied to the first display 1001 .
  • the third display 1003 on which the visual effect has been applied may indicate that the electronic device 101 can obtain the voice input of the user 500 for caption input from the external device 102 connected through short-range wireless communication (eg, BT).
  • the electronic device 101 may provide a voice guidance 'Subtitle input is ready'.
  • in response to the electronic device 101 obtaining an input of the user 500 for ending the voice input for subtitle input, the state in which the electronic device 101 can obtain the voice input of the user 500 for subtitle input may be terminated.
  • the electronic device 101 may provide a voice guidance 'caption input is complete' in response to obtaining the user 500's input of ending voice input for subtitle input.
  • FIG 11 illustrates sync control of the electronic device 101 according to an embodiment.
  • a difference may occur between the time at which the video data is recorded and the time at which the caption is input.
  • the difference between the time at which the electronic device 101 records video data through the camera 203 and the time at which the first external device 102 obtains the voice input of the user 500 for subtitle input may be t1. The time taken for the second audio data corresponding to the voice input of the user 500 acquired by the first external device 102 to be transmitted to the processor 201 of the electronic device 101 may be t2.
  • the time taken for the processor 201 to store the second audio data in the memory 211 may be t3.
  • a time at which the second audio data stored in the memory 211 is provided through the display 209 may be t4.
  • the electronic device 101 may adjust the subtitle sync by applying a time delay corresponding to the time difference. For example, in order to compensate for the time difference, the electronic device 101 may adjust the subtitle sync by applying a time delay corresponding to t1+t2+t3+t4.
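The sync adjustment above amounts to subtracting the accumulated latency t1+t2+t3+t4 from the subtitle timestamp (a minimal sketch; the millisecond values are illustrative, not from the disclosure):

```python
# Sketch of the sync compensation of FIG. 11: the subtitle display time
# is pulled back by the accumulated latency t1 + t2 + t3 + t4 so that
# text lines up with the recorded video. Millisecond values are made up.

def compensated_timestamp(capture_ms, t1, t2, t3, t4):
    """Map the time a caption became visible back to the video timeline."""
    total_delay = t1 + t2 + t3 + t4
    return capture_ms - total_delay

# e.g., a caption surfaced at 12_500 ms with 40+15+5+20 ms of latency:
print(compensated_timestamp(12_500, 40, 15, 5, 20))  # 12420
```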
  • FIG. 12 is a diagram illustrating a caption editing UI of the electronic device 101 according to an exemplary embodiment.
  • FIG. 12(a) illustrates a UI for the electronic device 101 to edit subtitles and supplement sync, and FIG. 12(b) illustrates a UI for the electronic device 101 to change a language or add a language.
  • the electronic device 101 may display the first subtitle 1201 on one area of the display 209 on which a content image corresponding to video data is provided.
  • the electronic device 101 may provide a plurality of touch buttons 1211 , 1212 , 1213 , 1214 , and 1215 for subtitle editing and sync complementation.
  • the first button 1211 may be a button for adjusting the subtitle sync forward
  • the second button 1212 may be a button for adjusting the subtitle sync backward.
  • the third button 1213 may be a button for adding another language
  • the fourth button 1214 may be a button for changing the current language to another language.
  • the fifth button 1215 may be a button for confirming corrections to which subtitle editing and sync supplementation are applied.
  • the electronic device 101 may display the second subtitle 1202 in one area of the display 209 on which a content image corresponding to video data is provided.
  • the second subtitle 1202 may be a subtitle displayed by adding a language different from the language applied to the first subtitle 1201 .
  • the second subtitle 1202 may be a subtitle displayed by adding an English subtitle to the Korean subtitle displayed on the first subtitle 1201 .
  • the electronic device 101 may create a subtitle file separate from the moving picture file, and may modify the subtitle through a subtitle editor.
  • the subtitle file may be provided as an smi file created with the title of the video file so that it can be edited with a separate tool.
  • subtitle editing may be provided in association with a DeX station.
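The smi-file creation described above might be sketched as follows (a minimal SAMI writer; the tag set and file layout are a simplified assumption, and real players may require additional header markup):

```python
# Minimal sketch of writing a SAMI (.smi) subtitle file named after the
# video file, as described above. The markup here is a bare-bones
# assumption; production players often expect a <HEAD> with styles.
import os

def write_smi(video_path, cues):
    """cues: list of (start_ms, text). Writes <video name>.smi."""
    base, _ = os.path.splitext(video_path)
    lines = ["<SAMI>", "<BODY>"]
    for start_ms, text in cues:
        lines.append(f"<SYNC Start={start_ms}><P>{text}")
    lines += ["</BODY>", "</SAMI>"]
    smi_path = base + ".smi"
    with open(smi_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))
    return smi_path

path = write_smi("my_video.mp4", [(0, "Hello"), (1500, "World")])
print(path)  # my_video.smi
```

Because the subtitle lives in its own file keyed to the video title, a separate tool (or the subtitle editor of FIG. 12) can rewrite the cues without touching the recorded video.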
  • FIG. 13 is a diagram illustrating a caption editing UI of the electronic device 101 according to an exemplary embodiment.
  • the electronic device 101 may provide a UI for displaying the portions at which a caption was input in a content image corresponding to video data. For example, the electronic device 101 may display, on the display 209 , points (eg, a first point 1301 , a second point 1303 , and a third point 1305 ) indicating the portions at which a caption was input. The electronic device 101 may provide a content image corresponding to a selected point through the display 209 in response to obtaining an input for selecting one of the points indicating a portion at which a caption was input. For example, in response to the electronic device 101 obtaining an input for selecting the second point 1303 , the electronic device 101 may provide, on the display 209 , the content image 1310 corresponding to the selected second point 1303 .
  • FIG. 14 is a diagram illustrating a caption editing UI of the electronic device 101 according to an exemplary embodiment.
  • when video data is recorded through the camera 203 or captured video data is reproduced, the electronic device 101 may automatically convert the color of the subtitle 1403 in response to recognizing the color of the background 1401 on which the subtitle is located. For example, in response to recognizing that the color of the background 1401 on which the subtitle is located is a dark color (eg, black) while capturing a video through the camera 203 at night, the electronic device 101 may convert the color of the subtitle 1403 to a light color (eg, white).
  • the content regarding the color conversion of the subtitle 1403 is not limited to the above-described example.
  • the electronic device 101 may change the color of the subtitle 1403 based on the color of the subtitle 1403 and the color of the background 1401 on which the subtitle 1403 is located.
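The background-contrast rule above can be sketched with a relative-luminance test (the Rec. 709 channel weights and the 0.5 threshold are illustrative choices, not values from the disclosure):

```python
# Sketch of the contrast rule of FIG. 14: pick a light subtitle color on
# dark backgrounds and vice versa, using relative luminance. The 0.5
# threshold and the two output colors are illustrative assumptions.

def subtitle_color(bg_rgb):
    r, g, b = (c / 255.0 for c in bg_rgb)
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights
    return (255, 255, 255) if luminance < 0.5 else (0, 0, 0)

print(subtitle_color((0, 0, 0)))        # white on black: (255, 255, 255)
print(subtitle_color((240, 240, 240)))  # black on light: (0, 0, 0)
```

In practice the background color would be sampled from the region of the frame behind the subtitle before applying such a rule.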
  • the electronic device 101 may include a plurality of microphones 205 , a camera 203 , and a processor 201 operatively connected to the plurality of microphones 205 and the camera 203 .
  • the processor 201 of the electronic device 101 may record video data using the camera 203 ; acquire, while the video data is recorded, first audio data corresponding to the video data using the plurality of microphones 205 ; acquire, while the video data is recorded, a first user input associated with a user's voice input; separately acquire, during a time period specified by the first user input, the first audio data and second audio data corresponding to the voice input of the user using the plurality of microphones 205 ; and, in response to an event of terminating recording of the video data, generate moving picture data based on the video data, the first audio data, and the second audio data.
  • the electronic device 101 may further include a display 209 , and the processor 201 may display text corresponding to the second audio data on the display 209 while outputting content corresponding to the video data through the display 209 .
  • in response to starting to record the video data using the camera 203 , the processor 201 of the electronic device 101 may display, on the display 209 , at least one object for selecting a method of acquiring the user's voice input.
  • the first user input may include at least one of a button input, a touch input, a gesture input, and a voice input.
  • the electronic device 101 may further include a memory 211 , and the processor 201 may separately store the acquired first audio data and the second audio data in the memory 211 .
  • a first microphone of the plurality of microphones 205 may be disposed on a side surface of the electronic device 101 , a second microphone of the plurality of microphones 205 may be disposed on a front surface of the electronic device 101 , and a third microphone of the plurality of microphones 205 may be disposed on a rear surface of the electronic device 101 .
  • in response to obtaining the first user input, the processor 201 of the electronic device 101 may obtain the second audio data corresponding to the user's voice input using the first microphone and the second microphone, beamformed toward the user, among the plurality of microphones 205 , and, in response to obtaining the first user input, may obtain the first audio data corresponding to the video data using the third microphone among the plurality of microphones 205 .
  • in response to the specified time period elapsing or the user's voice input ending, the processor 201 may switch the beamforming of the first microphone and the second microphone, which were beamformed toward the user, to obtain the first audio data corresponding to the video data.
  • in response to obtaining the first user input, the processor 201 of the electronic device 101 may obtain the second audio data corresponding to the user's voice input using the first microphone, the second microphone, and the third microphone, beamformed toward the user, among the plurality of microphones 205 .
  • in response to obtaining the first user input, the processor 201 of the electronic device 101 may obtain the second audio data corresponding to the user's voice input using the first microphone and the second microphone among the plurality of microphones 205 .
  • the processor 201 of the electronic device 101 may acquire a plurality of voices using the plurality of microphones 205 , determine whether a difference in the times at which the plurality of voices are acquired exists, store voices having the time difference as the second audio data corresponding to the user's voice input, and store voices having no time difference as the first audio data corresponding to the video data.
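The arrival-time classification above can be sketched as follows (a simplified illustration; the per-microphone arrival values and the zero-difference criterion are hypothetical):

```python
# Sketch of the arrival-time classification: a voice whose arrival time
# differs across microphones is stored as second audio data (the user's
# voice input); a voice arriving at all microphones simultaneously is
# stored as first audio data. Values are illustrative only.

def classify(arrival_samples):
    """arrival_samples: per-microphone arrival time of one voice."""
    has_time_difference = max(arrival_samples) != min(arrival_samples)
    return "second_audio" if has_time_difference else "first_audio"

print(classify([5, 8, 6]))  # second_audio (user's voice input)
print(classify([7, 7, 7]))  # first_audio (subject sound)
```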
  • the processor 201 of the electronic device 101 may be connected to the first external device 102 , receive, from the first external device 102 , data corresponding to the user's voice input obtained by the first external device 102 connected to the electronic device 101 , and store the data received from the first external device 102 as the second audio data corresponding to the user's voice input.
  • the electronic device 101 may further include a communication circuit 207 .
  • the processor 201 of the electronic device 101 may be connected to the first external device 102 using short-range wireless communication through the communication circuit 207 , or may be connected to the first external device 102 using a wired cable.
  • the method of operating the electronic device 101 may include: recording video data using the camera 203 ; acquiring, while the video data is recorded, first audio data corresponding to the video data using the plurality of microphones 205 ; acquiring, while the video data is recorded, a first user input associated with a user's voice input; separately acquiring, during a time period specified by the first user input, the first audio data and second audio data corresponding to the voice input of the user using the plurality of microphones 205 ; and, in response to an event of terminating recording of the video data, generating moving picture data based on the video data, the first audio data, and the second audio data.
  • the method of operating the electronic device 101 may further include displaying text corresponding to the second audio data on the display 209 while outputting the content corresponding to the video data through the display 209 .
  • the method of operating the electronic device 101 may further include displaying, on the display 209 , at least one object for selecting a method of acquiring the user's voice input.
  • the method of operating the electronic device 101 may further include the operation of separately storing the obtained first audio data and the second audio data in a memory.
  • the method may further include obtaining the second audio data corresponding to the user's voice input using a first microphone and a second microphone beamformed toward the user among the plurality of microphones 205 in response to obtaining the first user input, and obtaining the first audio data corresponding to the video data using a third microphone among the plurality of microphones 205 in response to obtaining the first user input.
  • the method may further include acquiring a plurality of voices using the plurality of microphones 205 , determining whether a difference in the times at which the plurality of voices are acquired exists, storing the voices having the time difference as the second audio data corresponding to the user's voice input, and storing the voices having no time difference as the first audio data corresponding to the video data.
  • the method may further include connecting to the first external device 102 using short-range wireless communication through the communication circuit 207 or using a wired cable, receiving, from the first external device 102 , data corresponding to the user's voice input obtained by the first external device 102 connected to the electronic device 101 , and storing the data received from the first external device 102 as the second audio data corresponding to the user's voice input.
  • FIG. 15 is a perspective view illustrating a front surface of an electronic device 1500 (eg, the electronic device 101 of FIG. 1 or FIG. 2 ) according to an embodiment.
  • FIG. 16 is a perspective view illustrating a rear surface of the electronic device 1500 (eg, the electronic device 101 of FIG. 1 or FIG. 2 ) according to an embodiment.
  • an electronic device 1500 may include a housing 1510 including a first surface (or front surface) 1510A, a second surface (or rear surface) 1510B, and a side surface 1510C surrounding the space between the first surface 1510A and the second surface 1510B.
  • the housing may refer to a structure that forms part of the first surface 1510A of FIG. 15 , the second surface 1510B of FIG. 16 , and side surfaces 1510C of FIG. 15 .
  • the first surface 1510A may be formed by a front plate 1502 (eg, a glass plate including various coating layers, or a polymer plate), at least a portion of which is substantially transparent.
  • the front plate 1502 may be coupled to the housing 1510 to form an internal space together with the housing 1510 .
  • the term 'internal space' may mean a space accommodating at least a portion of the display 1501 as an internal space of the housing 1510 .
  • the second surface 1510B may be formed by a substantially opaque back plate 1511 .
  • the back plate 1511 may be formed, for example, by coated or colored glass, ceramic, polymer, metal (eg, aluminum, stainless steel (STS), or magnesium), or a combination of at least two of the above materials.
  • the side surface 1510C is coupled to the front plate 1502 and the rear plate 1511 and may be formed by a side bezel structure (or “side member”) 1518 comprising a metal and/or a polymer.
  • the back plate 1511 and the side bezel structure 1518 are integrally formed and may include the same material (eg, a metal material such as aluminum).
  • the front plate 1502 may include two first regions 1510D (eg, curved regions), which are bent from the first surface 1510A toward the rear plate 1511 and extend seamlessly, at both ends of the long edge of the front plate 1502 .
  • the rear plate 1511 may include two second regions 1510E (eg, curved regions), which are bent from the second surface 1510B toward the front plate 1502 and extend seamlessly, at both ends of the long edge of the rear plate 1511 .
  • the front plate 1502 (or the back plate 1511 ) may include only one of the first regions 1510D (or the second regions 1510E). In another embodiment, some of the first regions 1510D or the second regions 1510E may not be included.
  • when viewed from the side of the electronic device 1500 , the side bezel structure 1518 may have a first thickness (or width) on a side that does not include the first region 1510D or the second region 1510E (eg, the side on which the connector hole 1508 is formed), and may have a second thickness, thinner than the first thickness, on a side that includes the first region 1510D or the second region 1510E (eg, the side on which the key input device 1517 is disposed).
  • the electronic device 1500 may include at least one of a display 1501 , audio modules 1503 , 1507 , and 1514 , a sensor module 1504 , camera modules 1505 and 1555 , a key input device 1517 , a light emitting element 1506 , and connector holes 1508 and 1509 .
  • the electronic device 1500 may omit at least one of the components (eg, the key input device 1517 or the light emitting device 1506 ) or additionally include other components.
  • Display 1501 may be exposed through a substantial portion of front plate 1502 , for example. In various embodiments, at least a portion of the display 1501 may be exposed through the front plate 1502 forming the first area 1510D of the first surface 1510A and the side surface 1510C. In various embodiments, an edge of the display 1501 may be formed to have substantially the same shape as an adjacent outer shape of the front plate 1502 . In another embodiment (not shown), in order to expand an area to which the display 1501 is exposed, the distance between the periphery of the display 1501 and the periphery of the front plate 1502 may be formed to be substantially the same.
  • a recess or opening may be formed in a part of the screen display area (eg, active area) of the display 1501 or an area outside the screen display area (eg, inactive area), and at least one of an audio module 1514 , a sensor module 1504 , camera modules 1505 and 1555 , and a light emitting element 1506 may be aligned with the recess or opening.
  • at least one of an audio module 1514 , a sensor module 1504 , camera modules 1505 and 1555 , and a light emitting element 1506 may be included on the rear surface of the screen display area of the display 1501 .
  • the display 1501 may be coupled to, or disposed adjacent to, a touch sensing circuit, a pressure sensor capable of measuring the intensity (pressure) of a touch, and/or a digitizer detecting a magnetic-field-type stylus pen.
  • at least a portion of the sensor module 1504 and/or at least a portion of the key input device 1517 may be disposed in the first regions 1510D and/or the second regions 1510E.
  • the audio modules 1503 , 1507 , and 1514 may include a microphone hole 1503 and speaker holes 1507 and 1514 .
  • a microphone for acquiring external sound may be disposed inside the microphone hole 1503 , and in various embodiments, a plurality of microphones may be disposed to detect the direction of a sound.
  • the speaker holes 1507 and 1514 may include an external speaker hole 1507 and a receiver hole 1514 for a call.
  • the speaker holes 1507 and 1514 and the microphone hole 1503 may be implemented as a single hole, or a speaker may be included without the speaker holes 1507 and 1514 (eg, a piezo speaker).
  • the sensor module 1504 may generate an electrical signal or data value corresponding to an internal operating state of the electronic device 1500 or an external environmental state.
  • the sensor module 1504 may include, for example, a first sensor module 1504 (eg, a proximity sensor) and/or a second sensor module (not shown) disposed on a first side 1510A of the housing 1510 ( Example: a fingerprint sensor), and/or another sensor module (not shown) disposed on the second surface 1510B of the housing 1510 (eg, an HRM sensor or a fingerprint sensor).
  • the fingerprint sensor may be disposed on the second surface 1510B as well as the first surface 1510A (eg, the display 1501 ) of the housing 1510 .
  • the electronic device 1500 may further include at least one sensor module not shown, for example, a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor 1504 .
  • the camera modules 1505 and 1555 include a first camera device 1505 disposed on the first surface 1510A of the electronic device 1500 , and a second camera device 1555 disposed on the second surface 1510B of the electronic device 1500 .
  • the camera modules 1505 and 1555 may include one or more lenses, an image sensor, and/or an image signal processor.
  • a flash (not shown) may be disposed on the second surface 1510B.
  • the flash may include, for example, a light emitting diode or a xenon lamp.
  • two or more lenses (infrared cameras, wide-angle and telephoto lenses) and image sensors may be disposed on one side of the electronic device 1500 .
  • the key input device 1517 may be disposed on a side surface 1510C of the housing 1510 .
  • the electronic device 1500 may not include some or all of the above-mentioned key input devices 1517 , and a key input device 1517 that is not included may be implemented in another form, such as a soft key displayed on the display 1501 .
  • the light emitting element 1506 may be disposed, for example, on the first surface 1510A of the housing 1510 .
  • the light emitting device 1506 may provide, for example, state information of the electronic device 1500 in the form of light.
  • the light emitting device 1506 may provide a light source that is interlocked with the operation of the camera module 1505 , for example.
  • Light emitting element 1506 may include, for example, LEDs, IR LEDs, and xenon lamps.
  • the connector holes 1508 and 1509 may include a first connector hole 1508 capable of receiving a connector (eg, a USB connector) for transmitting and receiving power and/or data to and from an external electronic device, and/or a second connector hole (eg, an earphone jack) 1509 capable of accommodating a connector for transmitting and receiving audio signals to and from an external electronic device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

An electronic device according to various embodiments disclosed herein includes microphones, a camera, and a processor operatively connected to the microphones and the camera. The processor may: record video data using the camera; while the video data is recorded, obtain first audio data corresponding to the video data using the plurality of microphones; while the video data is recorded, obtain a first user input associated with a voice input of a user; during a time period designated by the first user input, separately obtain, using the plurality of microphones, the first audio data and second audio data corresponding to the voice input of the user; and, in response to an event that terminates recording of the video data, generate moving-picture data based on the video data, the first audio data, and the second audio data.
PCT/KR2021/010817 2020-08-18 2021-08-13 Electronic device comprising a camera and microphones Ceased WO2022039457A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0103340 2020-08-18
KR1020200103340A KR20220022315A (ko) 2020-08-18 2020-08-18 Electronic device including camera and microphone

Publications (1)

Publication Number Publication Date
WO2022039457A1 true WO2022039457A1 (fr) 2022-02-24

Family

ID=80350489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/010817 Ceased WO2022039457A1 (fr) 2020-08-18 2021-08-13 Electronic device comprising camera and microphones

Country Status (2)

Country Link
KR (1) KR20220022315A (fr)
WO (1) WO2022039457A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025058267A1 (fr) * 2023-09-15 2025-03-20 Samsung Electronics Co., Ltd. Electronic device and method for controlling output of audio signal using same

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084037B2 (en) * 2009-07-24 2015-07-14 Koninklijke Philips N.V. Audio beamforming
KR20160026585A (ko) * 2014-09-01 2016-03-09 Samsung Electronics Co., Ltd. Electronic device including plurality of microphones and method for operating same
KR20160133335A (ko) * 2015-05-12 2016-11-22 이석희 Voice-recognition-type stereoscopic digital image implementation system
KR101753715B1 (ko) * 2010-12-13 2017-07-04 Samsung Electronics Co., Ltd. Photographing apparatus and photographing method using the same
KR20190107623A (ko) * 2019-09-02 2019-09-20 LG Electronics Inc. Signage device and operating method thereof

Also Published As

Publication number Publication date
KR20220022315A (ko) 2022-02-25

Similar Documents

Publication Publication Date Title
WO2022030882A1 Electronic device for processing audio data, and method for operating same
WO2022114801A1 Electronic device comprising plurality of cameras, and method for controlling electronic device
WO2022065827A1 Method for capturing images by means of wireless communication, and electronic device supporting same
WO2020171342A1 Electronic device for providing visualized artificial intelligence service on basis of information about external object, and operating method for electronic device
WO2022149812A1 Electronic device comprising camera module, and method for operating electronic device
WO2023054957A1 Method for providing video, and electronic device supporting same
WO2022231180A1 Electronic device and operating method of electronic device
WO2022039457A1 Electronic device comprising camera and microphones
WO2024225788A1 Method and electronic device for requesting remote control of camera
WO2023085679A1 Electronic device and method for automatically generating edited video
WO2023101179A1 Electronic device having flexible display and method for controlling camera module thereof
WO2022186646A1 Electronic device for image generation, and operating method of electronic device
WO2022154440A1 Electronic device for processing audio data and operation method thereof
WO2022245173A1 Electronic device and operating method therefor
WO2022030908A1 Electronic device and method for synchronizing video data and audio data by using same
WO2022124659A1 Electronic device and method for processing user input
WO2022154415A1 Electronic device and method for operating avatar video service
WO2022145673A1 Electronic device and operating method of electronic device
WO2024128637A1 Electronic device for adjusting volume of each speaker, and operating method and storage medium therefor
WO2024262843A1 Method for generating feedback information for remote photographing, and electronic device therefor
WO2023080401A1 Method and device for sound recording by electronic device using earphones
WO2023038252A1 Electronic device for capturing moving image and operating method thereof
WO2022186477A1 Content playback method and electronic device supporting same
WO2022203211A1 Electronic device comprising camera module and method for operating electronic device
WO2025089604A1 Electronic device for capturing video including event section, operating method thereof, and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 21858544; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 21858544; Country of ref document: EP; Kind code of ref document: A1