AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

Post

Replies

Boosts

Views

Activity

How can I locate a UVC camera for PTZ control by AVCaptureDevice.unique_id
I'm writing a program to control a PTZ camera connected via USB. I can get the target camera's unique_id, along with the other information AVFoundation provides, but I don't know how to locate the corresponding USB device in order to send a UVC control request. Several cameras with the same VendorID and ProductID can be connected at the same time, so I need a more exact way to find out which device is my target. It looks like the unique_id is (locationID<<32 | VendorID<<16 | ProductID) formatted as a hex string, but I'm not sure I can assume this behavior will never change. Is there a document that describes how AVFoundation generates the unique_id for a USB camera, so I can rely on this conversion always working? Or is there a way to send a PTZ control request directly to an AVCaptureDevice?

I have seen this similar question: https://stackoverflow.com/questions/40006908/usb-interface-of-an-avcapturedevice. My concern is that extracting locationID + VendorID + ProductID from unique_id feels like programming to the implementation rather than the interface. Is there a better way to control my camera?

Here's my example code for getting unique_id:

//
// camera_unique_id_test.mm
//
// Test code: use C++ to get the AVCaptureDevice unique_id of every camera on the system.
//
// Build command:
// clang++ -framework AVFoundation -framework CoreMedia -framework Foundation
//   camera_unique_id_test.mm -o camera_unique_id_test
//
#include <iostream>
#include <string>
#include <vector>

#import <AVFoundation/AVFoundation.h>
#import <Foundation/Foundation.h>

struct CameraInfo {
    std::string uniqueId;
};

std::vector<CameraInfo> getAllCameraDevices() {
    std::vector<CameraInfo> cameras;
    @autoreleasepool {
        NSArray<AVCaptureDevice*>* devices =
            [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        AVCaptureDevice* defaultDevice =
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

        // Iterate over all devices
        for (AVCaptureDevice* device in devices) {
            CameraInfo info;
            // Get unique_id
            info.uniqueId = std::string([device.uniqueID UTF8String]);
            cameras.push_back(info);
        }
    }
    return cameras;
}

int main(int argc, char* argv[]) {
    std::vector<CameraInfo> cameras = getAllCameraDevices();
    for (size_t i = 0; i < cameras.size(); i++) {
        const CameraInfo& camera = cameras[i];
        std::cout << " Device " << (i + 1) << ":" << std::endl;
        std::cout << "   unique_id: " << camera.uniqueId << std::endl;
    }
    return 0;
}

And here's my code for UVC control:

// clang++ -framework Foundation -framework IOKit uvc_test.cpp -o uvc_test
#include <iostream>

#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/IOMessage.h>
#include <IOKit/usb/IOUSBLib.h>
#include <IOKit/usb/USB.h>

CFStringRef CreateCFStringFromIORegistryKey(io_service_t ioService, const char* key) {
    CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key, kCFStringEncodingUTF8);
    if (!keyString) return nullptr;

    CFStringRef result = static_cast<CFStringRef>(
        IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
                                        kIORegistryIterateRecursively));
    CFRelease(keyString);
    return result;
}

std::string GetStringFromIORegistry(io_service_t ioService, const char* key) {
    CFStringRef cfString = CreateCFStringFromIORegistryKey(ioService, key);
    if (!cfString) return "";

    char buffer[256];
    Boolean success = CFStringGetCString(cfString, buffer, sizeof(buffer), kCFStringEncodingUTF8);
    CFRelease(cfString);
    return success ? std::string(buffer) : std::string("");
}

uint32_t GetUInt32FromIORegistry(io_service_t ioService, const char* key) {
    CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key, kCFStringEncodingUTF8);
    if (!keyString) return 0;

    CFNumberRef number = static_cast<CFNumberRef>(
        IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
                                        kIORegistryIterateRecursively));
    CFRelease(keyString);
    if (!number) return 0;

    uint32_t value = 0;
    CFNumberGetValue(number, kCFNumberSInt32Type, &value);
    CFRelease(number);
    return value;
}

int main() {
    // Get matching dictionary for USB devices
    CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);

    // Get iterator for matching services
    io_iterator_t serviceIterator;
    IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict, &serviceIterator);

    // Iterate through matching devices
    io_service_t usbService;
    while ((usbService = IOIteratorNext(serviceIterator))) {
        uint32_t locationId = GetUInt32FromIORegistry(usbService, "locationID");
        uint32_t vendorId = GetUInt32FromIORegistry(usbService, "idVendor");
        uint32_t productId = GetUInt32FromIORegistry(usbService, "idProduct");

        IOCFPlugInInterface** plugInInterface = nullptr;
        IOUSBDeviceInterface** deviceInterface = nullptr;
        SInt32 score;

        // Get device plugin interface
        IOCreatePlugInInterfaceForService(usbService, kIOUSBDeviceUserClientTypeID,
                                          kIOCFPlugInInterfaceID, &plugInInterface, &score);

        // Get device interface
        (*plugInInterface)
            ->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                             (LPVOID*)&deviceInterface);
        (*plugInInterface)->Release(plugInInterface);

        // Try to find UVC control interface using CreateInterfaceIterator
        io_iterator_t interfaceIterator;
        IOUSBFindInterfaceRequest interfaceRequest;
        interfaceRequest.bInterfaceClass = kUSBVideoInterfaceClass;      // 14
        interfaceRequest.bInterfaceSubClass = kUSBVideoControlSubClass;  // 1
        interfaceRequest.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
        interfaceRequest.bAlternateSetting = kIOUSBFindInterfaceDontCare;

        (*deviceInterface)
            ->CreateInterfaceIterator(deviceInterface, &interfaceRequest, &interfaceIterator);
        (*deviceInterface)->Release(deviceInterface);

        io_service_t usbInterface = IOIteratorNext(interfaceIterator);
        IOObjectRelease(interfaceIterator);

        if (usbInterface) {
            std::cout << "Get UVC device with:" << std::endl;
            std::cout << "locationId: " << std::hex << locationId << std::endl;
            std::cout << "vendorId: " << std::hex << vendorId << std::endl;
            std::cout << "productId: " << std::hex << productId << std::endl << std::endl;
            IOObjectRelease(usbInterface);
        }

        IOObjectRelease(usbService);
    }
    IOObjectRelease(serviceIterator);
}
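As an illustration only (the unique_id layout is not documented and may change), a Swift sketch that decodes uniqueID under the assumption described above and prints the pieces to match against the IORegistry properties might look like this:

import AVFoundation

// Hypothetical decoding of an AVCaptureDevice.uniqueID for a USB camera,
// assuming (without any guarantee) the observed layout
// (locationID << 32) | (vendorID << 16) | productID, rendered as a hex string.
func decodeUSBCameraUniqueID(_ uniqueID: String) -> (locationID: UInt32, vendorID: UInt16, productID: UInt16)? {
    let hex = uniqueID.hasPrefix("0x") ? String(uniqueID.dropFirst(2)) : uniqueID
    guard let value = UInt64(hex, radix: 16) else { return nil }
    return (UInt32(truncatingIfNeeded: value >> 32),
            UInt16(truncatingIfNeeded: value >> 16),
            UInt16(truncatingIfNeeded: value))
}

// Usage: compare the decoded triple against the IORegistry properties
// (locationID, idVendor, idProduct) before sending the UVC request.
// .external requires macOS 14; use .externalUnknown on older systems.
let devices = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
).devices

for device in devices {
    if let ids = decodeUSBCameraUniqueID(device.uniqueID) {
        print(device.localizedName,
              String(format: "location: 0x%08X vendor: 0x%04X product: 0x%04X",
                     ids.locationID, ids.vendorID, ids.productID))
    }
}

If the layout ever changes, this silently stops matching anything, which is exactly the "programming to the implementation" risk raised above.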
0
0
69
7h
LockedCameraCapture Does Not Launch The App from Lock Screen
My implementation of LockedCameraCapture does not launch my app when tapped from the Lock Screen, but when the same widget is in Control Center, it launches the app successfully.

Standard Xcode target template, Lock_Screen_Capture.swift:

@main
struct Lock_Screen_Capture: LockedCameraCaptureExtension {
    var body: some LockedCameraCaptureExtensionScene {
        LockedCameraCaptureUIScene { session in
            Lock_Screen_CaptureViewFinder(session: session)
        }
    }
}

Lock_Screen_CaptureViewFinder.swift:

import SwiftUI
import UIKit
import UniformTypeIdentifiers
import LockedCameraCapture

struct Lock_Screen_CaptureViewFinder: UIViewControllerRepresentable {
    let session: LockedCameraCaptureSession
    var sourceType: UIImagePickerController.SourceType = .camera

    init(session: LockedCameraCaptureSession) {
        self.session = session
    }

    func makeUIViewController(context: Self.Context) -> UIImagePickerController {
        let imagePicker = UIImagePickerController()
        imagePicker.sourceType = sourceType
        imagePicker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]
        imagePicker.cameraDevice = .rear
        return imagePicker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Self.Context) { }
}

Then I have my widget:

struct CameraWidgetControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(
            kind: "com.myCompany.myAppName.lock-screen") {
            ControlWidgetButton(action: MyAppCaptureIntent()) {
                Label("Capture", systemImage: "camera.shutter.button.fill")
            }
        }
    }
}

My AppIntent:

struct MyAppContext: Codable {}

struct MyAppCaptureIntent: CameraCaptureIntent {
    typealias AppContext = MyAppContext

    static let title: LocalizedStringResource = "MyAppCaptureIntent"
    static let description = IntentDescription("Capture photos and videos with MyApp.")

    @MainActor
    func perform() async throws -> some IntentResult {
        .result()
    }
}

The Issue
The LockedCameraCapture widget does not launch my app when tapped from the Lock Screen. You get the Face ID prompt and are then taken to the Home Screen. But when the same widget is in Control Center, it launches the app successfully.

Error Message
When tapped on the Lock Screen, I get the following error output:

LaunchServices: store <private> or url <private> was nil: Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database" UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Attempt to map database failed: permission was denied. This attempt will not be retried.
Failed to initialize client context with error Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database" UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}

Things I tried
- The widget image displays correctly.
- The App ID and provisioning profile seem fine, since the same code works when injected into the AVCam sample app using the same App IDs.
- The AppIntent file has target membership in both the Lock Screen capture extension and the widget.
- The app compiles without errors or warnings.
1
0
151
1d
Torch Freezes Ultra-Wide Camera When Switching Between Wide & Ultra-Wide Lenses (AVFoundation Bug?)
I'm developing an iOS app using AVFoundation for real-time video capture and object detection. While implementing torch functionality with camera switching (between the Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.

Issue
- If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes.
- If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes.
- The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.

Code snippet

DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }

    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"

    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }

    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()

        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }

        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }

        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()

        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }

        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}

Expected Behavior
- The torch should work when Ultra-Wide is active, just like in the iPhone Camera app.
- The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
- The AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.

Questions & Help Needed
- Is this a known issue with AVFoundation?
- How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
- What's the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?

Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
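For what it's worth, a minimal sketch of one approach (not a confirmed fix): enable the torch only after commitConfiguration(), through the newly selected device, and only if that device actually reports torch support. Names and structure below are illustrative, not the poster's code; whether the Ultra-Wide module exposes a torch at all is what hasTorch answers.

import AVFoundation

// Hedged sketch: set the torch on the device that is actually delivering video,
// after the session configuration change has been committed.
func setTorch(_ enabled: Bool, on device: AVCaptureDevice) {
    // The Ultra-Wide module may not expose a torch at all; check before touching it.
    guard device.hasTorch else {
        print("\(device.localizedName) has no torch")
        return
    }
    do {
        try device.lockForConfiguration()
        device.torchMode = (enabled && device.isTorchModeSupported(.on)) ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}

Calling this with the newly added device after commitConfiguration(), instead of forcing the torch through the Wide camera while the Ultra-Wide input is being installed, at least rules out configuration ordering as the cause of the freeze.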
2
1
623
1d
About Cinematic Video Capture
Hi everyone! I'm trying to implement a cinematic video shooting mode. While following the guide "Capture cinematic video in your app", I added the following code to obtain metadata information:

metadataOutput.metadataObjectTypes = metadataOutput.requiredMetadataObjectTypesForCinematicVideoCapture

However, the following error appears in the console:

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'When Cinematic Video is enabled, AVCaptureMetadataOutput's metadataObjectTypes must match requiredMetadataObjectTypesForCinematicVideoCapture.'

How can I resolve this issue?
0
0
96
1d
iOS - AVAudioRecorder fails to record
Hi, I'm trying to record audio on the iPhone with AVAudioRecorder and Xcode 26.0.1. Maybe the problem is that I can't record audio in the Simulator, but the Simulator does have a menu for audio input. In the Info.plist I added 'Privacy - Microphone Usage Description', and I ask for permission before recording.

if await AVAudioApplication.requestRecordPermission() {
    print("permission granted")
    recordPermission = true
} else {
    print("permission denied")
}

Permission is granted.

let settings: [String : Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]

recorder = try AVAudioRecorder(url: filename, settings: settings)
let prepared = recorder.prepareToRecord()
print("prepared started: \(prepared)")
let started = recorder.record()
print("recording started: \(started)")

started is always false, and I have tried many different settings.

Error messages:

AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46
AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50
AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
prepared started: true
AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5
recording started: false

All the examples I can find look the same as mine, but apparently something must be different.
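A hedged sketch, not a verified fix: the -50 / AudioConverterNew messages suggest the requested recording format is being rejected, so two things worth trying are activating the audio session for recording before creating the recorder, and using a more common AAC configuration (44.1 kHz). fileURL is assumed to be a writable URL chosen by the app.

import AVFoundation

// Hedged example configuration: activate the session, then record AAC at 44.1 kHz.
func startRecording(to fileURL: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]

    let recorder = try AVAudioRecorder(url: fileURL, settings: settings)
    recorder.prepareToRecord()
    let started = recorder.record()
    print("recording started: \(started)")
    return recorder
}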
1
0
139
3d
[AVFCore] iOS 26.0 EXC_BAD_ACCESS from _customCompositorShouldCancelPendingFrames
Hi, I'm working on video editing software that lets you composite and export videos. I use a custom compositor to apply my effects, etc. In my crash dashboard, I am seeing a report of an EXC_BAD_ACCESS crash from objc_msgSend. Below is the stack trace.

libobjc.A.dylib objc_msgSend
libdispatch.dylib _dispatch_sync_invoke_and_complete_recurse
libdispatch.dylib _dispatch_sync_f_slow
[symbolication failed]
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_barrier_sync_invoke_and_complete
AVFCore -[AVCustomVideoCompositorSession(AVCustomVideoCompositorSession_FigCallbackHandling) _customCompositorShouldCancelPendingFrames]
AVFCore _customCompositorShouldCancelPendingFramesCallback
MediaToolbox remoteVideoCompositor_HandleVideoCompositorClientMessage
CoreMedia __figXPCConnection_CallClientMessageHandlers_block_invoke
libdispatch.dylib _dispatch_call_block_and_release
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_serial_drain
libdispatch.dylib _dispatch_lane_invoke
libdispatch.dylib _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib _dispatch_workloop_worker_thread
libsystem_pthread.dylib _pthread_wqthread
libsystem_pthread.dylib start_wqthread

What stood out to me is that this is only being reported from iOS 26.0+ devices. Part of the stack trace failed to symbolicate ([symbolication failed]), but I'm 90% confident that this is Apple code, not my app's code. I cannot reproduce this locally. Is this a known issue? What are the possible root causes, and how can I verify or eliminate them? Thanks,
0
0
24
6d
Handling AVAudioEngine Configuration Change
Hi all, I have been quite stumped by this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.

Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine. Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled on it are purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.

Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset? I am not quite sure what my observer of AVAudioEngineConfigurationChange should be doing in my case, as this engine only handles output; all input goes through a separate engine for simplicity.

Currently I store a queue of samples as they are sent to the AVAudioPlayerNode for playback, and after the completion handler fires I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them.

Thanks for any feedback!
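For illustration, a minimal sketch of one way an output-only engine could react to the notification, assuming the app keeps its own queue of buffers it has handed to the player node. The names here are placeholders, not an authoritative pattern.

import AVFoundation

// Hedged sketch: restart the engine after a configuration change, then treat
// everything still in the app's pending queue as "not rendered" and reschedule
// it, rather than relying on the completion handlers alone.
func observeConfigurationChanges(engine: AVAudioEngine,
                                 playerNode: AVAudioPlayerNode,
                                 pendingBuffers: @escaping () -> [AVAudioPCMBuffer]) {
    NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine,
        queue: nil
    ) { _ in
        do {
            try engine.start()
            playerNode.play()
            for buffer in pendingBuffers() {
                playerNode.scheduleBuffer(buffer, completionHandler: nil)
            }
        } catch {
            print("Failed to restart engine after configuration change: \(error)")
        }
    }
}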
3
0
582
1w
Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown:

"[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868"

(related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022)

If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:

1. Does AVAudioEngine / AVAudioPlayerNode require AVAudioPCMBuffer to be in the Float32 format? Is there a way I can configure it to accept another format instead for my application?
2. If 1 is YES, is this documented anywhere?
3. If 1 is YES, is this required format subject to change at any point?

Thanks! I was looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014, but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
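For completeness, a hedged sketch of converting an Int16 buffer to Float32 with AVAudioConverter before scheduling it, assuming conversion really is required. This is standard converter usage, not a statement about what the engine itself mandates.

import AVFoundation

// Hedged sketch: produce a Float32 copy of an Int16 PCM buffer.
func makeFloat32Buffer(from int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let floatFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                          sampleRate: int16Buffer.format.sampleRate,
                                          channels: int16Buffer.format.channelCount,
                                          interleaved: false),
          let converter = AVAudioConverter(from: int16Buffer.format, to: floatFormat),
          let output = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                        frameCapacity: int16Buffer.frameLength) else {
        return nil
    }

    var consumed = false
    var error: NSError?
    let status = converter.convert(to: output, error: &error) { _, outStatus in
        // Feed the single input buffer once, then signal end of stream.
        if consumed {
            outStatus.pointee = .endOfStream
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return int16Buffer
    }
    return (status == .error || error != nil) ? nil : output
}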
1
0
949
1w
AVPlayerViewController Mac Catalyst 26 Full Screen Crash
I have a SwiftUI Mac Catalyst app that shows a video player using a UIViewControllerRepresentable AVPlayerViewController. When I tap the full screen button on the native playback controls, the app crashes. The app crashes only when built with Xcode 26; when I build with Xcode 16, it does not crash. Here is some of the crash log:

0 CoreFoundation 0x000000019a5cc770 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x000000019a0aa418 objc_exception_throw + 88
2 CoreFoundation 0x000000019a69b7fc -[NSException initWithCoder:] + 0
3 AppKit 0x000000019eeee1d0 -[NSBezierPath(NSBezierPathDevicePrimitives) _deviceMoveToPoint:] + 104
4 AppKit 0x000000019eeec930 -[NSBezierPath appendBezierPathWithRoundedRect:xRadius:yRadius:] + 200
5 AppKit 0x000000019eeea238 +[NSBezierPath bezierPathWithRoundedRect:xRadius:yRadius:] + 88
6 AVKitMacHelper 0x0000000247d73cc4 -[AVScrubberSliderCell drawBarInside:flipped:] + 1264
7 AppKit 0x000000019f35cf7c -[NSSliderCell drawInteriorWithFrame:inView:] + 680
8 AppKit 0x000000019f35ccbc -[NSSliderCell drawWithFrame:inView:] + 104

Any ideas!?
0
0
53
1w
AlarmKit plays system error tone instead of custom sound files (iOS 26.0)
AlarmKit custom sounds are universally broken in iOS 26.0 stable: instead of playing your custom sound, it plays a system error/timeout beep. I've spent days investigating why custom sounds result in what sounds like an error beep (like when you cancel an operation or hit a timeout) instead of the actual audio file. I can now prove this is an Apple bug, not an implementation error.

Evidence:

Test 1: My implementation
- Followed Apple's documentation exactly
- Tried both the bundle and Library/Sounds (as documented)
- Result: system error beep (not my audio)

Test 2: Professional apps
- Tested ADHDAlarms (a popular AlarmKit example by jacobsapps): https://github.com/jacobsapps/ADHDAlarms
- Their airhorn.mp3 custom sound: same error beep (not an airhorn)
- Their default sound: works perfectly

Test 3: Device testing
- Physical iPhone (iOS 26.0 - 23A341): broken
- iOS Simulator: broken
- Not device-specific

Files are found correctly, but the actual audio file is never played. Instead, you hear what sounds like a system error/cancellation tone.

What I've eliminated:
- Not a Library/Sounds vs. bundle issue (both broken)
- Not a file format issue (.mp3, .caf, .m4a all broken)
- Not an implementation issue (professional apps broken too)
- Not a device issue (simulator and device both broken)
- Not a file size issue (5 KB to 2 MB all broken)

The Documentation Lie: Apple's docs for AlertConfiguration.AlertSound.named(_:) state: "Choose a file that's in your app's main bundle or the Library/Sounds folder" (https://developer.apple.com/documentation/activitykit/alertconfiguration/alertsound/named(_:)). Both locations are broken.

Tested on: iOS 26.0 (23A341), Xcode 26.0.1, Swift 6.2

Impact: This affects any app trying to:
- Provide personalized wake-up sounds
- Use custom alarm tones
- Create meditation/sleep apps
- Differentiate from default iOS alarms

Current status:
- Multiple bug reports filed: FB19900024, FB18237648, FB19779004
- An Apple engineer claimed "fixed in latest beta" in August
- Still broken in iOS 26.0 stable (September)

Workaround: None that I know of. You must use the .default sound. For apps needing custom audio, play it with AVAudioPlayer after the alarm fires and the user opens the app.

Question: Has anyone gotten custom AlarmKit sounds working in iOS 26.0 stable? If so, please help; I'd be so grateful.
2
1
75
1w
-46250 error when calling `makeSecureTokenForExpirationDateOfPersistableContentKey`
Hi there, we're working on offline playback of DRM tracks. The persistent keys (also known as track licenses) for offline playback are stored locally on the device and are served from cache when a user initiates playback of a downloaded track. Our persistent keys have a limited validity time and need to be refreshed when they expire. To prevent a situation where a persistent key expires while the user is offline, we've decided to eagerly refresh these keys one week before their expiration date. To make that happen, we need to be able to obtain the expiration date of a given track license.

We've been attempting to use the makeSecureTokenForExpirationDateOfPersistableContentKey API to facilitate this process. The documentation states that this API returns a secret token representing the persistent key, which we can then exchange with our license server for the expiration date: https://developer.apple.com/documentation/avfoundation/avcontentkeysession/makesecuretokenforexpirationdate(ofpersistablecontentkey:completionhandler:)?language=objc

However, every time we call makeSecureTokenForExpirationDateOfPersistableContentKey, we receive an error with code -46250. We haven't been able to find any public references or documentation for this specific error code, which is preventing us from troubleshooting the issue. We are conducting our tests on a physical device, as the simulator does not support FairPlay playback. We don't use the dual-expiry approach.

Is our understanding of how to obtain the expiration timestamp correct? Are we using the makeSecureTokenForExpirationDateOfPersistableContentKey API as it was intended? What does the -46250 error code mean, and what steps should we take to fix our FairPlay implementation to make this work? Thanks in advance for your assistance.
1
0
217
1w
Capturing 48mp photos with .builtInTripleCamera
I am able to capture 48MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12MP. Is there a way to capture 48MP photos using .builtInTripleCamera? .builtInTripleCamera provides a smooth transition between cameras during zooming, and I'd like to keep this behavior. The new iPhone 17 Pro has all of its cameras at 48MP. Is there a chance that its .builtInTripleCamera is capable of capturing 48MP? Or is this an API limitation?
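Not an authoritative answer, but a quick diagnostic sketch: enumerating the virtual triple camera's formats and printing supportedMaxPhotoDimensions shows directly whether any format advertises a 48MP-class size on a given device, rather than relying on documentation.

import AVFoundation

// Hedged sketch: list the largest photo dimension each triple-camera format supports.
if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
    for (index, format) in device.formats.enumerated() {
        let dims = format.supportedMaxPhotoDimensions
        if let best = dims.max(by: { Int($0.width) * Int($0.height) < Int($1.width) * Int($1.height) }) {
            print("format \(index): max photo \(best.width)x\(best.height)")
        }
    }
}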
3
0
317
2w
ARKit with 422 pixel format and Apple Log colorspace
Hi, I'm trying to configure the camera feed in ARKit to use the Apple Log color space. I can change the capture device's format to one that has Apple Log, and I see one frame in the proper log-gray colors, but then all AR tracking stops and the tracking state hangs at "initializing". In other combinations I see the error "sensor failed to initialize" and the session restarts with the default format. I suspect this is because the normal AR capture formats are 420f, whereas the ones that have Apple Log are 422. Could someone confirm whether it's even possible to run an ARKit session with the camera feed in a different pixel format? I'm trying it on an iPhone 15 Pro.
0
0
165
2w
How to capture 48MP photos with the Ultra Wide lens on iPhone 16 Pro Max
I am working on capturing 48MP images using the iPhone 16 Pro Max with the Ultra Wide camera. I've updated the code to capture the maximum supported dimensions with the following snippet:

if #available(iOS 16.0, *) {
    photoOutput.maxPhotoDimensions = device.activeFormat.supportedMaxPhotoDimensions.last!
    photoSettings.maxPhotoDimensions = .init(width: 5712, height: 4284)
}

However, I'm still not getting the expected results. My goal is to capture 48MP images, and I want to confirm whether the Ultra Wide camera supports this resolution or whether I'm missing some other configuration. Any guidance would be appreciated!
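A hedged sketch, not a confirmed answer: rather than hard-coding 5712 x 4284, pick the format whose supportedMaxPhotoDimensions reports the most pixels, make it the active format, and reuse that exact value for both the output and the per-photo settings. If the Ultra Wide device never reports a 48MP-class dimension, the API simply doesn't offer it there. The function and parameter names below are made up for the example.

import AVFoundation

// Hedged sketch: configure the device, output, and settings from the largest
// photo dimension the device itself reports.
func configureForMaxResolution(device: AVCaptureDevice,
                               photoOutput: AVCapturePhotoOutput,
                               photoSettings: AVCapturePhotoSettings) throws {
    let best = device.formats
        .compactMap { format -> (AVCaptureDevice.Format, CMVideoDimensions)? in
            guard let dims = format.supportedMaxPhotoDimensions
                .max(by: { Int($0.width) * Int($0.height) < Int($1.width) * Int($1.height) })
            else { return nil }
            return (format, dims)
        }
        .max(by: { Int($0.1.width) * Int($0.1.height) < Int($1.1.width) * Int($1.1.height) })

    guard let (format, dims) = best else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()

    print("Using format with max photo \(dims.width)x\(dims.height)")
    photoOutput.maxPhotoDimensions = dims
    photoSettings.maxPhotoDimensions = dims
}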
2
2
748
2w
On iOS 26, in our video playback app (using AVPlayer), sound and video are out of sync when playing after seeking
Our app plays TS files on an iPhone. The app fragments the TS files, creates an M3U8 playlist, converts them to HLS (HTTP Live Streaming), and then uses AVPlayer to play the video content. On a device running iOS 26, after starting playback and seeking, restarting playback causes the video and audio to be out of sync (by about 2-3 seconds, depending on the situation). This also occurs on iPadOS/macOS 26. This issue was not observed prior to iOS 18. We are trying to fix this issue on the app side, but we have the following questions:
- The behavior of AVPlayer is different between iOS 26 and previous versions. Is there a change that could explain this, or is it a bug?
- We tried pausing before seeking, but it didn't seem to have any effect. Are there any APIs or workarounds that can improve this?
We would appreciate it if you could point us to any other helpful documents or URLs.
0
0
145
2w
How to use Front UW and TrueDepth in iPad
I want to use both the front Ultra Wide and TrueDepth cameras on an iPad that has a front Ultra Wide camera. First, I used only the front builtInDualCamera via AVFoundation and tried all the formats that can be used with builtInDualCamera, but there was no format that could capture Ultra Wide. Second, I tried to use both the front builtInDualCamera and builtInUltraWideCamera together, but there was no combination that allowed builtInUltraWideCamera and builtInDualCamera to be used at the same time. Is there any way to do this?
0
0
101
2w
Is Photo Library access mandatory for 24MP Deferred Photo Capture?
Hello everyone, I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in the user's main Photos app, so I thought I could store them in the app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy.

The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled":

/**
 @property maxPhotoDimensions
 @abstract Indicates the maximum resolution of the requested photo.
 @discussion Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning].
 Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
 */
@available(iOS 16.0, *)
open var maxPhotoDimensions: CMVideoDimensions

(By the way, this note is not present in the online docs: https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions)

Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system calls the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data. According to the WWDC23 session "Create a more responsive camera experience", this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library.

"To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way." https://developer.apple.com/videos/play/wwdc2023/10105/?time=799

This seems to create a hard dependency on the Photo Library for accessing 24MP images. My question is: Is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary? For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library?

Our goal is to follow Apple's privacy-first principles by avoiding a PHPhotoLibrary authorization request when our app's core function doesn't require access to the user's photo collection. Thank you for your time and any clarification you can provide.
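To make the dependency concrete, here is a minimal sketch of the documented flow described above (not a way around it). DeferredCaptureDelegate is a hypothetical helper name, and error handling is reduced to a print.

import AVFoundation
import Photos

// Hedged sketch of the documented deferred-delivery path: the proxy goes into
// the Photos library, which produces the final 24MP image in the background.
@available(iOS 17.0, *)
final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil,
              let proxyData = deferredPhotoProxy?.fileDataRepresentation() else { return }

        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: proxyData, options: nil)
        }, completionHandler: { success, error in
            print("Saved proxy: \(success), error: \(String(describing: error))")
        })
    }
}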
3
3
449
2w