I'm writing a program to control a PTZ camera connected via USB.
I can get the target camera's uniqueID, along with the other info AVFoundation provides, but I don't know how to locate the corresponding USB device so that I can send it a UVC control request.
There may be many cameras with the same VendorID and ProductID connected at a time, so I need a more precise way to find out which device is my target.
It looks like the uniqueID provided is (locationID << 32 | VendorID << 16 | ProductID) rendered as a hex string, but I'm not sure I can assume this behavior will never change.
Is there a document that declares how AVFoundation generates the uniqueID for a USB camera, so I can rely on this conversion always working? Or is there a way to send a PTZ control request directly to an AVCaptureDevice?
https://stackoverflow.com/questions/40006908/usb-interface-of-an-avcapturedevice
I have seen this similar question, but extracting LocationID+VendorID+ProductID from uniqueID feels like programming to the implementation instead of the interface. So, is there any better way to control my camera?
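In the meantime, this is the conversion I am experimenting with, a sketch that assumes the undocumented (locationID << 32 | VendorID << 16 | ProductID) layout keeps holding; USBCameraLocation and parseUniqueID are names I made up:
// Sketch only: relies on the undocumented layout above and breaks if Apple
// ever changes how AVFoundation builds uniqueID for USB cameras.
struct USBCameraLocation {
    let locationID: UInt32
    let vendorID: UInt16
    let productID: UInt16
}

func parseUniqueID(_ uniqueID: String) -> USBCameraLocation? {
    let hex = uniqueID.lowercased().hasPrefix("0x")
        ? String(uniqueID.dropFirst(2)) : uniqueID
    guard let value = UInt64(hex, radix: 16) else { return nil }
    return USBCameraLocation(
        locationID: UInt32(truncatingIfNeeded: value >> 32),
        vendorID: UInt16(truncatingIfNeeded: value >> 16),
        productID: UInt16(truncatingIfNeeded: value))
}
The locationID recovered this way is what I would then match against the IORegistry loop in the second program below.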
Here's my example code for getting the uniqueID:
//
// camera_unique_id_test.mm
//
// Test code: query the AVCaptureDevice unique_id of every camera on the
// current system from C++.
//
// Build with:
// clang++ -framework AVFoundation -framework CoreMedia -framework Foundation
//     camera_unique_id_test.mm -o camera_unique_id_test
//
#include <iostream>
#include <string>
#include <vector>

#import <AVFoundation/AVFoundation.h>
#import <Foundation/Foundation.h>

struct CameraInfo {
  std::string uniqueId;
};

std::vector<CameraInfo> getAllCameraDevices() {
  std::vector<CameraInfo> cameras;
  @autoreleasepool {
    NSArray<AVCaptureDevice*>* devices =
        [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    // Walk every video device and record its uniqueID.
    for (AVCaptureDevice* device in devices) {
      CameraInfo info;
      info.uniqueId = std::string([device.uniqueID UTF8String]);
      cameras.push_back(info);
    }
  }
  return cameras;
}

int main(int argc, char* argv[]) {
  std::vector<CameraInfo> cameras = getAllCameraDevices();
  for (size_t i = 0; i < cameras.size(); i++) {
    const CameraInfo& camera = cameras[i];
    std::cout << "Device " << (i + 1) << ":" << std::endl;
    std::cout << "  unique_id: " << camera.uniqueId << std::endl;
  }
  return 0;
}
And here's my code for UVC control:
// clang++ -framework Foundation -framework IOKit uvc_test.cpp -o uvc_test
#include <iostream>

#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/IOMessage.h>
#include <IOKit/usb/IOUSBLib.h>
#include <IOKit/usb/USB.h>

CFStringRef CreateCFStringFromIORegistryKey(io_service_t ioService,
                                            const char* key) {
  CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key,
                                                    kCFStringEncodingUTF8);
  if (!keyString)
    return nullptr;
  CFStringRef result = static_cast<CFStringRef>(
      IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
                                      kIORegistryIterateRecursively));
  CFRelease(keyString);
  return result;
}

std::string GetStringFromIORegistry(io_service_t ioService, const char* key) {
  CFStringRef cfString = CreateCFStringFromIORegistryKey(ioService, key);
  if (!cfString)
    return "";
  char buffer[256];
  Boolean success = CFStringGetCString(cfString, buffer, sizeof(buffer),
                                       kCFStringEncodingUTF8);
  CFRelease(cfString);
  return success ? std::string(buffer) : std::string("");
}

uint32_t GetUInt32FromIORegistry(io_service_t ioService, const char* key) {
  CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key,
                                                    kCFStringEncodingUTF8);
  if (!keyString)
    return 0;
  CFNumberRef number = static_cast<CFNumberRef>(
      IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault,
                                      kIORegistryIterateRecursively));
  CFRelease(keyString);
  if (!number)
    return 0;
  uint32_t value = 0;
  CFNumberGetValue(number, kCFNumberSInt32Type, &value);
  CFRelease(number);
  return value;
}
int main() {
  // Get matching dictionary for USB devices
  CFMutableDictionaryRef matchingDict =
      IOServiceMatching(kIOUSBDeviceClassName);

  // Get iterator for matching services
  io_iterator_t serviceIterator;
  IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict,
                               &serviceIterator);

  // Iterate through matching devices
  io_service_t usbService;
  while ((usbService = IOIteratorNext(serviceIterator))) {
    uint32_t locationId = GetUInt32FromIORegistry(usbService, "locationID");
    uint32_t vendorId = GetUInt32FromIORegistry(usbService, "idVendor");
    uint32_t productId = GetUInt32FromIORegistry(usbService, "idProduct");

    IOCFPlugInInterface** plugInInterface = nullptr;
    IOUSBDeviceInterface** deviceInterface = nullptr;
    SInt32 score;

    // Get device plugin interface (this can fail for devices we may not
    // open, so check before dereferencing)
    kern_return_t kr = IOCreatePlugInInterfaceForService(
        usbService, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID,
        &plugInInterface, &score);
    if (kr != KERN_SUCCESS || !plugInInterface) {
      IOObjectRelease(usbService);
      continue;
    }

    // Get device interface
    (*plugInInterface)
        ->QueryInterface(plugInInterface,
                         CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                         (LPVOID*)&deviceInterface);
    (*plugInInterface)->Release(plugInInterface);
    if (!deviceInterface) {
      IOObjectRelease(usbService);
      continue;
    }

    // Try to find the UVC control interface using CreateInterfaceIterator
    io_iterator_t interfaceIterator;
    IOUSBFindInterfaceRequest interfaceRequest;
    interfaceRequest.bInterfaceClass = kUSBVideoInterfaceClass;      // 14
    interfaceRequest.bInterfaceSubClass = kUSBVideoControlSubClass;  // 1
    interfaceRequest.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
    interfaceRequest.bAlternateSetting = kIOUSBFindInterfaceDontCare;
    (*deviceInterface)
        ->CreateInterfaceIterator(deviceInterface, &interfaceRequest,
                                  &interfaceIterator);
    (*deviceInterface)->Release(deviceInterface);

    io_service_t usbInterface = IOIteratorNext(interfaceIterator);
    IOObjectRelease(interfaceIterator);
    if (usbInterface) {
      std::cout << "Got UVC device with:" << std::endl;
      std::cout << "locationId: " << std::hex << locationId << std::endl;
      std::cout << "vendorId: " << std::hex << vendorId << std::endl;
      std::cout << "productId: " << std::hex << productId << std::endl
                << std::endl;
      IOObjectRelease(usbInterface);
    }
    IOObjectRelease(usbService);
  }
  IOObjectRelease(serviceIterator);
}
I have both Apple devices; my AirPods Pro 3 are up to date and my Ultra 3 is on watchOS 26.1, the latest public beta.
Each morning when I go into my mindfulness app and start a meditation, or listen to Apple Music on my watch with the AirPods Pro 3, it plays for a few seconds and then disconnects. The Bluetooth settings on my watch say my AirPods are connected to the watch. I have also removed the tick for connecting automatically to the iPhone in the AirPods settings on my iPhone.
To fix this I invariably turn my Apple Watch Ultra 3 off and on again; then the connection becomes stable. I am not sure why I have to do this each morning, or why the fix does not last. It is frustrating. Is there something wrong with my AirPods?
Has anyone encountered this before?
Hi everyone,
I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing.
I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement.
My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated!
Thanks in advance.
My implementation of LockedCameraCapture does not launch my app when tapped from the Lock Screen. But when the same widget is in Control Center, it launches the app successfully.
Standard Xcode target template:
Lock_Screen_Capture.swift
@main
struct Lock_Screen_Capture: LockedCameraCaptureExtension {
    var body: some LockedCameraCaptureExtensionScene {
        LockedCameraCaptureUIScene { session in
            Lock_Screen_CaptureViewFinder(session: session)
        }
    }
}
Lock_Screen_CaptureViewFinder.swift:
import SwiftUI
import UIKit
import UniformTypeIdentifiers
import LockedCameraCapture

struct Lock_Screen_CaptureViewFinder: UIViewControllerRepresentable {
    let session: LockedCameraCaptureSession
    var sourceType: UIImagePickerController.SourceType = .camera

    init(session: LockedCameraCaptureSession) {
        self.session = session
    }

    func makeUIViewController(context: Self.Context) -> UIImagePickerController {
        let imagePicker = UIImagePickerController()
        imagePicker.sourceType = sourceType
        imagePicker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]
        imagePicker.cameraDevice = .rear
        return imagePicker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Self.Context) {
    }
}
Then I have my widget:
struct CameraWidgetControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(
            kind: "com.myCompany.myAppName.lock-screen") {
            ControlWidgetButton(action: MyAppCaptureIntent()) {
                Label("Capture", systemImage: "camera.shutter.button.fill")
            }
        }
    }
}
My AppIntent:
struct MyAppContext: Codable {}

struct MyAppCaptureIntent: CameraCaptureIntent {
    typealias AppContext = MyAppContext

    static let title: LocalizedStringResource = "MyAppCaptureIntent"
    static let description = IntentDescription("Capture photos and videos with MyApp.")

    @MainActor
    func perform() async throws -> some IntentResult {
        .result()
    }
}
The Issue
The LockedCameraCapture widget does not launch my app when tapped from the Lock Screen: you get the Face ID prompt and are then just taken to the Home Screen. But when the same widget is in Control Center, it launches the app successfully.
Error Message
When tapped on Lock Screen, I get the following error code:
LaunchServices: store <private> or url <private> was nil: Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database"
UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Attempt to map database failed: permission was denied. This attempt will not be retried.
Failed to initialize client context with error Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database"
UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Things I tried
Widget image displays correctly
The App ID and the provisioning profile seem to be fine, since the same code works when injected into the AVCam sample app with the same App ID.
The AppIntent file has target membership in both the Lock Screen capture extension and the widget.
The app compiles without errors or warnings.
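One thing I still plan to rule out, an assumption on my part rather than a confirmed requirement: that the containing app must handle the locked-camera launch user activity, NSUserActivityTypeLockedCameraCapture, when launched from the Lock Screen. A sketch of what I mean (MyApp and ContentView are placeholder names):
import SwiftUI
import LockedCameraCapture

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                // Sketch: Lock Screen launches reportedly arrive as this
                // user activity rather than a plain app open.
                .onContinueUserActivity(NSUserActivityTypeLockedCameraCapture) { _ in
                    // Route straight to the camera UI here.
                }
        }
    }
}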
I'm developing an iOS app using AVFoundation for real-time video capture and object detection.
While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.
Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.
Code snippet
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }

    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"

    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }

    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()

        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }

        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }

        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()

        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }

        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}
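One workaround I am testing, a sketch based on my assumption that torch state does not carry over between devices and has to be re-applied after commitConfiguration(), and only when the new device actually reports torch support:
import AVFoundation

// Sketch: re-apply the torch to the device selected above, after the session
// configuration has been committed. Skips devices without a torch (on some
// models the ultra-wide AVCaptureDevice reports hasTorch == false).
func applyTorch(_ on: Bool, to device: AVCaptureDevice) {
    guard device.hasTorch, device.isTorchAvailable else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}
It may also be that the built-in Camera app sidesteps this entirely by using a virtual device such as .builtInDualWideCamera and ramping videoZoomFactor instead of swapping inputs, but that is speculation on my part.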
Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.
Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?
Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
{
    "aps": { "content-available": 1 },
    "audio_file_name": "ding.caf",
    "audio_url": "https://example.com/audio.mp3"
}
When the app is in the background or has been killed, it receives a remote APNs push whose payload is roughly as shown above. How can I play the MP3 file at the given "audio_url"? The user does not need to interact with the device when the push arrives, so how can I play the audio file immediately after receiving it?
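Here is the direction I have been trying, as a sketch only: it assumes the app has the audio background mode enabled and that iOS actually wakes the app (content-available pushes are throttled, and background execution is never guaranteed once the app has been killed). player is a strong property I keep on the app delegate:
import UIKit
import AVFoundation

// In the UIApplicationDelegate. Sketch only: this handler may simply never
// run when the app has been killed, since background delivery of
// content-available pushes is not guaranteed.
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    guard let urlString = userInfo["audio_url"] as? String,
          let url = URL(string: urlString) else {
        completionHandler(.noData)
        return
    }
    try? AVAudioSession.sharedInstance().setCategory(.playback)
    try? AVAudioSession.sharedInstance().setActive(true)
    player = AVPlayer(url: url)  // AVPlayer can stream the remote MP3 directly
    player?.play()
    completionHandler(.newData)
}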
Hey there,
We've been seeing a high rate of 403 Invalid Authentication errors on the v1/me/library/artists endpoint for a few days now.
Is anyone else having this issue?
I have some tried-and-tested code that records and plays back audio via AUHAL which breaks on Tahoe on Intel. The same code works fine on Sequoia, and also works on Tahoe on Apple Silicon.
To start with something simple, the following code to request access to the Microphone doesn't work as it should:
bool RequestMicrophoneAccess ()
{
    __block AVAuthorizationStatus status =
        [AVCaptureDevice authorizationStatusForMediaType: AVMediaTypeAudio];

    if (status == AVAuthorizationStatusAuthorized)
        return true;

    __block bool done = false;
    [AVCaptureDevice requestAccessForMediaType: AVMediaTypeAudio completionHandler: ^ (BOOL granted)
    {
        status = (granted) ? AVAuthorizationStatusAuthorized : AVAuthorizationStatusDenied;
        done = true;
    }];

    while (!done)
        CFRunLoopRunInMode (kCFRunLoopDefaultMode, 2.0, true);

    return status == AVAuthorizationStatusAuthorized;
}
On Tahoe on Intel, the code runs to completion, but granted is always returned as NO. Tellingly, the popup asking the user to grant microphone access is never displayed, and the app never appears in the Privacy pane. On Apple Silicon, everything works fine.
There are some other problems, but I'm hoping they have a common underlying cause and that the Apple guys can figure out what's wrong from the information in this post. I'd be happy to test any potential fix. Thanks.
I've been wondering if there is a way to modify or even disable the tones that indicate channel states. The behaviour regarding tones seems like a black box with little documentation.
During migration to Apple's PT Framework we've noticed that there are a few scenarios where a tone is played that doesn't match certain certifications. For example, moving from one channel to another produces a tone that would fail a test case. I understand the reasoning fully, as it marks that the channel is ready to transmit or receive, but this doesn't mirror the behaviour of TETRA, which is what is wanted in this case.
I'm also wondering if there is any way to communicate feedback directly regarding the PT Framework?
Hi,
I'm trying to record audio on the iPhone with AVAudioRecorder and Xcode 26.0.1.
Maybe the problem is that I can't record audio in the Simulator, but there is an audio input menu there.
In the plist I added 'Privacy - Microphone Usage Description', and I ask for permission before recording.
if await AVAudioApplication.requestRecordPermission() {
    print("permission granted")
    recordPermission = true
} else {
    print("permission denied")
}
Permission is granted.
let settings: [String : Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]

recorder = try AVAudioRecorder(url: filename, settings: settings)
let prepared = recorder.prepareToRecord()
print("prepared started: \(prepared)")
let started = recorder.record()
print("recording started: \(started)")
started is always false, no matter which settings I try.
Error messages
AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46
AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50
AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50
from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
prepared started: true
AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5
recording started: false
All the examples I can find look the same as mine, so apparently something else must be different.
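One direction I still want to try, based on my reading of the -50 (paramErr) coming back from AudioConverterNew: explicitly activate a record session and use a hardware-standard sample rate instead of 12000 Hz. A sketch (makeRecorder is just my name for it; this is an assumption, not a confirmed fix):
import AVFoundation

// Sketch: same flow, but with an activated record session and 44100 Hz.
func makeRecorder(url filename: URL) throws -> AVAudioRecorder {
    // Assumption: this path needs an active record session first.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record)
    try session.setActive(true)

    // Hardware-standard sample rate, in case the converter rejects 12000 Hz.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    return try AVAudioRecorder(url: filename, settings: settings)
}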
After updating, WeChat voice chat has no sound. Please help.
I'm getting an error writing a CIImage as a HEIF image:
// Create a CIImage directly from the pixel buffer
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: [CIImageOption.properties: combinedMetadata])

// Write the HEIC synchronously
do {
    try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
} catch {
    print("Failed to write HEIC: \(error)")
}
The error I'm getting is:
Error Domain=CINonLocalizedDescriptionKey Code=3 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to write HEIC data to file., NSUnderlyingError=0x11b1a1ec0 {Error Domain=CINonLocalizedDescriptionKey Code=10 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to add image to the PhotoCompressionSession.}}}
Both
try ciContext.writeJPEGRepresentation(of: copiedCIImage, to: url, colorSpace: colorSpace, options: options)
and
try ciContext.writePNGRepresentation(of: copiedCIImage, to: url, format: .RGBA8, colorSpace: colorSpace)
work. I also verified that the code works with iOS 18.
Is there something new we need to do for HEIF images?
Thanks in advance
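In the meantime I am experimenting with a fallback that goes through ImageIO instead of CIContext; a sketch (writeHEIC is my own helper name, and it assumes rendering to a CGImage first is acceptable):
import CoreImage
import ImageIO
import UniformTypeIdentifiers

// Sketch: render the CIImage to a CGImage, then write HEIC via ImageIO.
func writeHEIC(_ image: CIImage, to url: URL, context: CIContext) throws {
    guard let cgImage = context.createCGImage(image, from: image.extent),
          let destination = CGImageDestinationCreateWithURL(
              url as CFURL, UTType.heic.identifier as CFString, 1, nil) else {
        throw CocoaError(.fileWriteUnknown)
    }
    CGImageDestinationAddImage(destination, cgImage, nil)
    guard CGImageDestinationFinalize(destination) else {
        throw CocoaError(.fileWriteUnknown)
    }
}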
I'm at my wit's end with a problem I'm facing while developing an app. The app is designed to send video captured on an iPhone to a browser application for real-time display.
While it works on many older iPhone models, whenever I test it on an iPhone 17 Pro or 17 Pro Max, the video displayed in the browser becomes a solid green screen, or a colorful, garbled image that's mostly green.
I've been digging into this, and my main suspicion is an encoding failure. It seems the resolution of the ultra-wide and telephoto cameras was significantly increased on the 17 Pro and Pro Max (from 12MP to 48MP), and I think this might be overwhelming the encoder.
I'm really hoping someone here has encountered a similar issue or has any suggestions. I'm open to any information or ideas you might have. Please help!
Environment Information:
WebRTC Library: GoogleWebRTC Version 1.1 (via CocoaPods)
Signaling Server: AWS Kinesis Video Streams
Problem Occurs on:
Model: iPhone18,1, OS: 26.0
Model: iPhone18,1, OS: 26.1
Works Fine on:
Many models before iPhone17,5
Model: iPhone18,1, OS: 26.0
Model: iPhone18,3, OS: 26.0
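The mitigation I am experimenting with, based on my unconfirmed assumption that the 48MP default format is what breaks the encoder, is to clamp the capture format before starting WebRTC capture:
import AVFoundation
import CoreMedia
import WebRTC

// Sketch: choose a capture format no larger than 1080p for the WebRTC
// capturer (assumption: larger formats are listed later in the array).
func selectFormat(for device: AVCaptureDevice) -> AVCaptureDevice.Format? {
    RTCCameraVideoCapturer.supportedFormats(for: device)
        .filter {
            let dims = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
            return dims.width <= 1920 && dims.height <= 1080
        }
        .last
}
The chosen format would then go to capturer.startCapture(with:format:fps:). If the garbling really is resolution-related, this should at least isolate it.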
We have been using passthrough in screen capture with a broadcast upload extension, which was working in visionOS 2.2, but with visionOS 26 it no longer updates: the broadcast fails with "Invalid Broadcast session started" a few seconds after the broadcast session starts.
Is there a bug filed for this, or is it a known issue?
Hi,
It seems like it's pretty easy to consume HTTP Live Streaming content in an iOS app. Unfortunately, I need to consume media from an RTSP server. This seems like a very similar problem, and all of the underpinnings for doing it ought to be present in iOS, but I'm having a devil of a time figuring out how to make it work without doing a lot of programming.
For starters, I know that there are web-based services that can consume an RTSP stream and rebroadcast it as an HTTP Live Stream that can be easily consumed by the media players in iOS. This won't work for me because my application needs to function in an environment where there is no internet access (it's on a private Wi-Fi network where the only other thing on the network is the device that is serving the RTSP stream).
Having read everything I can get my hands on and explored third-party and open-source solutions, I've compiled the following list of ideas:
1. Using an iOS build of the open-source ffmpeg library, which supports RTSP, I've come up with a test app that can receive the RTSP packets, decode them, create UIImages out of the frames, and display those frames on-screen. This provides a crude player, but performance is poor, most likely because ffmpeg can't take advantage of any hardware acceleration. It also doesn't provide me with any way to integrate the video stream into AVFoundation, so I'm on my own as far as saving the stream to a file, transcoding it, etc.
2. I know that the AVURLAsset class doesn't directly support the RTSP scheme. Since I have access to the undecoded RTSP packets via ffmpeg, I've thought it should be possible to implement RTSP support myself via a custom NSURLProtocol, essentially fooling AVFoundation into reading those packets as if they originated in a file. I'm not sure if this would work, since the raw packets coming from the RTSP server might lack the headers that would otherwise be present in data being read from a file. I'm not even sure if AVFoundation would recognize my custom protocol.
3. If a protocol doesn't work, I've considered that I might be able to implement my own local HTTP Live Streaming server that converts the RTSP packets into an HTTP stream that the media players can read. This sounds like a terribly convoluted solution to the problem, at best, and very difficult at worst.
4. Going back to solution (1), if I could speed up the decoding by using some iOS CoreVideo function instead of ffmpeg, this solution might be okay. However, I can't find any documentation for CoreVideo on iOS (Apple only documents it for OS X).
5. I'm certainly willing to license a third-party solution if it works well and provides good performance. Unfortunately, everything I've found so far is pretty crummy and mostly just leverages ffmpeg and/or VLC.
What is most disappointing to me is that nobody seems to be able or willing to provide a solution that neatly integrates with AVFoundation. I really want to make my RTSP stream available as an AVAsset so I can use it with AVFoundation players and other classes; I don't want to build an app that relies on custom third-party code for everything.
Any ideas, tips, or advice would be greatly appreciated.
Thanks,
Frank
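On idea (4), one avenue I have started sketching: the hardware decoder is exposed through VideoToolbox rather than CoreVideo. Assuming I can parse the H.264 SPS/PPS out of the RTSP stream into a CMVideoFormatDescription myself, creating a decode session would look roughly like this (the callback body is a placeholder):
import VideoToolbox

// Sketch: create a hardware H.264 decode session. `formatDesc` is assumed to
// have been built from the SPS/PPS NAL units parsed out of the RTSP stream.
func makeDecompressionSession(formatDesc: CMVideoFormatDescription) -> VTDecompressionSession? {
    var session: VTDecompressionSession?
    var callback = VTDecompressionOutputCallbackRecord(
        decompressionOutputCallback: { _, _, _, _, imageBuffer, _, _ in
            // imageBuffer is a hardware-decoded CVPixelBuffer; display it or
            // hand it to the rest of the pipeline.
            _ = imageBuffer
        },
        decompressionOutputRefCon: nil)
    let status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: formatDesc,
        decoderSpecification: nil,
        imageBufferAttributes: nil,
        outputCallback: &callback,
        decompressionSessionOut: &session)
    return status == noErr ? session : nil
}
Each NAL unit would then be wrapped in a CMSampleBuffer and fed through VTDecompressionSessionDecodeFrame, which should address the performance half of idea (1), though it still doesn't get me an AVAsset.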
Hi everyone,
I am currently on macOS Tahoe (26.1), and for some reason my Mac is not connecting via HDMI. To be accurate: it is connecting, and the LG TV shows up in Displays settings, but no image appears on it, and I have no idea why. This used to work, as I've tried this cable before with the exact same TV. The cable is a basic Amazon Basics HDMI one.
Allow me to head off one kind of answer in advance: terminal commands and other more advanced recommendations are welcome, whereas basic questions like "have you connected it right" are just a waste of time.
I'm wondering if anyone has run into issues with currentPlaybackRate in the released version of iOS 26.0?
No such issue occurs on iOS 18.5.
When I change currentPlaybackRate on iOS 26.0, it seems to be forcibly reset to 0 or 1.0, for DRM and non-DRM content alike. Playback also becomes abnormal and unstable, with noise, whenever currentPlaybackRate is not 1.0, and it switches between the stopped and playing states frequently.