
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must follow the same CC BY-SA terms and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/18026578/

Date: 2020-08-31 00:35:46

Play audio through upper (phone call) speaker

Tags: ios, objective-c, avfoundation, media-player, audiotoolbox

Asked by Nerrolken

I'm trying to get audio in my app to play through the upper speaker on the iPhone, the one you press to your ear during a phone call. I know it's possible, because I've played a game from the App Store ("The Heist" by "tap tap tap") that simulates phone calls and does exactly that.

I've done a lot of research online, but I'm having a surprisingly hard time finding ANYONE who has even discussed the possibility. The overwhelming majority of posts seem to be about the handsfree speaker vs. plugged-in earphones (like this, this and this), rather than the upper "phone call" speaker vs. the handsfree speaker. (Part of the problem might be not having a good name for it: "phone speaker" often means the handsfree speaker at the bottom of the device, etc., so it's hard to do a really well-targeted search.) I've looked into Apple's Audio Session Category Route Overrides, but those again seem to (correct me if I'm wrong) deal only with the handsfree speaker at the bottom, not the speaker at the top of the phone.

I have found ONE post that seems to be about this: link. It even provides a bunch of code, so I thought I was home free, but now I can't seem to get the code to work. For simplicity I just copied the DisableSpeakerPhone method (which, if I understand it correctly, should be the one to re-route audio to the upper speaker) into my viewDidLoad to see if it would work, but the first "assert" line fails, and the audio continues to play out of the bottom. (I also imported the AudioToolbox framework, as suggested in the comment, so that isn't the problem.)

Here is the main block of code I'm working with (this is what I copied into my viewDidLoad to test), although there are a few more methods in the article I linked to:

void DisableSpeakerPhone () {
    UInt32 dataSize = sizeof(CFStringRef);
    CFStringRef currentRoute = NULL;
    OSStatus result = noErr;

    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);

    // Set the category to use the speakers and microphone.
    UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty (
                                      kAudioSessionProperty_AudioCategory,
                                      sizeof (sessionCategory),
                                      &sessionCategory
                                      );
    assert(result == kAudioSessionNoError);

    Float64 sampleRate = 44100.0;
    dataSize = sizeof(sampleRate);
    result = AudioSessionSetProperty (
                                      kAudioSessionProperty_PreferredHardwareSampleRate,
                                      dataSize,
                                      &sampleRate
                                      );
    assert(result == kAudioSessionNoError);

    // Override the output audio route.
    // (kAudioSessionOverrideAudioRoute_None is the default route for
    // PlayAndRecord, i.e. the receiver; use
    // kAudioSessionOverrideAudioRoute_Speaker to force the loudspeaker.)
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    dataSize = sizeof(audioRouteOverride);
    result = AudioSessionSetProperty(
                            kAudioSessionProperty_OverrideAudioRoute,
                            dataSize,
                            &audioRouteOverride);

    assert(result == kAudioSessionNoError);

    AudioSessionSetActive(YES);
} 

So my question is this: can anyone either A) help me figure out why that code doesn't work, or B) offer a better suggestion for being able to press a button and route the audio up to the upper speaker?

PS I am getting more and more familiar with iOS programming, but this is my first foray into the world of AudioSessions and such, so details and code samples are much appreciated! Thank you for your help!

UPDATE:

At the suggestion of "He Was" (below), I've removed the code quoted above and replaced it with:

[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
[[AVAudioSession sharedInstance] setActive: YES error:nil];

at the beginning of viewDidLoad. It still isn't working, though (by which I mean the audio is still coming out of the speaker at the bottom of the phone instead of the receiver at the top). Apparently the default behavior should be for AVAudioSessionCategoryPlayAndRecord to send audio out of the receiver on its own, so something is still wrong.

More specifically what I'm doing with this code is playing audio through the iPod Music Player (initialized right after the AVAudioSession lines above in viewDidLoad, for what it's worth):

_musicPlayer = [MPMusicPlayerController iPodMusicPlayer];

and the media for that iPod Music Player is chosen through an MPMediaPickerController:

- (void) mediaPicker: (MPMediaPickerController *) mediaPicker didPickMediaItems: (MPMediaItemCollection *) mediaItemCollection {
    if (mediaItemCollection) {
        [_musicPlayer setQueueWithItemCollection: mediaItemCollection];
        [_musicPlayer play];
    }

    [self dismissViewControllerAnimated:YES completion:nil];
}

This all seems fairly straightforward to me: I've got no errors or warnings, and I know that the Media Picker and Music Player are working correctly because the correct songs start playing; they're just coming out of the wrong speaker. Could there be a "play media using this AudioSession" method or something? Or is there a way to check which audio session category is currently active, to confirm that nothing could have switched it back? Is there a way to emphatically tell the code to USE the receiver, rather than relying on the default to do so? I feel like I'm on the one-yard line; I just need to cross that final bit...
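
For what it's worth, AVAudioSession does expose both of these things: you can read back the active category and inspect the current output route. A small diagnostic sketch (assuming iOS 6+; the port-type string is "Receiver" for the earpiece and "Speaker" for the bottom loudspeaker):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Diagnostic sketch: confirm which category is active and where
// output is currently routed.
AVAudioSession *session = [AVAudioSession sharedInstance];
NSLog(@"Active category: %@", session.category);
for (AVAudioSessionPortDescription *port in session.currentRoute.outputs) {
    // portType is "Receiver" for the earpiece, "Speaker" for the loudspeaker
    NSLog(@"Output: %@ (%@)", port.portName, port.portType);
}
```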

EDIT: I just thought of a theory, namely that it's something about the iPod Music Player itself that doesn't want to play out of the receiver. My reasoning: it is possible to set a song playing through the official iPod app and then seamlessly control it (pause, skip, etc.) through the app I'm developing. The continuous playback from one app to the next made me think that maybe the iPod Music Player has its own audio route settings, or maybe it doesn't stop to check the settings in the new app? Does anyone who knows what they're talking about think it could be something like that?

Accepted answer by foundry

You have to initialise your audio session first.

Using the C API

  AudioSessionInitialize (NULL, NULL, NULL, NULL);

In iOS 6 you can use AVAudioSession methods instead (you will need to import the AVFoundation framework to use AVAudioSession):

Initialization using AVAudioSession

 self.audioSession = [AVAudioSession sharedInstance];

Setting the audioSession category using AVAudioSession

 [self.audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
                                       error:nil];

For further research, if you want better search terms, here are the full names of the constants for the speakers:

const CFStringRef kAudioSessionOutputRoute_BuiltInReceiver;
const CFStringRef kAudioSessionOutputRoute_BuiltInSpeaker;

See Apple's docs here.

But the real mystery is why you are having any trouble routing to the receiver. It's the default behaviour for the playAndRecord category. Apple's documentation of kAudioSessionOverrideAudioRoute_None:

"Specifies, for the kAudioSessionCategory_PlayAndRecord category, that output audio should go to the receiver. This is the default output audio route for this category."

update

In your updated question you reveal that you are using the MPMusicPlayerController class. This class invokes the global music player (the same player used in the Music app). This music player is separate from your app, and so doesn't share your app's audio session. Any properties you set on your app's audioSession will be ignored by the MPMusicPlayerController.

If you want control over your app's audio behaviour, you need to use an audio framework internal to your app. This would be AVAudioRecorder / AVAudioPlayer or Core Audio (Audio Queues, Audio Units or OpenAL). Whichever method you use, the audio session can be controlled either via AVAudioSession properties or via the Core Audio API. Core Audio gives you more fine-grained control, but with each new release of iOS more of it is ported over to AVFoundation, so start with that.
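
As a minimal sketch of that in-app approach (the file name is a placeholder; in a real app the player must be kept in a strong property, or it will be deallocated mid-playback):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Sketch: play a bundled file through the app's own audio session.
// With PlayAndRecord and no route override, output defaults to the receiver.
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];

NSURL *url = [[NSBundle mainBundle] URLForResource:@"ring" withExtension:@"caf"]; // hypothetical asset
self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
[self.player prepareToPlay];
[self.player play]; // should come out of the top (receiver) speaker
```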

Also remember that the audio session provides a way for you to describe the intended behaviour of your app's audio in relation to the total iOS environment, but it will not hand you total control. Apple takes care to ensure that the user's expectations of their device's audio behaviour remain consistent between apps, and when one app needs to interrupt another's audio stream.

update 2

In your edit you allude to the possibility of audio sessions checking other apps' audio session settings. That does not happen¹. The idea is that each app sets its preferences for its own audio behaviour using its self-contained audio session. The operating system arbitrates between conflicting audio requirements when more than one app competes for an unshareable resource, such as the internal microphone or one of the speakers, and will usually decide in favour of the behaviour most likely to meet the user's expectations of the device as a whole.

The MPMusicPlayerController class is slightly unusual in that it gives one app some degree of control over another. In this case, your app is not playing the audio; it is sending a request to the Music Player to play audio on your behalf. Your control is limited by the extent of the MPMusicPlayerController API. For more control, your app will have to provide its own implementation of audio playback.

In your comment you wonder:

Could there be a way to pull an MPMediaItem from the MPMusicPlayerController and then play them through the app-specific audio session, or anything like that?

That's a (big) subject for a new question. Here is a good starting read (from Chris Adamson's blog): From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary, the sequel to From iPhone media library to PCM samples in dozens of confounding and potentially lossy steps; together they should give you a sense of the complexity you will face. This may have got easier since iOS 6, but I wouldn't be so sure!
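
One shortcut worth knowing before diving into that (hedged: it only works for non-DRM items, for which the asset URL is non-nil) is to read MPMediaItemPropertyAssetURL from the picked item and hand it to an in-app player, so playback happens under your own audio session:

```objectivec
#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>

// Sketch: play a picked MPMediaItem inside the app rather than via the
// global iPod player. assetURL is nil for DRM-protected tracks.
MPMediaItem *item = mediaItemCollection.items.firstObject;
NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
if (assetURL) {
    self.avPlayer = [AVPlayer playerWithURL:assetURL]; // keep a strong reference
    [self.avPlayer play]; // routed according to the app's own AVAudioSession
}
```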



¹ There is an otherAudioPlaying read-only BOOL property in iOS 6, but that's about it.

Answer by DrJid

I was struggling with this for a while too. Maybe this will help someone later. You can also use the newer methods for overriding ports; many of the methods in your sample code are actually deprecated.

So, once you have your AVAudioSession shared instance:

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive: YES error:nil];

The session category has to be AVAudioSessionCategoryPlayAndRecord. You can get the current output by checking this value:

AVAudioSessionPortDescription *routePort = session.currentRoute.outputs.firstObject;
NSString *portType = routePort.portType;

Now, depending on the port you want to send the audio to, simply toggle the output using:

if ([portType isEqualToString:@"Receiver"]) {
       [session  overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
} else {
       [session  overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&error];
}

This should be a quick way to toggle the output between the speakerphone and the receiver.

Answer by krish

Swift 3.0 Code

func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    let routePort: AVAudioSessionPortDescription? = audioSession.currentRoute.outputs.first
    let portType: String? = routePort?.portType
    if portType == "Receiver" {
        try? audioSession.overrideOutputAudioPort(.speaker)
    } else {
        try? audioSession.overrideOutputAudioPort(.none)
    }
}

Answer by Dmih

Swift 5.0

func activateProximitySensor(isOn: Bool) {
    let device = UIDevice.current
    device.isProximityMonitoringEnabled = isOn
    if isOn {
        NotificationCenter.default.addObserver(self, selector: #selector(proximityStateDidChange), name: UIDevice.proximityStateDidChangeNotification, object: device)
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    } else {
        NotificationCenter.default.removeObserver(self, name: UIDevice.proximityStateDidChangeNotification, object: device)
    }
}

@objc func proximityStateDidChange(notification: NSNotification) {
    if let device = notification.object as? UIDevice {
        print(device)
        let session = AVAudioSession.sharedInstance()
        do {
            let routePort: AVAudioSessionPortDescription? = session.currentRoute.outputs.first
            let portType = routePort?.portType
            if let type = portType, type.rawValue == "Receiver" {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
            } else {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            }
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    }
}