Xcode: How can I record the audio output of the iPhone? (like the sounds of my app)

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must license it under the same terms and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/2165894/

Date: 2020-09-14 19:05:42 | Source: igfitidea

How can I record the audio output of the iPhone? (like sounds of my app)

iphone, objective-c, xcode, audio

Asked by Flocked

I want to record the sound of my iPhone app. So, for example, someone plays something on an iPhone instrument and afterwards you can hear it back.

Is it possible without the microphone?

Accepted answer by VoidPointer

Do you mean an app you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write to a file the same AudioBufferList that you would render to the RemoteIO audio unit when playing audio in your instrument app.)

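For illustration, here is a minimal sketch of that idea (not from the original answer): a render-notify callback installed on the RemoteIO unit with AudioUnitAddRenderNotify hands each freshly rendered AudioBufferList to Extended Audio File Services, which writes it to disk asynchronously. The RecorderState struct and the callback name are assumptions; the ExtAudioFileRef would be opened beforehand with ExtAudioFileCreateWithURL, with its client data format set to match the unit's stream format via kExtAudioFileProperty_ClientDataFormat.

```objc
#import <AudioToolbox/AudioToolbox.h>

typedef struct {
    ExtAudioFileRef fileRef;   // opened elsewhere with ExtAudioFileCreateWithURL
    Boolean         recording; // toggled from the UI
} RecorderState;

// Installed with AudioUnitAddRenderNotify(remoteIOUnit, CaptureRenderedAudio, &state).
static OSStatus CaptureRenderedAudio(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                      inBusNumber,
                                     UInt32                      inNumberFrames,
                                     AudioBufferList            *ioData)
{
    RecorderState *state = (RecorderState *)inRefCon;

    // Post-render, ioData holds the final mix that is about to reach the speaker.
    if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && state->recording) {
        // The async variant keeps blocking file I/O off the real-time render thread.
        // (Apple recommends priming it once with 0 frames / NULL from a normal thread.)
        ExtAudioFileWriteAsync(state->fileRef, inNumberFrames, ioData);
    }
    return noErr;
}
```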

[Edit: removed comments on recording third-party app audio output ...]

With AVFoundation, which you are currently using, you're always working at the level of sound files. Your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates when it is used. Also, AVAudioPlayer does not provide any means of getting at the final signal. If you're using multiple instances of AVAudioPlayer to play multiple sounds at the same time, you also wouldn't be able to get at the mixed signal.

Alas, you probably need to use Core Audio, which is a much lower-level interface.

I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions that lead to the audio being played, together with their timestamps? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)

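As a rough sketch of that approach (all class and method names here are made up for illustration, not part of the original answer): store a timestamped event for every user action that triggers a sound, and replay the 'performance' later by re-triggering the same sounds at the same offsets.

```objc
#import <Foundation/Foundation.h>

// One recorded user action: which note was triggered and when.
@interface NoteEvent : NSObject
@property (nonatomic) NSTimeInterval offset; // seconds since recording started
@property (nonatomic) NSInteger note;        // whatever identifies the sound to trigger
@end

@implementation NoteEvent
@end

@interface InstrumentRecorder : NSObject
@property (nonatomic, strong) NSMutableArray<NoteEvent *> *events;
@property (nonatomic) NSTimeInterval startTime;
- (void)startRecording;
- (void)recordNote:(NSInteger)note;
- (void)playBackWithBlock:(void (^)(NSInteger note))playNote;
@end

@implementation InstrumentRecorder

- (void)startRecording {
    self.events = [NSMutableArray array];
    self.startTime = [NSDate timeIntervalSinceReferenceDate];
}

// Call this from the same code path that triggers the sound.
- (void)recordNote:(NSInteger)note {
    NoteEvent *event = [NoteEvent new];
    event.offset = [NSDate timeIntervalSinceReferenceDate] - self.startTime;
    event.note = note;
    [self.events addObject:event];
}

// Replay by re-triggering each stored event at its original offset.
- (void)playBackWithBlock:(void (^)(NSInteger note))playNote {
    for (NoteEvent *event in self.events) {
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                                     (int64_t)(event.offset * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{ playNote(event.note); });
    }
}

@end
```

Persisting the events array (for example as a plist, or with NSKeyedArchiver once NoteEvent adopts NSCoding) gives you the file to write out and read back, and no microphone is involved at any point.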