Xcode Mac OS X Simple Voice Recorder
Disclaimer: this page is a Chinese-English parallel translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must follow the same CC BY-SA license, cite the original URL and author information, and attribute it to the original author (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/8101667/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me): StackOverflow
Mac OS X Simple Voice Recorder
Asked by Alan Harmon
Does anyone have some sample code for a SIMPLE voice recorder for Mac OS X? I would just like to record my voice coming from the internal microphone on my MacBook Pro and save it to a file. That is all.
I have been searching for hours and yes, there are some examples that will record voice and save it to a file such as http://developer.apple.com/library/mac/#samplecode/MYRecorder/Introduction/Intro.html. The sample code for Mac OS X seems to be about 10 times more complicated than similar sample code for the iPhone.
For iOS the commands are as simple as:
soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"mysound.cap"]];
soundSetting = [NSDictionary dictionaryWithObjectsAndKeys: /* dictionary settings left out here */ nil];
soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile settings:soundSetting error:nil];
[soundRecorder record];
[soundRecorder stop];
I think there is code to do this for Mac OS X that would be as simple as the iPhone version. Thank you for your help.
Here is the code (currently the player will not work)
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

@interface MyAVFoundationClass : NSObject <AVAudioPlayerDelegate>
{
    AVAudioRecorder *soundRecorder;
}

@property (retain) AVAudioRecorder *soundRecorder;

-(IBAction)stopAudio:(id)sender;
-(IBAction)recordAudio:(id)sender;
-(IBAction)playAudio:(id)sender;

@end


#import "MyAVFoundationClass.h"

@implementation MyAVFoundationClass

@synthesize soundRecorder;

-(void)awakeFromNib
{
    NSLog(@"awakeFromNib visited");
    NSString *tempDir;
    NSURL *soundFile;
    NSDictionary *soundSetting;
    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"test1.caf"]];
    NSLog(@"soundFile: %@", soundFile);
    soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                    [NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
                    [NSNumber numberWithInt: 2], AVNumberOfChannelsKey,
                    [NSNumber numberWithInt: AVAudioQualityHigh], AVEncoderAudioQualityKey, nil];
    soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile settings:soundSetting error:nil];
}

-(IBAction)stopAudio:(id)sender
{
    NSLog(@"stopAudioVisited");
    [soundRecorder stop];
}

-(IBAction)recordAudio:(id)sender
{
    NSLog(@"recordAudio Visited");
    [soundRecorder record];
}

-(IBAction)playAudio:(id)sender
{
    NSLog(@"playAudio Visited");
    NSURL *soundFile;
    NSString *tempDir;
    AVAudioPlayer *audioPlayer;
    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"test1.caf"]];
    NSLog(@"soundFile: %@", soundFile);
    audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFile error:nil];
    [audioPlayer setDelegate:self];
    [audioPlayer play];
}

@end
Answered by Rob Keniger
Answered by Stanislav Pankevich
Here is the code that is working for me on macOS 10.14 with Xcode 10.2.1, Swift 5.0.1.
First of all you have to set up NSMicrophoneUsageDescription (aka Privacy - Microphone Usage Description) in your Info.plist file, as described in the Apple docs: Requesting Authorization for Media Capture on macOS.
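For reference, a minimal Info.plist entry for this key looks like the following (the usage string is only an example; use wording appropriate for your app):

<key>NSMicrophoneUsageDescription</key>
<string>This app records audio from the built-in microphone.</string>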
Then you have to request permission from the user to use the microphone:
switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized:
    // The user has previously granted access to the microphone.
    // proceed with recording
    break
case .notDetermined:
    // The user has not yet been asked for microphone access.
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        if granted {
            // proceed with recording
        }
    }
case .denied:
    // The user has previously denied access.
    ()
case .restricted:
    // The user can't grant access due to restrictions.
    ()
@unknown default:
    fatalError()
}
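If you prefer to keep that check in one place, here is a small sketch of a helper (the name withMicrophoneAccess is mine, not part of the answer) that runs the supplied closure once access is confirmed:

import AVFoundation
import Foundation

func withMicrophoneAccess(_ proceed: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        // Access was granted earlier; proceed immediately.
        proceed()
    case .notDetermined:
        // Ask the user; the completion handler may run on a background queue.
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            if granted {
                DispatchQueue.main.async(execute: proceed)
            }
        }
    default:
        // .denied, .restricted, or a future case: do not record.
        break
    }
}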
Then you can use the following methods to start and stop audio recording:
import AVFoundation
open class SpeechRecorder: NSObject {

    private var destinationUrl: URL!

    var recorder: AVAudioRecorder?
    let player = AVQueuePlayer()

    open func start() {
        destinationUrl = createUniqueOutputURL()
        do {
            let format = AVAudioFormat(settings: [
                AVFormatIDKey: kAudioFormatMPEG4AAC,
                AVEncoderAudioQualityKey: AVAudioQuality.high,
                AVSampleRateKey: 44100.0,
                AVNumberOfChannelsKey: 1,
                AVLinearPCMBitDepthKey: 16,
            ])!
            let recorder = try AVAudioRecorder(url: destinationUrl, format: format)
            // workaround against Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL issue
            // https://stackoverflow.com/a/57670740/598057
            let firstSuccess = recorder.record()
            if firstSuccess == false || recorder.isRecording == false {
                recorder.record()
            }
            assert(recorder.isRecording)
            self.recorder = recorder
        } catch let error {
            let code = (error as NSError).code
            NSLog("SpeechRecorder: \(error)")
            NSLog("SpeechRecorder: \(code)")
            let osCode = OSStatus(code)
            NSLog("SpeechRecorder: \(String(describing: osCode.detailedErrorMessage()))")
        }
    }

    open func stop() {
        NSLog("SpeechRecorder: stop()")
        if let recorder = recorder {
            recorder.stop()
            NSLog("SpeechRecorder: final file \(destinationUrl.absoluteString)")
            // Play back the file that was just recorded.
            player.removeAllItems()
            player.insert(AVPlayerItem(url: destinationUrl), after: nil)
            player.play()
        }
    }

    func createUniqueOutputURL() -> URL {
        // Note: the file actually goes to the temporary directory;
        // the `paths` lookup below is unused in this snippet.
        let paths = FileManager.default.urls(for: .musicDirectory,
                                             in: .userDomainMask)
        let documentsDirectory = URL(fileURLWithPath: NSTemporaryDirectory())
        let currentTime = Int(Date().timeIntervalSince1970 * 1000)
        let outputURL = URL(fileURLWithPath: "SpeechRecorder-\(currentTime).m4a",
                            relativeTo: documentsDirectory)
        destinationUrl = outputURL
        return outputURL
    }
}
extension OSStatus {

    //**************************
    func asString() -> String? {
        let n = UInt32(bitPattern: self.littleEndian)
        guard let n1 = UnicodeScalar((n >> 24) & 255), n1.isASCII else { return nil }
        guard let n2 = UnicodeScalar((n >> 16) & 255), n2.isASCII else { return nil }
        guard let n3 = UnicodeScalar((n >> 8) & 255), n3.isASCII else { return nil }
        guard let n4 = UnicodeScalar( n & 255), n4.isASCII else { return nil }
        return String(n1) + String(n2) + String(n3) + String(n4)
    } // asString

    //**************************
    func detailedErrorMessage() -> String {
        switch(self) {
        case 0:
            return "Success"

        // AVAudioRecorder errors
        case kAudioFileUnspecifiedError:
            return "kAudioFileUnspecifiedError"
        case kAudioFileUnsupportedFileTypeError:
            return "kAudioFileUnsupportedFileTypeError"
        case kAudioFileUnsupportedDataFormatError:
            return "kAudioFileUnsupportedDataFormatError"
        case kAudioFileUnsupportedPropertyError:
            return "kAudioFileUnsupportedPropertyError"
        case kAudioFileBadPropertySizeError:
            return "kAudioFileBadPropertySizeError"
        case kAudioFilePermissionsError:
            return "kAudioFilePermissionsError"
        case kAudioFileNotOptimizedError:
            return "kAudioFileNotOptimizedError"
        case kAudioFileInvalidChunkError:
            return "kAudioFileInvalidChunkError"
        case kAudioFileDoesNotAllow64BitDataSizeError:
            return "kAudioFileDoesNotAllow64BitDataSizeError"
        case kAudioFileInvalidPacketOffsetError:
            return "kAudioFileInvalidPacketOffsetError"
        case kAudioFileInvalidFileError:
            return "kAudioFileInvalidFileError"
        case kAudioFileOperationNotSupportedError:
            return "kAudioFileOperationNotSupportedError"
        case kAudioFileNotOpenError:
            return "kAudioFileNotOpenError"
        case kAudioFileEndOfFileError:
            return "kAudioFileEndOfFileError"
        case kAudioFilePositionError:
            return "kAudioFilePositionError"
        case kAudioFileFileNotFoundError:
            return "kAudioFileFileNotFoundError"

        //***** AUGraph errors
        case kAUGraphErr_NodeNotFound: return "AUGraph Node Not Found"
        case kAUGraphErr_InvalidConnection: return "AUGraph Invalid Connection"
        case kAUGraphErr_OutputNodeErr: return "AUGraph Output Node Error"
        case kAUGraphErr_CannotDoInCurrentContext: return "AUGraph Cannot Do In Current Context"
        case kAUGraphErr_InvalidAudioUnit: return "AUGraph Invalid Audio Unit"

        //***** MIDI errors
        case kMIDIInvalidClient: return "MIDI Invalid Client"
        case kMIDIInvalidPort: return "MIDI Invalid Port"
        case kMIDIWrongEndpointType: return "MIDI Wrong Endpoint Type"
        case kMIDINoConnection: return "MIDI No Connection"
        case kMIDIUnknownEndpoint: return "MIDI Unknown Endpoint"
        case kMIDIUnknownProperty: return "MIDI Unknown Property"
        case kMIDIWrongPropertyType: return "MIDI Wrong Property Type"
        case kMIDINoCurrentSetup: return "MIDI No Current Setup"
        case kMIDIMessageSendErr: return "MIDI Message Send Error"
        case kMIDIServerStartErr: return "MIDI Server Start Error"
        case kMIDISetupFormatErr: return "MIDI Setup Format Error"
        case kMIDIWrongThread: return "MIDI Wrong Thread"
        case kMIDIObjectNotFound: return "MIDI Object Not Found"
        case kMIDIIDNotUnique: return "MIDI ID Not Unique"
        case kMIDINotPermitted: return "MIDI Not Permitted"

        //***** AudioToolbox errors
        case kAudioToolboxErr_CannotDoInCurrentContext: return "AudioToolbox Cannot Do In Current Context"
        case kAudioToolboxErr_EndOfTrack: return "AudioToolbox End Of Track"
        case kAudioToolboxErr_IllegalTrackDestination: return "AudioToolbox Illegal Track Destination"
        case kAudioToolboxErr_InvalidEventType: return "AudioToolbox Invalid Event Type"
        case kAudioToolboxErr_InvalidPlayerState: return "AudioToolbox Invalid Player State"
        case kAudioToolboxErr_InvalidSequenceType: return "AudioToolbox Invalid Sequence Type"
        case kAudioToolboxErr_NoSequence: return "AudioToolbox No Sequence"
        case kAudioToolboxErr_StartOfTrack: return "AudioToolbox Start Of Track"
        case kAudioToolboxErr_TrackIndexError: return "AudioToolbox Track Index Error"
        case kAudioToolboxErr_TrackNotFound: return "AudioToolbox Track Not Found"
        case kAudioToolboxError_NoTrackDestination: return "AudioToolbox No Track Destination"

        //***** AudioUnit errors
        case kAudioUnitErr_CannotDoInCurrentContext: return "AudioUnit Cannot Do In Current Context"
        case kAudioUnitErr_FailedInitialization: return "AudioUnit Failed Initialization"
        case kAudioUnitErr_FileNotSpecified: return "AudioUnit File Not Specified"
        case kAudioUnitErr_FormatNotSupported: return "AudioUnit Format Not Supported"
        case kAudioUnitErr_IllegalInstrument: return "AudioUnit Illegal Instrument"
        case kAudioUnitErr_Initialized: return "AudioUnit Initialized"
        case kAudioUnitErr_InvalidElement: return "AudioUnit Invalid Element"
        case kAudioUnitErr_InvalidFile: return "AudioUnit Invalid File"
        case kAudioUnitErr_InvalidOfflineRender: return "AudioUnit Invalid Offline Render"
        case kAudioUnitErr_InvalidParameter: return "AudioUnit Invalid Parameter"
        case kAudioUnitErr_InvalidProperty: return "AudioUnit Invalid Property"
        case kAudioUnitErr_InvalidPropertyValue: return "AudioUnit Invalid Property Value"
        case kAudioUnitErr_InvalidScope: return "AudioUnit Invalid Scope"
        case kAudioUnitErr_InstrumentTypeNotFound: return "AudioUnit Instrument Type Not Found"
        case kAudioUnitErr_NoConnection: return "AudioUnit No Connection"
        case kAudioUnitErr_PropertyNotInUse: return "AudioUnit Property Not In Use"
        case kAudioUnitErr_PropertyNotWritable: return "AudioUnit Property Not Writable"
        case kAudioUnitErr_TooManyFramesToProcess: return "AudioUnit Too Many Frames To Process"
        case kAudioUnitErr_Unauthorized: return "AudioUnit Unauthorized"
        case kAudioUnitErr_Uninitialized: return "AudioUnit Uninitialized"
        case kAudioUnitErr_UnknownFileType: return "AudioUnit Unknown File Type"
        case kAudioUnitErr_RenderTimeout: return "AudioUnit Render Timeout"

        //***** Audio errors
        case kAudio_BadFilePathError: return "Audio Bad File Path Error"
        case kAudio_FileNotFoundError: return "Audio File Not Found Error"
        case kAudio_FilePermissionError: return "Audio File Permission Error"
        case kAudio_MemFullError: return "Audio Mem Full Error"
        case kAudio_ParamError: return "Audio Param Error"
        case kAudio_TooManyFilesOpenError: return "Audio Too Many Files Open Error"
        case kAudio_UnimplementedError: return "Audio Unimplemented Error"

        default: return "Unknown error (no description)"
        }
    }
}
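Once microphone access has been granted, a minimal usage sketch of the class above looks like this (keeping the recorder alive in a property of whatever owns it is my assumption, not part of the answer):

let speechRecorder = SpeechRecorder()
speechRecorder.start()   // starts recording to a timestamped SpeechRecorder-<milliseconds>.m4a file
// ... speak into the microphone ...
speechRecorder.stop()    // stops recording and immediately plays the captured file back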
The workaround for the inPropertyData == NULL issue is adapted from Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL.
The code that provides string messages for the OSStatus codes is adapted from here: How do you convert an iPhone OSStatus code to something useful?
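As a quick illustration of the conversion (using one of the Core Audio constants handled in the switch above):

let status: OSStatus = kAudioFileUnsupportedDataFormatError
print(status.asString() ?? "not a four-character code")   // prints "fmt?"
print(status.detailedErrorMessage())                      // prints "kAudioFileUnsupportedDataFormatError"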
Answered by lishrimp
The reason your code does not play the audio is that the audioPlayer variable is released as soon as execution reaches the end of the method block.
So move the following variable outside the method body, for example by declaring it as an instance variable (or a strong property) of the class; then the audio will play correctly.
AVAudioPlayer *audioPlayer;
By the way, your code snippet was very helpful for me! :D
Answered by joseph
Here is the snippet for Mac:
NSDictionary *soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                              [NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
                              [NSNumber numberWithInt: 2], AVNumberOfChannelsKey,
                              [NSNumber numberWithInt: AVAudioQualityHigh], AVEncoderAudioQualityKey, nil];

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
// Note: the settings above request AAC, so a .caf or .m4a container may be a better fit than .wav.
NSURL *audioFileURL = [NSURL fileURLWithPath:[documentsDirectory stringByAppendingString:@"/test.wav"]];

NSError *error = nil;
AVAudioRecorder *soundRecorder = [[AVAudioRecorder alloc] initWithURL:audioFileURL settings:soundSetting error:&error];
if (error)
{
    NSLog(@"Error! soundRecorder initialization failed...");
}

// start recording
[soundRecorder record];