Android PCM -> AAC (Encoder) -> PCM (Decoder) in real-time with correct optimization
Note: this page is a mirror of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
原文地址: http://stackoverflow.com/questions/21804390/
PCM -> AAC (Encoder) -> PCM(Decoder) in real-time with correct optimization
Asked by
I'm trying to implement
AudioRecord (MIC) ->
PCM -> AAC Encoder
AAC -> PCM Decode
-> AudioTrack?? (SPEAKER)
with MediaCodec
on Android 4.1+ (API16).
Firstly, I successfully (but not sure correctly optimized) implemented PCM -> AAC Encoder
by MediaCodec
as intended as below
private boolean setEncoder(int rate)
{
    encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024); //AAC-HE 64kbps
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    return true;
}
INPUT: PCM bitrate = 44100 (Hz) x 16 (bit) x 1 (mono) = 705600 bit/s
OUTPUT: AAC-HE Bitrate = 64 x 1024(bit) = 65536 bit/s
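As a sanity check — assuming for the moment that 64 * 1024 bit/s really is the produced bitrate — the compression ratio and the expected encoded size of one 4096-byte recorder buffer follow from plain arithmetic:

```java
public class BitrateCheck {
    public static void main(String[] args) {
        int pcmBitrate = 44100 * 16 * 1;    // 705600 bit/s: 16-bit mono PCM
        int aacBitrate = 64 * 1024;         // 65536 bit/s as configured above
        double ratio = (double) pcmBitrate / aacBitrate;
        // One 4096-byte PCM buffer should shrink to roughly 4096 / ratio bytes of AAC.
        System.out.printf("ratio=%.2f, expected AAC bytes per buffer=%.0f%n",
                ratio, 4096 / ratio);       // ratio=10.77, expected AAC bytes per buffer=380
    }
}
```

The ~380-byte estimate is in the same ballpark as the 360-369 encoded bytes reported in the logs.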
So, the data size is compressed by approximately 11x, and I confirmed this working by observing the log:
- AudioRecoder﹕ 4096 bytes read
- AudioEncoder﹕ 369 bytes encoded
The data size is compressed by approximately 11x; so far, so good.
Now, I have a UDP server to receive the encoded data, then decode it.
The decoder profile is set as follows:
private boolean setDecoder(int rate)
{
    decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024); //AAC-HE 64kbps
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
    decoder.configure(format, null, null, 0);
    return true;
}
Since the UDP server's packet buffer size is 1024
- UDPserver ﹕ 1024 bytes received
and since this is the compressed AAC data, I would expect the decoded size to be
approximately 1024 x 11; however, the actual result is
- AudioDecoder﹕ 8192 bytes decoded
It's only approximately 8x, and I feel something is wrong.
The decoder code is as follows:
IOudpPlayer = new Thread(new Runnable()
{
    public void run()
    {
        SocketAddress sockAddress;
        String address;
        int len = 1024;
        byte[] buffer2 = new byte[len];
        DatagramPacket packet;
        byte[] data;
        ByteBuffer[] inputBuffers;
        ByteBuffer[] outputBuffers;
        ByteBuffer inputBuffer;
        ByteBuffer outputBuffer;
        MediaCodec.BufferInfo bufferInfo;
        int inputBufferIndex;
        int outputBufferIndex;
        byte[] outData;
        try
        {
            decoder.start();
            isPlaying = true;
            while (isPlaying)
            {
                try
                {
                    packet = new DatagramPacket(buffer2, len);
                    ds.receive(packet);
                    sockAddress = packet.getSocketAddress();
                    address = sockAddress.toString();
                    Log.d("UDP Receiver", " received !!! from " + address);
                    data = new byte[packet.getLength()];
                    System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
                    Log.d("UDP Receiver", data.length + " bytes received");
                    //===========
                    inputBuffers = decoder.getInputBuffers();
                    outputBuffers = decoder.getOutputBuffers();
                    inputBufferIndex = decoder.dequeueInputBuffer(-1);
                    if (inputBufferIndex >= 0)
                    {
                        inputBuffer = inputBuffers[inputBufferIndex];
                        inputBuffer.clear();
                        inputBuffer.put(data);
                        decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
                    }
                    bufferInfo = new MediaCodec.BufferInfo();
                    outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
                    while (outputBufferIndex >= 0)
                    {
                        outputBuffer = outputBuffers[outputBufferIndex];
                        outputBuffer.position(bufferInfo.offset);
                        outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                        outData = new byte[bufferInfo.size];
                        outputBuffer.get(outData);
                        Log.d("AudioDecoder", outData.length + " bytes decoded");
                        decoder.releaseOutputBuffer(outputBufferIndex, false);
                        outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
                    }
                    //===========
                }
                catch (IOException e)
                {
                }
            }
            decoder.stop();
        }
        catch (Exception e)
        {
        }
    }
});
The full code:
https://gist.github.com/kenokabe/9029256
It also needs these permissions:
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
<uses-permission android:name="android.permission.RECORD_AUDIO"></uses-permission>
A member fadden, who works for Google, told me:
Looks like I'm not setting position & limit on the output buffer.
I have read VP8 Encoding Nexus 5 returns empty/0-Frames, but I'm not sure how to implement it correctly.
UPDATE: I sort of understood where to modify for
Looks like I'm not setting position & limit on the output buffer.
, so I added 2 lines within the while loops of the Encoder and the Decoder as follows:
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
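Why these two lines matter can be seen from plain java.nio.ByteBuffer semantics, independent of MediaCodec: get() reads from the buffer's current position up to its limit, so unless both are set to the region described by BufferInfo, the copy may pick up the wrong bytes. A minimal, standalone illustration:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class PositionLimitDemo {
    public static void main(String[] args) {
        // Simulate a codec output buffer: 16 bytes, of which only the
        // region at offset=4 with size=3 holds this frame's valid data.
        ByteBuffer outputBuffer = ByteBuffer.allocate(16);
        for (int i = 0; i < 16; i++) outputBuffer.put((byte) i);

        int offset = 4, size = 3;           // what BufferInfo would report
        outputBuffer.position(offset);      // start of the valid region
        outputBuffer.limit(offset + size);  // end of the valid region

        byte[] outData = new byte[size];
        outputBuffer.get(outData);          // copies exactly bytes 4, 5, 6
        System.out.println(Arrays.toString(outData)); // [4, 5, 6]
    }
}
```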
https://gist.github.com/kenokabe/9029256/revisions
However, the result is the same.
And now, I think, the error:
W/SoftAAC2﹕ AAC decoder returned error 16388, substituting silence.
indicates that this decoder fails completely from the start. It's the "data is not seekable" issue again (see Seeking in AAC streams on Android). It would be very disappointing if the AAC decoder cannot handle streaming data this way, but only with some header added.
UPDATE2: The UDP receiver was doing it wrong, so I modified it:
https://gist.github.com/kenokabe/9029256
Now, the error
W/SoftAAC2﹕ AAC decoder returned error 16388, substituting silence.
disappeared!!
So it indicates that the decoder works without an error, at least;
however, this is the log of one cycle:
D/AudioRecoder﹕ 4096 bytes read
D/AudioEncoder﹕ 360 bytes encoded
D/UDP Receiver﹕ received !!! from /127.0.0.1:39000
D/UDP Receiver﹕ 360 bytes received
D/AudioDecoder﹕ 8192 bytes decoded
PCM(4096)->AACencoded(360)->UDP-AAC(360)->(supposed to be )PCM(8192)
The final result is about 2x the size of the original PCM; something is still wrong.
So my questions here are:
Can you properly optimize my sample code to work correctly?
Is using the AudioTrack API the right way to play the decoded raw PCM data on the fly, and can you show me the proper way to do that? Example code is appreciated.
Thank you.
PS. My project targets Android 4.1+ (API 16). I've read that things are easier on API 18 (Android 4.3+), but for obvious compatibility reasons, unfortunately, I have to skip MediaMuxer etc. here...
Answered by sexp1stol
After testing, this is what I came up with from modifying your code:
package com.example.app;
import android.app.Activity;
import android.media.AudioManager;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.os.Bundle;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaCodec;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
public class MainActivity extends Activity
{
private AudioRecord recorder;
private AudioTrack player;
private MediaCodec encoder;
private MediaCodec decoder;
private short audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private short channelConfig = AudioFormat.CHANNEL_IN_MONO;
private int bufferSize;
private boolean isRecording;
private boolean isPlaying;
private Thread IOrecorder;
private Thread IOudpPlayer;
private DatagramSocket ds;
private final int localPort = 39000;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
IOrecorder = new Thread(new Runnable()
{
public void run()
{
int read;
byte[] buffer1 = new byte[bufferSize];
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
DatagramPacket packet;
try
{
encoder.start();
recorder.startRecording();
isRecording = true;
while (isRecording)
{
read = recorder.read(buffer1, 0, bufferSize);
// Log.d("AudioRecoder", read + " bytes read");
//------------------------
inputBuffers = encoder.getInputBuffers();
outputBuffers = encoder.getOutputBuffers();
inputBufferIndex = encoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
{
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(buffer1);
encoder.queueInputBuffer(inputBufferIndex, 0, buffer1.length, 0, 0);
}
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
{
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioEncoder ", outData.length + " bytes encoded");
//-------------
packet = new DatagramPacket(outData, outData.length,
InetAddress.getByName("127.0.0.1"), localPort);
ds.send(packet);
//------------
encoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
}
// ----------------------;
}
encoder.stop();
recorder.stop();
}
catch (Exception e)
{
e.printStackTrace();
}
}
});
IOudpPlayer = new Thread(new Runnable()
{
public void run()
{
SocketAddress sockAddress;
String address;
int len = 2048;
byte[] buffer2 = new byte[len];
DatagramPacket packet;
byte[] data;
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
try
{
player.play();
decoder.start();
isPlaying = true;
while (isPlaying)
{
try
{
packet = new DatagramPacket(buffer2, len);
ds.receive(packet);
sockAddress = packet.getSocketAddress();
address = sockAddress.toString();
// Log.d("UDP Receiver"," received !!! from " + address);
data = new byte[packet.getLength()];
System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
// Log.d("UDP Receiver", data.length + " bytes received");
//===========
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
{
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(data);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
}
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
{
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioDecoder", outData.length + " bytes decoded");
player.write(outData, 0, outData.length);
decoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
}
//===========
}
catch (IOException e)
{
}
}
decoder.stop();
player.stop();
}
catch (Exception e)
{
}
}
});
Answered by sexp1stol
Self answer: here's my best effort so far.
package com.example.app;
import android.app.Activity;
import android.media.AudioManager;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.os.Bundle;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaCodec;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
public class MainActivity extends Activity
{
private AudioRecord recorder;
private AudioTrack player;
private MediaCodec encoder;
private MediaCodec decoder;
private short audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private short channelConfig = AudioFormat.CHANNEL_IN_MONO;
private int bufferSize;
private boolean isRecording;
private boolean isPlaying;
private Thread IOrecorder;
private Thread IOudpPlayer;
private DatagramSocket ds;
private final int localPort = 39000;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
IOrecorder = new Thread(new Runnable()
{
public void run()
{
int read;
byte[] buffer1 = new byte[bufferSize];
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
DatagramPacket packet;
try
{
encoder.start();
recorder.startRecording();
isRecording = true;
while (isRecording)
{
read = recorder.read(buffer1, 0, bufferSize);
// Log.d("AudioRecoder", read + " bytes read");
//------------------------
inputBuffers = encoder.getInputBuffers();
outputBuffers = encoder.getOutputBuffers();
inputBufferIndex = encoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
{
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(buffer1);
encoder.queueInputBuffer(inputBufferIndex, 0, buffer1.length, 0, 0);
}
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
{
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioEncoder", outData.length + " bytes encoded");
//-------------
packet = new DatagramPacket(outData, outData.length,
InetAddress.getByName("127.0.0.1"), localPort);
ds.send(packet);
//------------
encoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
}
// ----------------------;
}
encoder.stop();
recorder.stop();
}
catch (Exception e)
{
e.printStackTrace();
}
}
});
IOudpPlayer = new Thread(new Runnable()
{
public void run()
{
SocketAddress sockAddress;
String address;
int len = 1024;
byte[] buffer2 = new byte[len];
DatagramPacket packet;
byte[] data;
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
try
{
player.play();
decoder.start();
isPlaying = true;
while (isPlaying)
{
try
{
packet = new DatagramPacket(buffer2, len);
ds.receive(packet);
sockAddress = packet.getSocketAddress();
address = sockAddress.toString();
// Log.d("UDP Receiver"," received !!! from " + address);
data = new byte[packet.getLength()];
System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
// Log.d("UDP Receiver", data.length + " bytes received");
//===========
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
{
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(data);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
}
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
{
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioDecoder", outData.length + " bytes decoded");
player.write(outData, 0, outData.length);
decoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
}
//===========
}
catch (IOException e)
{
}
}
decoder.stop();
player.stop();
}
catch (Exception e)
{
}
}
});
//===========================================================
int rate = findAudioRecord();
if (rate != -1)
{
Log.v("=========media ", "ready: " + rate);
Log.v("=========media channel ", "ready: " + channelConfig);
boolean encoderReady = setEncoder(rate);
Log.v("=========encoder ", "ready: " + encoderReady);
if (encoderReady)
{
boolean decoderReady = setDecoder(rate);
Log.v("=========decoder ", "ready: " + decoderReady);
if (decoderReady)
{
Log.d("=======bufferSize========", "" + bufferSize);
try
{
setPlayer(rate);
ds = new DatagramSocket(localPort);
IOudpPlayer.start();
IOrecorder.start();
}
catch (SocketException e)
{
e.printStackTrace();
}
}
}
}
}
protected void onDestroy()
{
recorder.release();
player.release();
encoder.release();
decoder.release();
}
/*
protected void onResume()
{
// isRecording = true;
}
protected void onPause()
{
isRecording = false;
}
*/
private int findAudioRecord()
{
for (int rate : new int[]{44100})
{
try
{
Log.v("===========Attempting rate ", rate + "Hz, bits: " + audioFormat + ", channel: " + channelConfig);
bufferSize = AudioRecord.getMinBufferSize(rate, channelConfig, audioFormat);
if (bufferSize != AudioRecord.ERROR_BAD_VALUE)
{
// check if we can instantiate and have a success
recorder = new AudioRecord(AudioSource.MIC, rate, channelConfig, audioFormat, bufferSize);
if (recorder.getState() == AudioRecord.STATE_INITIALIZED)
{
Log.v("===========final rate ", rate + "Hz, bits: " + audioFormat + ", channel: " + channelConfig);
return rate;
}
}
}
catch (Exception e)
{
Log.v("error", "" + rate);
}
}
return -1;
}
private boolean setEncoder(int rate)
{
encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, rate);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024);//AAC-HE 64kbps
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
return true;
}
private boolean setDecoder(int rate)
{
decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, rate);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024);//AAC-HE 64kbps
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
decoder.configure(format, null, null, 0);
return true;
}
private boolean setPlayer(int rate)
{
int bufferSizePlayer = AudioTrack.getMinBufferSize(rate, AudioFormat.CHANNEL_OUT_MONO, audioFormat);
Log.d("====buffer Size player ", String.valueOf(bufferSizePlayer));
player= new AudioTrack(AudioManager.STREAM_MUSIC, rate, AudioFormat.CHANNEL_OUT_MONO, audioFormat, bufferSizePlayer, AudioTrack.MODE_STREAM);
if (player.getState() == AudioTrack.STATE_INITIALIZED)
{
return true;
}
else
{
return false;
}
}
}
Answered by Sojan P R
I have tried the above code and it didn't work properly. I was getting a lot of silence injected into the decoded output. The issue was not setting the proper "csd" value for the decoder.
So if you see "silence" in the log, or the decoder throws an error, make sure you have added the following to your media decoder format:
int profile = 2; //AAC LC
int freqIdx = 11; //8KHz
int chanCfg = 1; //Mono
ByteBuffer csd = ByteBuffer.allocate(2);
csd.put(0, (byte) (profile << 3 | freqIdx >> 1));
csd.put(1, (byte)((freqIdx & 0x01) << 7 | chanCfg << 3));
mediaFormat.setByteBuffer("csd-0", csd);
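The freqIdx value is the AAC sampling-frequency index from the AudioSpecificConfig table (4 = 44100 Hz, 8 = 16000 Hz, 11 = 8000 Hz), so for the 44100 Hz mono stream in the question it would be 4, not 11. Since the two csd-0 bytes are plain bit-packing, they can be computed and checked in ordinary Java (buildCsd is just an illustrative helper name):

```java
import java.nio.ByteBuffer;

public class CsdBuilder {
    // Packs the 2-byte AAC AudioSpecificConfig ("csd-0").
    static ByteBuffer buildCsd(int profile, int freqIdx, int chanCfg) {
        ByteBuffer csd = ByteBuffer.allocate(2);
        csd.put(0, (byte) (profile << 3 | freqIdx >> 1));
        csd.put(1, (byte) ((freqIdx & 0x01) << 7 | chanCfg << 3));
        return csd;
    }

    public static void main(String[] args) {
        // AAC-LC (profile 2), 44100 Hz (freqIdx 4), mono (chanCfg 1)
        ByteBuffer csd = buildCsd(2, 4, 1);
        System.out.printf("0x%02X 0x%02X%n", csd.get(0), csd.get(1)); // 0x12 0x08
    }
}
```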
Answered by cantonics
D/AudioRecoder﹕ 4096 bytes read
D/AudioEncoder﹕ 360 bytes encoded
D/UDP Receiver﹕ received !!! from /127.0.0.1:39000
D/UDP Receiver﹕ 360 bytes received
D/AudioDecoder﹕ 8192 bytes decoded
This is because the AAC decoder always decodes to stereo channels, even if the encoded data is mono. So if your encoding side is set to stereo channels, it will look like:
D/AudioRecoder﹕ 8192 bytes read
D/AudioEncoder﹕ 360 bytes encoded
D/UDP Receiver﹕ received !!! from /127.0.0.1:39000
D/UDP Receiver﹕ 360 bytes received
D/AudioDecoder﹕ 8192 bytes decoded
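The 2x figure in the question's log is consistent with this explanation: a 4096-byte mono 16-bit buffer holds 2048 sample frames, and those same 2048 frames written back out as 16-bit stereo occupy exactly 8192 bytes:

```java
public class StereoSizeCheck {
    public static void main(String[] args) {
        int readBytes = 4096;              // one mono, 16-bit buffer from AudioRecord
        int bytesPerSample = 2;            // 16-bit PCM
        int frames = readBytes / bytesPerSample;         // 2048 frames (1 channel in)
        int decodedBytes = frames * bytesPerSample * 2;  // 2 channels out
        System.out.println(decodedBytes);  // 8192
    }
}
```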
Answered by Stephen
I have tested with your source. There are some points:
Bit rate uses the natural-number k, not the computer K: 64k = 64000, not 64 * 1024.
It's not recommended to write long code that shares variables. A. Separate the encoder thread and the decoder thread into 2 independent classes. B. The DatagramSocket is shared by the sender and the receiver, which is not good.
Enumerating audio formats needs more values, i.e. sample rates should be picked from: 8000, 11025, 22050, 44100.
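The bit-rate point is easy to quantify: the posted code's 64 * 1024 asks the encoder for 65536 bit/s, while "64 kbps" in codec terms conventionally means 64000 bit/s:

```java
public class BitrateUnits {
    public static void main(String[] args) {
        int binaryK = 64 * 1024;   // 65536 bit/s: what the posted code requests
        int decimalK = 64 * 1000;  // 64000 bit/s: what "64 kbps" conventionally means
        System.out.println((binaryK - decimalK) + " bit/s difference"); // 1536 bit/s difference
    }
}
```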
Answered by marcone
Your networking code is combining data. You got 369 bytes of compressed data, but on the receiving end you ended up with 1024 bytes. Those 1024 bytes consist of two whole and one partial frame. The two whole frames each decode to 4096 bytes again, for the total of 8192 bytes that you saw. The remaining partial frame will probably be decoded once you send sufficiently more data to the decoder, but you should generally send only whole frames to the decoder.
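One way to guarantee that only whole frames reach the decoder — a hypothetical sketch, not part of the original code — is to length-prefix each encoded frame on the sending side, so the receiver can split whatever it has buffered back on exact frame boundaries:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class FrameCodec {
    // Prefix an encoded AAC frame with a 2-byte big-endian length.
    static byte[] frame(byte[] payload) {
        ByteBuffer out = ByteBuffer.allocate(2 + payload.length);
        out.putShort((short) payload.length);
        out.put(payload);
        return out.array();
    }

    // Split received bytes back into whole frames; a trailing partial
    // frame is ignored here (a real receiver would keep it for later).
    static List<byte[]> deframe(byte[] buf, int len) {
        List<byte[]> frames = new ArrayList<>();
        ByteBuffer in = ByteBuffer.wrap(buf, 0, len);
        while (in.remaining() >= 2) {
            int size = in.getShort() & 0xFFFF;
            if (in.remaining() < size) break;  // incomplete frame: stop
            byte[] f = new byte[size];
            in.get(f);
            frames.add(f);
        }
        return frames;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.write(frame(new byte[360]));      // two whole frames, sizes from the log
        wire.write(frame(new byte[369]));
        byte[] received = wire.toByteArray();
        System.out.println(deframe(received, received.length).size()); // 2
    }
}
```

Each deframed chunk can then be queued into the decoder one frame at a time.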
In addition, MediaCodec.dequeueOutputBuffer()
does not only return (positive) buffer indices, but also (negative) status codes. One of the possible codes is MediaCodec.INFO_OUTPUT_FORMAT_CHANGED
, which indicates that you need to call MediaCodec.getOutputFormat()
to get the format of the audio data. You might see the codec output stereo even if the input was mono. The code you posted simply breaks out of the loop when it receives one of these status codes.