Streaming audio and video from Android to PC/web (java)
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/6864352/
Asked by ajs
I am a recent beginner to the Android SDK, and the overall goal of this project is to create an app very similar to Ustream's or Qik's (yeah, I know, not the best idea for a beginner). I need to stream live audio and video to the web. There will be a video server, most likely using Wowza, handling the encoding of the videos to the proper format.
From what I have found so far, I need to use Android's MediaRecorder with the camera as the source and direct the output to the server. That makes sense to me, but I do not know exactly how to go about doing it. Can anyone give me a push in the right direction? I have browsed through an example at "http://ipcamera-for-android.googlecode.com/svn/trunk", but it appears far more complicated than necessary for what I need, and I have been unable to get it working in Eclipse to test it anyway.
Answered by Lior Ohana
Doing so is not simple, but it is possible.
The MediaRecorder API assumes that the output is a random-access file, meaning it can seek back and forth while writing the mp4 (or other) file container. As you can see in ipcamera-for-android, the output is directed to a socket, which is not random-access. This makes it hard to parse the outgoing stream, because the MediaRecorder API "writes" some data such as fps and sps/pps (for h264) only when the recording is done. The API will try to seek back to the beginning of the stream (where the file header lives), but it will fail, since the stream is sent to a socket and not to a file.
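The back-patching described above can be illustrated with a plain-Java sketch (nothing Android-specific here; BackPatchDemo and writeAndPatch are made-up names for illustration): a size field is written as a placeholder, the payload follows, and the writer then seeks back to fill in the real size. That seek(0) is exactly the step that fails when the output is a socket instead of a file.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class BackPatchDemo {
    // Writes a placeholder size, then the payload, then seeks back to
    // patch the size field -- the pattern MediaRecorder relies on.
    static int writeAndPatch(Path file, byte[] payload) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.writeInt(0);               // placeholder for the size field
            raf.write(payload);            // the actual data
            raf.seek(0);                   // impossible on a socket stream!
            raf.writeInt(payload.length);  // patch in the real size
        }
        byte[] all = Files.readAllBytes(file);
        return ByteBuffer.wrap(all, 0, 4).getInt();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        byte[] payload = "frame-data".getBytes(StandardCharsets.US_ASCII);
        System.out.println(writeAndPatch(tmp, payload)); // prints 10
    }
}
```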
ipcamera-for-android is a good reference: if I recall correctly, before streaming it records a video to a file, opens the header, and takes what it needs from there; then it starts recording to the socket and uses the data it took from the header in order to parse the stream.
You will also need some basic understanding of parsing mp4 (or whichever file container you want to use) in order to capture the frames. You can do that either on the device or on the server side.
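To get a feel for that kind of parsing, here is a minimal, hypothetical sketch (Mp4BoxScanner is not part of any library) that walks the top-level boxes of an MP4 buffer; real code would then descend into the moov box to extract what it needs, such as sps/pps:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class Mp4BoxScanner {
    // Walks the top-level boxes of an MP4 buffer and returns their
    // four-character types ("ftyp", "moov", "mdat", ...).
    static List<String> topLevelBoxes(byte[] data) {
        List<String> types = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(data);  // MP4 sizes are big-endian
        while (buf.remaining() >= 8) {
            int size = buf.getInt();             // 32-bit box size, incl. header
            byte[] type = new byte[4];
            buf.get(type);
            types.add(new String(type, StandardCharsets.US_ASCII));
            if (size < 8) break;                 // malformed or 64-bit size; stop
            int skip = size - 8;                 // skip the box body
            if (skip > buf.remaining()) break;   // truncated box
            buf.position(buf.position() + skip);
        }
        return types;
    }

    public static void main(String[] args) {
        // Synthetic buffer: an empty 8-byte "ftyp" box followed by an
        // empty 8-byte "moov" box (real boxes carry a body, of course).
        ByteBuffer demo = ByteBuffer.allocate(16);
        demo.putInt(8).put("ftyp".getBytes(StandardCharsets.US_ASCII));
        demo.putInt(8).put("moov".getBytes(StandardCharsets.US_ASCII));
        System.out.println(topLevelBoxes(demo.array())); // [ftyp, moov]
    }
}
```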
Here is a good start for writing the stream to a socket: Tutorial
I hope this is helpful. There is no good tutorial for parsing and decoding the outgoing stream, since it is not so simple... but again, it is possible with some effort.
Also take a look here to see how to direct the output stream to a stream that can be sent to the server: MediaRecorder Question
Answered by Laurent Grégtheitroade
SipDroid does exactly what you need.
It involves a hack to circumvent the limitation of the MediaRecorder class, which requires a file descriptor. It saves the MediaRecorder video stream to a local socket (used as a kind of pipe), then re-reads it (in the same application but on another thread) from the other end of this socket, creates RTP packets out of the received data, and finally broadcasts the RTP packets to the network (you can use broadcast or unicast mode here, as you wish).
Basically it boils down to the following (simplified code):
// Create a MediaRecorder
MediaRecorder mr = new MediaRecorder();
// (Initialize mr as usual)
// Create a LocalServerSocket
LocalServerSocket lss = new LocalServerSocket("foobar");
// Connect both ends of this socket
// (connect the client side first, so that accept() does not block forever)
LocalSocket receiver = new LocalSocket();
receiver.connect(new LocalSocketAddress("foobar"));
LocalSocket sender = lss.accept();
// Set the output of the MediaRecorder to the sender socket's file descriptor
mr.setOutputFile(sender.getFileDescriptor());
// Start the video recording:
mr.start();
// Launch a background thread that will loop,
// reading from the receiver socket,
// and creating an RTP packet out of the read data.
RtpSocket rtpSocket = new RtpSocket();
InputStream in = receiver.getInputStream();
byte[] buffer = new byte[4096];
while (true) {
    int len = in.read(buffer);
    // Here some data manipulation on the received buffer ...
    RtpPacket rtp = new RtpPacket(buffer, len);
    rtpSocket.send(rtp);
}
The implementation of the RtpPacket and RtpSocket classes (rather simple), and the exact code which manipulates the video stream content, can be found in the SipDroid project (especially VideoCamera.java).
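For a rough idea of what such an RtpPacket class does internally, here is a simplified, hypothetical sketch of building the fixed 12-byte RTP header from RFC 3550 (the real SipDroid classes also handle sequence wrap-around, marker bits, and payload copying):

```java
import java.nio.ByteBuffer;

public class RtpHeader {
    // Builds the fixed 12-byte RTP header (RFC 3550), simplified:
    // no CSRC list, no extension, no padding, marker bit cleared.
    static byte[] build(int payloadType, int sequence, long timestamp, long ssrc) {
        ByteBuffer buf = ByteBuffer.allocate(12); // big-endian by default
        buf.put((byte) 0x80);                     // V=2, P=0, X=0, CC=0
        buf.put((byte) (payloadType & 0x7F));     // M=0, payload type
        buf.putShort((short) sequence);           // sequence number
        buf.putInt((int) timestamp);              // timestamp
        buf.putInt((int) ssrc);                   // synchronization source
        return buf.array();
    }

    public static void main(String[] args) {
        // Payload type 96 (dynamic), first packet of the stream.
        byte[] h = build(96, 1, 90000L, 0x12345678L);
        System.out.printf("%02x %02x%n", h[0], h[1]); // 80 60
    }
}
```

The payload bytes would simply be appended after these 12 bytes before sending the datagram.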