ajax - WebSockets protocol vs HTTP
Disclaimer: this page reproduces a popular Stack Overflow question and its answers under the CC BY-SA 4.0 license. You are free to use and share it, but you must keep the original URL and author information and attribute it to the original authors (not me): StackOverFlow
Original URL: http://stackoverflow.com/questions/14703627/
WebSockets protocol vs HTTP
Asked by 4esn0k
There are many blogs and discussions about WebSocket and HTTP, and many developers and sites strongly advocate WebSockets, but I still cannot understand why.
For example (arguments of WebSocket advocates):
HTML5 Web Sockets represents the next evolution of web communications—a full-duplex, bidirectional communications channel that operates through a single socket over the Web. ( http://www.websocket.org/quantum.html)
HTTP supports streaming: request body streaming (you are using it while uploading large files) and response body streaming.
When communicating over a WebSocket connection, the client and server exchange data per frame, at 2 bytes each, compared to 8 kilobytes of HTTP header when you do continuous polling.
Why do those 2 bytes not include the overhead of TCP and the protocols below TCP?
GET /about.html HTTP/1.1
Host: example.org
That is a ~48-byte HTTP header.
HTTP chunked encoding (https://en.wikipedia.org/wiki/Chunked_transfer_encoding):
23
This is the data in the first chunk
1A
and this is the second one
3
con
8
sequence
0
- So, the overhead per chunk is not big.
Also, both protocols work over TCP, so all TCP issues with long-lived connections are still there.
Questions:
- Why is the WebSocket protocol better?
- Why was it implemented instead of updating the HTTP protocol?
Answered by kanaka
1) Why is the WebSockets protocol better?
WebSockets is better for situations that involve low-latency communication, especially for low latency of client-to-server messages. For server-to-client data you can get fairly low latency using long-held connections and chunked transfer. However, this doesn't help with client-to-server latency, which requires a new connection to be established for each client-to-server message.
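As an illustration of that asymmetry, here is a minimal long-polling sketch in browser JavaScript (the /events and /send endpoints are hypothetical): server-to-client messages arrive on an already-open request, while every client-to-server message still pays for a brand-new HTTP request.

// Long-poll loop: the server holds /events open until it has something to say.
async function longPoll(handleMessage) {
  while (true) {
    try {
      const res = await fetch('/events');            // held open by the server
      handleMessage(await res.text());               // low-latency server -> client
    } catch (err) {
      await new Promise(r => setTimeout(r, 1000));   // back off and retry
    }
  }
}

function sendToServer(msg) {
  // Client -> server: a fresh request (connection setup, headers, cookies) each time.
  return fetch('/send', { method: 'POST', body: msg });
}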
Your 48-byte HTTP handshake is not realistic for real-world HTTP browser connections, where there are often several kilobytes of data sent as part of the request (in both directions), including many headers and cookie data. Here is an example of a request/response pair using Chrome:
Example request (2800 bytes including cookie data, 490 bytes without cookie data):
GET / HTTP/1.1
Host: www.cnn.com
Connection: keep-alive
Cache-Control: no-cache
Pragma: no-cache
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.68 Safari/537.17
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: [[[2428 byte of cookie data]]]
Example response (355 bytes):
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 13 Feb 2013 18:56:27 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: CG=US:TX:Arlington; path=/
Last-Modified: Wed, 13 Feb 2013 18:55:22 GMT
Vary: Accept-Encoding
Cache-Control: max-age=60, private
Expires: Wed, 13 Feb 2013 18:56:54 GMT
Content-Encoding: gzip
Both HTTP and WebSockets have equivalently sized initial connection handshakes, but with a WebSocket connection the initial handshake is performed once, and then small messages only have 6 bytes of overhead (2 for the header and 4 for the mask value). The latency overhead is not so much from the size of the headers, but from the logic to parse/handle/store those headers. In addition, the TCP connection setup latency is probably a bigger factor than the size of or processing time for each request.
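As a rough sketch of where that 6-byte figure comes from, here is how a small masked client-to-server text frame is laid out under RFC 6455 (assuming a payload shorter than 126 bytes; larger payloads add 2 or 8 extra length bytes):

// Build a masked client-to-server WebSocket text frame (RFC 6455),
// assuming payload.length < 126 so the length fits in the second byte.
function buildClientTextFrame(payload, maskKey) {
  const data = new TextEncoder().encode(payload);
  const frame = new Uint8Array(2 + 4 + data.length); // 6 bytes of framing overhead
  frame[0] = 0x81;                    // FIN = 1, opcode = 0x1 (text)
  frame[1] = 0x80 | data.length;      // MASK = 1, 7-bit payload length
  frame.set(maskKey, 2);              // 4-byte masking key
  for (let i = 0; i < data.length; i++) {
    frame[6 + i] = data[i] ^ maskKey[i % 4]; // payload XOR-masked with the key
  }
  return frame;
}
// Example: a 5-byte message becomes an 11-byte frame (6 bytes of overhead).
console.log(buildClientTextFrame('hello', new Uint8Array([0x12, 0x34, 0x56, 0x78])).length); // 11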
2) Why was it implemented instead of updating HTTP protocol?
There are efforts to re-engineer the HTTP protocol to achieve better performance and lower latency, such as SPDY, HTTP 2.0 and QUIC. These will improve the situation for normal HTTP requests, but it is likely that WebSockets and/or the WebRTC DataChannel will still have lower latency for client-to-server data transfer than the HTTP protocol (or HTTP will be used in a mode that looks a lot like WebSockets anyway).
Update:
Here is a framework for thinking about web protocols:
- TCP: low-level, bi-directional, full-duplex, and guaranteed order transport layer. No browser support (except via plugin/Flash).
- HTTP 1.0: request-response transport protocol layered on TCP. The client makes one full request, the server gives one full response, and then the connection is closed. The request methods (GET, POST, HEAD) have specific transactional meaning for resources on the server.
- HTTP 1.1: maintains the request-response nature of HTTP 1.0, but allows the connection to stay open for multiple full requests/full responses (one response per request). Still has full headers in the request and response, but the connection is re-used and not closed. HTTP 1.1 also added some additional request methods (OPTIONS, PUT, DELETE, TRACE, CONNECT) which also have specific transactional meanings. However, as noted in the introduction to the HTTP 2.0 draft proposal, HTTP 1.1 pipelining is not widely deployed, so this greatly limits the utility of HTTP 1.1 to solve latency between browsers and servers.
- Long-poll: sort of a "hack" to HTTP (either 1.0 or 1.1) where the server does not respond immediately (or only responds partially with headers) to the client request. After a server response, the client immediately sends a new request (using the same connection if over HTTP 1.1).
- HTTP streaming: a variety of techniques (multipart/chunked response) that allow the server to send more than one response to a single client request. The W3C is standardizing this as Server-Sent Events using a text/event-stream MIME type. The browser API (which is fairly similar to the WebSocket API) is called the EventSource API (see the sketch after this list).
- Comet/server push: this is an umbrella term that includes both long-poll and HTTP streaming. Comet libraries usually support multiple techniques to try and maximize cross-browser and cross-server support.
- WebSockets: a transport layer built on TCP that uses an HTTP-friendly Upgrade handshake. Unlike TCP, which is a streaming transport, WebSockets is a message-based transport: messages are delimited on the wire and are re-assembled in full before delivery to the application. WebSocket connections are bi-directional, full-duplex and long-lived. After the initial handshake request/response, there are no transactional semantics and there is very little per-message overhead. The client and server may send messages at any time and must handle message receipt asynchronously.
- SPDY: a Google-initiated proposal to extend HTTP using a more efficient wire protocol while maintaining all HTTP semantics (request/response, cookies, encoding). SPDY introduces a new framing format (with length-prefixed frames) and specifies a way to layer HTTP request/response pairs onto the new framing layer. Headers can be compressed and new headers can be sent after the connection has been established. There are real-world implementations of SPDY in browsers and servers.
- HTTP 2.0: has similar goals to SPDY: reduce HTTP latency and overhead while preserving HTTP semantics. The current draft is derived from SPDY and defines an upgrade handshake and data framing that is very similar to the WebSocket standard for handshake and framing. An alternate HTTP 2.0 draft proposal (httpbis-speed-mobility) actually uses WebSockets for the transport layer and adds the SPDY multiplexing and HTTP mapping as a WebSocket extension (WebSocket extensions are negotiated during the handshake).
- WebRTC/CU-WebRTC: proposals to allow peer-to-peer connectivity between browsers. This may enable lower average and maximum latency communication because the underlying transport is SDP/datagram rather than TCP. This allows out-of-order delivery of packets/messages, which avoids the TCP issue of latency spikes caused by dropped packets that delay delivery of all subsequent packets (to guarantee in-order delivery).
- QUIC: an experimental protocol aimed at reducing web latency over that of TCP. On the surface, QUIC is very similar to TCP+TLS+SPDY implemented on UDP. QUIC provides multiplexing and flow control equivalent to HTTP/2, security equivalent to TLS, and connection semantics, reliability, and congestion control equivalent to TCP. Because TCP is implemented in operating system kernels and middlebox firmware, making significant changes to TCP is next to impossible. However, since QUIC is built on top of UDP, it suffers from no such limitations. QUIC is designed and optimised for HTTP/2 semantics.
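For concreteness, here is a minimal sketch of the Server-Sent Events / EventSource API mentioned above, assuming a hypothetical /stream endpoint that serves text/event-stream data; note that it is one-way (server to client), unlike a WebSocket:

// Server-Sent Events: a long-lived HTTP response the server keeps appending to.
// One-way only: the browser receives pushed events but replies via normal requests.
const source = new EventSource('/stream');   // hypothetical streaming endpoint
source.onmessage = (event) => {
  console.log('pushed from server:', event.data);
};
source.onerror = () => {
  // EventSource reconnects automatically; this fires on connection problems.
  console.log('stream interrupted, retrying...');
};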
References:
- HTTP:
- Server-Sent Event:
- WebSockets:
- SPDY:
- HTTP 2.0:
- IETF HTTP 2.0 httpbis-http2 Draft
- IETF HTTP 2.0 httpbis-speed-mobility Draft
- IETF httpbis-network-friendly Draft - an older HTTP 2.0-related proposal
- WebRTC:
- QUIC:
Answered by Philipp
You seem to assume that WebSocket is a replacement for HTTP. It is not. It's an extension.
The main use case of WebSockets is JavaScript applications which run in the web browser and receive real-time data from a server. Games are a good example.
Before WebSockets, the only method for JavaScript applications to interact with a server was through XMLHttpRequest. But it has a major disadvantage: the server can't send data unless the client has explicitly requested it.
But the new WebSocket feature allows the server to send data whenever it wants. This makes it possible to implement browser-based games with much lower latency and without having to use ugly hacks like AJAX long-polling or browser plugins.
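A minimal browser-side sketch of that, assuming a hypothetical game server at wss://example.com/game: once the connection is open, either side can send a message at any time.

// Open a WebSocket; the URL is a hypothetical game server used for illustration.
const socket = new WebSocket('wss://example.com/game');
socket.onopen = () => {
  // Client -> server: send whenever we like, with no new connection per message.
  socket.send(JSON.stringify({ type: 'join', room: 'lobby' }));
};
socket.onmessage = (event) => {
  // Server -> client: pushed without the client asking first.
  console.log('game state update:', JSON.parse(event.data));
};
socket.onclose = () => console.log('connection closed');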
So why not use normal HTTP with streamed requests and responses?
In a comment on another answer, you suggested just streaming the client request and response body asynchronously.
In fact, WebSockets are basically that. An attempt to open a WebSocket connection from the client looks like an HTTP request at first, but a special directive in the header (Upgrade: websocket) tells the server to start communicating in this asynchronous mode. First drafts of the WebSocket protocol weren't much more than that, plus some handshaking to ensure that the server actually understands that the client wants to communicate asynchronously. But then it was realized that proxy servers would be confused by that, because they are used to the usual request/response model of HTTP. A potential attack scenario against proxy servers was discovered. To prevent this it was necessary to make WebSocket traffic look unlike any normal HTTP traffic. That's why the masking keys were introduced in the final version of the protocol.
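As an illustration of how small that handshake is, here is a sketch (in Node.js, using its built-in crypto module) of how a server derives the Sec-WebSocket-Accept value from the client's Sec-WebSocket-Key during the Upgrade handshake:

const crypto = require('crypto');
// RFC 6455: the accept value is the base64-encoded SHA-1 of the client's key
// concatenated with a fixed GUID, proving the server understood the upgrade.
function webSocketAccept(secWebSocketKey) {
  return crypto
    .createHash('sha1')
    .update(secWebSocketKey + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11')
    .digest('base64');
}
// Example with the sample key from RFC 6455's handshake:
console.log(webSocketAccept('dGhlIHNhbXBsZSBub25jZQ==')); // "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="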
Answered by Srushtika Neelakantam
A regular REST API uses HTTP as the underlying protocol for communication, which follows the request-and-response paradigm, meaning the communication involves the client requesting some data or resource from a server and the server responding to that client. However, HTTP is a stateless protocol, so every request-response cycle ends up having to repeat the header and metadata information. This incurs additional latency in the case of frequently repeated request-response cycles.
With WebSockets, although the communication still starts off with an initial HTTP handshake, the connection is then upgraded to follow the WebSockets protocol (provided both the server and the client are compliant with the protocol, as not all entities support WebSockets).
Now, with WebSockets, it is possible to establish a full-duplex and persistent connection between the client and a server. This means that, unlike a request and a response, the connection stays open for as long as the application is running (i.e. it's persistent), and since it is full-duplex, two-way simultaneous communication is possible; i.e., the server is now capable of initiating communication and 'pushing' data to the client when new data (that the client is interested in) becomes available.
The WebSockets protocol is stateful and allows you to implement the Publish-Subscribe (or Pub/Sub) messaging pattern, which is the primary concept used in real-time technologies where you are able to get new updates in the form of server push without the client having to request (refresh the page) repeatedly. Examples of such applications are Uber car location tracking, push notifications, stock market prices updating in real time, chat, multiplayer games, live online collaboration tools, etc.
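A minimal sketch of that Pub/Sub idea over a WebSocket, assuming a hypothetical server that understands simple JSON 'subscribe' messages and then pushes updates for the chosen channel (the message format is illustrative, not part of the WebSocket protocol itself):

const ws = new WebSocket('wss://example.com/pubsub');   // hypothetical endpoint
ws.onopen = () => {
  // Subscribe once; afterwards the server pushes updates without being asked.
  ws.send(JSON.stringify({ action: 'subscribe', channel: 'stock-prices' }));
};
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  console.log(`update on ${msg.channel}:`, msg.payload);
};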
You can check out a deep-dive article on WebSockets which explains the history of this protocol, how it came into being, what it's used for and how you can implement it yourself.
Here's a video from a presentation I did about WebSockets and how they are different from using regular REST APIs: Standardisation and leveraging the exponential rise in data streaming
Answered by Devy
For the TL;DR, here are my 2 cents and a simpler version of the answers to your questions:
WebSockets provides these benefits over HTTP:
- Persistent stateful connection for the duration of connection
- Low latency: near real-time communication between server/client due to no overhead of reestablishing connections for each request as HTTP requires.
- Full duplex: both server and client can send/receive simultaneously
The WebSocket and HTTP protocols have been designed to solve different problems, i.e. WebSocket was designed to improve bi-directional communication whereas HTTP was designed to be stateless, distributed using a request/response model. Other than sharing the ports for legacy reasons (firewall/proxy penetration), there isn't much common ground to combine them into one protocol.
Answered by FranXho
Why is websockets protocol better?
I don't think we can compare them side by side and ask which is better. That wouldn't be a fair comparison, simply because they are solving two different problems. Their requirements are different. It would be like comparing apples to oranges. They are different.
HTTP is a request-response protocol. The client (browser) wants something, and the server gives it. That is it. If the data the client wants is big, the server might send streaming data to avoid unwanted buffering problems. Here the main requirement or problem is how to make requests from clients and how to respond with the resources (hypertext) they request. That is where HTTP shines.
In HTTP, only the client requests; the server only responds.
WebSocket is not a request-response protocol where only the client can request. It is a socket (very similar to a TCP socket). That means once the connection is open, either side can send data until the underlying TCP connection is closed. It is just like a normal socket. The only difference from a TCP socket is that WebSocket can be used on the web. On the web, we have many restrictions on a normal socket: most firewalls will block ports other than 80 and 443, which HTTP uses, and proxies and intermediaries are problematic as well. So, to make the protocol easier to deploy on existing infrastructure, WebSocket uses an HTTP handshake to upgrade. That means when the connection is opened for the first time, the client sends an HTTP request to tell the server, "This is not an HTTP request, please upgrade to the WebSocket protocol":
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
Once the server understands the request and upgrades to the WebSocket protocol, none of the HTTP protocol applies any more.
So my answer is that neither one is better than the other. They are completely different.
Why was it implemented instead of updating http protocol?
Well, we could put everything under the name HTTP as well. But should we? If they are two different things, I would prefer two different names. So do Hickson and Michael Carter.
Answered by parity3
The other answers do not seem to touch on a key aspect here: you make no mention of needing to support a web browser as a client. Most of the limitations of plain HTTP above assume you would be working with browser/JS implementations.
The HTTP protocol is fully capable of full-duplex communication; it is legal to have a client perform a POST with chunked transfer encoding, and a server return a response with a chunked-encoding body. This would reduce the header overhead to just init time.
So if all you're looking for is full-duplex, you control both client and server, and you are not interested in the extra framing/features of WebSockets, then I would argue that HTTP is a simpler approach with lower latency/CPU (although the latency would really only differ by microseconds or less for either).
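A minimal Node.js sketch of that pattern, assuming a hypothetical server at example.com that reads the chunked request body and streams back a chunked response on the same connection:

const http = require('http');
// One chunked POST whose body we keep writing to, while reading the server's
// chunked response: full-duplex over plain HTTP, no WebSocket framing needed.
const req = http.request(
  {
    hostname: 'example.com',          // hypothetical endpoint for illustration
    path: '/duplex',
    method: 'POST',
    headers: { 'Transfer-Encoding': 'chunked' },
  },
  (res) => {
    res.setEncoding('utf8');
    res.on('data', (chunk) => console.log('from server:', chunk));
  }
);
// Keep sending chunks without closing the request body.
const timer = setInterval(() => req.write(`ping ${Date.now()}\n`), 1000);
// Stop after 10 seconds: end the request body so the exchange can finish.
setTimeout(() => { clearInterval(timer); req.end(); }, 10000);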


