HTTP GET packet sniffer in Scapy (Python)

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must keep the same license and attribute the original authors (not me). Original question: http://stackoverflow.com/questions/27551367/

Tags: python, http, scapy, sniffer

Asked by Muhammad Suleman

I am trying to write a simple sniffer in Scapy that prints only HTTP packets using the GET method. Here's the code:

#!/usr/bin/python
from scapy.all import *

def http_header(packet):
        http_packet=str(packet)
        if http_packet.find('GET'):
                print GET_print(packet)
        print packet
def GET_print(packet1):
        print "***************************************GET PACKET****************************************************"
        print packet1

        print "*****************************************************************************************************"


sniff(iface='eth0',prn=http_header)

Here is the output:

*****************************************************************************************************
None
T??Г
     )?pEa??@@???h??#/??t
                             ?}LGku???U
oTE??I(????9qi???S?????
                          XuW?F=???-?k=X:?
***************************************GET PACKET****************************************************
T??Г
     )?pE???@@???h??#/??t
                               ?LGku????
oTE??I?K??AH?*?e??>?v1#D?(mG5T?o????8??喷╭?????"?KT^?'?mB???]?????k>
                                                                                ?_x?X?????8V???w/?Z?=???N?à??\r?????)+}???l?c?9??j;???h??5?T?9H?/O??)??P
         ?Y?qf爂?%?_`??6x??5D?I3???O?
t??tpI#?????$IC??E??
                     ?G?
J??α???=?]??v????b5^|P??DK?)uq?2????w?
                    tB??????y=???n?i?r?.D6?kI?a???6iC???c'??0dPqED?4????[?[??hGh???~|Y/?>`yP  Dq??T??M????f?;???????  gY???di?_x?8|
eo?p?xW9??=???v?Ye?}?T??ɑy?^?C
-?_(?<?{????}???????r
$??J?k-?9????}??f?27??QK??`?GY?8??Sh???Y@8?E9?R??&a?/vkф??6?DF`?/9?I?d( ??-??[A
                                                                                     ??)pP??y\?j]???8?_???vf?b????I7???????+?P<_`
*****************************************************************************************************

What I am expecting is:

GET / HTTP/1.1
    Host: google.com
    User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20140722 Firefox/24.0 Iceweasel/24.7.0
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-US,en;q=0.5
    Accept-Encoding: gzip, deflate
    Cookie: PREF=ID=758a20b5fbd4eac9:U=2b2dedf6c84b001f:FF=0:TM=1412150291:LM=1415430021:S=Q-QemmrLqsSsEA9i; NID=67=mRdkPVhtImrOTLi5I1e5JM22J7g26jAcdiDEjj9C5q0H5jj0DWRX27hCM7gLJBeiowW-8omSv-1ycH595SW2InWX2n1JMMNh6b6ZrRsZ9zOCC2a-vstOQnBDSJu6K9LO
    Connection: keep-alive

What can I do to get my expected output?

Accepted answer by Ellis Percival

You need to use the packet's sprintf() function instead of printing the packet itself. You also need to split the string it returns and join it back together with newline characters, otherwise it spits everything out on one line:

#!/usr/bin/python
from scapy.all import *

def http_header(packet):
        http_packet=str(packet)
        if http_packet.find('GET'):
                return GET_print(packet)

def GET_print(packet1):
    ret = "***************************************GET PACKET****************************************************\n"
    ret += "\n".join(packet1.sprintf("{Raw:%Raw.load%}\n").split(r"\r\n"))
    ret += "*****************************************************************************************************\n"
    return ret

sniff(iface='eth0', prn=http_header, filter="tcp port 80")

I also added a filter for TCP port 80, but this could be removed if you need to.

Example output:

***************************************GET PACKET****************************************************
'GET /projects/scapy/doc/usage.html HTTP/1.1
Host: www.secdev.org
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.65 Safari/537.36
Referer: https://www.google.co.uk/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
If-None-Match: "28c84-48498d5654df67640-gzip"
If-Modified-Since: Mon, 19 Apr 2010 15:44:17 GMT

'
*****************************************************************************************************
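
A side note on why the split uses the raw string r"\r\n" (and why the output above begins and ends with a quote character): %Raw.load% expands to the field's display representation, which here is a repr-style string in which the CRLFs appear as the literal two-character escapes \r\n rather than real line breaks. A tiny illustration of that behaviour, with a made-up request:

from scapy.all import IP, TCP, Raw

pkt = IP()/TCP()/Raw(load="GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
# sprintf renders the Raw load in repr form, so the CRLFs show up as the
# escaped sequences \r\n; splitting on r"\r\n" therefore puts each header
# on its own line, which is what GET_print does above.
print pkt.sprintf("{Raw:%Raw.load%}")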

Pierre points out that you can do away with the http_header function entirely by using the lfilter argument to sniff(). I took the liberty of making the code a little more succinct at the same time:

#!/usr/bin/python
from scapy.all import *

stars = lambda n: "*" * n

def GET_print(packet):
    return "\n".join((
        stars(40) + "GET PACKET" + stars(40),
        "\n".join(packet.sprintf("{Raw:%Raw.load%}").split(r"\r\n")),
        stars(90)))

sniff(
    iface='eth0',
    prn=GET_print,
    lfilter=lambda p: "GET" in str(p),
    filter="tcp port 80")

Answer by jetole

EDIT:

Please note that Scapy-http is now DEPRECATED and is included in Scapy 2.4.3+. Use import scapy.layers.http or load_layer("http") to enable it.

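For reference, here is a minimal sketch of the same idea using the layer that ships with Scapy 2.4.3+; the HTTPRequest field names (Method, Path, Host) are those exposed by scapy.layers.http, and the interface and port are only examples:

from scapy.all import sniff
from scapy.layers.http import HTTPRequest  # built into Scapy 2.4.3+

def show_request(pkt):
    # The lfilter below guarantees an HTTPRequest layer is present here.
    req = pkt[HTTPRequest]
    # Field values are bytes in Scapy's HTTP layer (e.g. b'GET', b'/index.html').
    return "%s %s %s" % (req.Method.decode(), req.Path.decode(),
                         (req.Host or b"").decode())

sniff(iface="eth0",
      filter="tcp port 80",
      lfilter=lambda p: p.haslayer(HTTPRequest),
      prn=show_request)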

Answer:

There's a scapy http module that you can install by running pip install scapy-http. Once that is installed, you can import it with import scapy_http.http. It is a separate package from the scapy module, but it adds functionality to Scapy, so you still need to import Scapy as you usually would.

Once it is imported, change your sniff() call to:

sniff(iface="eth0",
      prn=GET_print,
      lfilter=lambda x: x.haslayer(scapy_http.http.HTTPRequest))

I removed the filter="tcp and port 80" option because the HTTP lfilter will return all HTTP request queries regardless of port (except over SSL, for the obvious reason that it cannot be sniffed under normal circumstances). You may want to keep the filter option for performance reasons.

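For example, if performance matters, a sketch that keeps both filters: the BPF filter lets the kernel discard unrelated traffic cheaply, while the lfilter still ensures that only dissected HTTP requests reach the callback (this reuses GET_print and the scapy_http import from above):

sniff(iface="eth0",
      filter="tcp and port 80",  # cheap kernel-level (BPF) pre-filter
      lfilter=lambda x: x.haslayer(scapy_http.http.HTTPRequest),
      prn=GET_print)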

Answer by jetole

I had commented on one way to improve it, but I decided to whip together a more complete solution. This one doesn't print the asterisk packet separators; instead it prints the headers as a pretty-printed dictionary, so it may or may not suit you, but you can customize it to your needs. Aside from the formatting, this seems like the most efficient approach posted on this question so far, and you can delegate to a function to add formatting and further deconstruct the dict.

#!/usr/bin/env python2

import argparse
import pprint
import sys

# Suppress scapy warning if no default route for IPv6. This needs to be done before the import from scapy.
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)


# Try to import sniff from scapy.all and show error w/ install instructions if it cannot be imported.
try:
    from scapy.all import sniff
except ImportError:
    sys.stderr.write("ERROR: You must have scapy installed.\n")
    sys.stderr.write("You can install it by running: sudo pip install -U 'scapy>=2.3,<2.4'")
    exit(1)

# Try to import scapy_http.http and show error w/ install instructions if it cannot be imported.
try:
    import scapy_http.http
except ImportError:
    sys.stderr.write("ERROR: You must have scapy-http installed.\n")
    sys.stderr.write("You can install it by running: sudo pip install -U 'scapy>=1.8'")
    exit(1)


if __name__ == "__main__":
    # Parse command line arguments and make them available.
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        description="Print HTTP Request headers (must be run as root or with capabilities to sniff).",
    )
    parser.add_argument("--interface", "-i", help="Which interface to sniff on.", default="eth0")
    parser.add_argument("--filter", "-f", help='BPF formatted packet filter.', default="tcp and port 80")
    parser.add_argument("--count", "-c", help="Number of packets to capture. 0 is unlimited.", type=int, default=0)
    args = parser.parse_args()

    # Sniff for the data and print it using lambda instead of writing a function to pretty print.
    # There is no reason not to use a function you write for this, but I just wanted to keep the example simple while
    # demoing how to match only HTTP requests and to access the HTTP headers as pre-created dicts instead of
    # parsing the data as a string.
    sniff(iface=args.interface,
          promisc=False,
          filter=args.filter,
          lfilter=lambda x: x.haslayer(scapy_http.http.HTTPRequest),
          prn=lambda pkt: pprint.pprint(pkt.getlayer(scapy_http.http.HTTPRequest).fields, indent=4),
          count=args.count
    )
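
Assuming the script above is saved as, say, http_headers.py (the file name is just an example), it needs to run with sniffing privileges; a typical invocation matching the argparse options defined above would be:

sudo ./http_headers.py -i eth0 -f "tcp and port 80" -c 10

This captures ten HTTP requests on eth0 and pretty-prints each request's header fields as a dict.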