Shell script with Wget - If else nested inside for loop

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA terms and attribute it to the original authors (not the translator). Original question: http://stackoverflow.com/questions/13042156/

Date: 2020-09-18 03:38:00  Source: igfitidea


Tags: linux, bash, shell, wget

Asked by el-noobador

I'm trying to make a shell script that reads a list of download URLs to find if they're still active. I'm not sure what's wrong with my current script, (I'm new to this) and any pointers would be a huge help!


user@pc:~/test# cat sites.list


http://www.google.com/images/srpr/logo3w.png
http://www.google.com/doesnt.exist
notasite

Script:


#!/bin/bash
for i in `cat sites.list`
do
wget --spider $i -b
if grep --quiet "200 OK" wget-log; then
echo $i >> ok.txt
else
echo $i >> notok.txt
fi
rm wget-log
done

As is, the script outputs everything to notok.txt - (the first google site should go to ok.txt). But if I run:


wget --spider http://www.google.com/images/srpr/logo3w.png -b

And then do:


grep "200 OK" wget-log

It greps the string without any problems. What noob mistake did I make with the syntax? Thanks m8s!


Answered by German Garcia

The -b option is sending wget to the background, so you're doing the grep before wget has finished.


Try without the -b option:


if wget --spider $i 2>&1 | grep --quiet "200 OK" ; then
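The race that -b introduces is easy to see without any network access at all. A self-contained illustration, using sleep as a stand-in for wget's download time (the log file and timings here are made up for the demo, no real wget is involved):

```shell
#!/bin/bash
# With -b, wget returns immediately and the log is inspected before the
# download has finished. Simulate that with a delayed background write.
log=$(mktemp)

# Background "download" that takes half a second to write its result,
# the way `wget -b` writes wget-log after the script has moved on.
( sleep 0.5; echo "HTTP request sent... 200 OK" > "$log" ) &

if grep -q "200 OK" "$log"; then   # runs immediately -- log still empty
  early="found"
else
  early="missing"
fi

wait                               # let the background job finish
if grep -q "200 OK" "$log"; then   # now the marker is there
  late="found"
else
  late="missing"
fi

echo "before wait: $early, after wait: $late"
rm -f "$log"
```

Running it prints `before wait: missing, after wait: found`, which is exactly why the original script sent everything to notok.txt.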

Answered by ghoti

There are a few issues with what you're doing.


  • Your for i in will have problems with lines that contain whitespace. Better to use while read to read individual lines of the file.
  • You aren't quoting your variables. What if a line in the file (or a word in a line) starts with a hyphen? Then wget will interpret it as an option. This is a potential security risk as well as a bug.
  • Creating and removing files isn't really necessary. If all you're doing is checking whether a URL is reachable, you can do that without temp files and the extra code to remove them.
  • wget isn't necessarily the best tool for this. I'd advise using curl instead.
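The whitespace problem in the first bullet is easy to reproduce. A minimal sketch (the file name urls.tmp is made up for this demo):

```shell
#!/bin/bash
# Show why `for i in $(cat file)` mangles lines containing spaces,
# while `while read` handles one whole line per iteration.
printf 'http://example.com/a file.png\n' > urls.tmp

for_count=0
for i in $(cat urls.tmp); do        # word-splits on the space: two "URLs"
  for_count=$((for_count + 1))
done

read_count=0
while IFS= read -r line; do         # one iteration per line
  read_count=$((read_count + 1))
done < urls.tmp

echo "for: $for_count, while read: $read_count"   # → for: 2, while read: 1
rm -f urls.tmp
```

The single line with a space becomes two loop iterations under for, but stays one line under while read.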

So here's a better way to handle this...


#!/bin/bash

sitelist="sites.list"
curl="/usr/bin/curl"

# Some errors, for good measure...
if [[ ! -f "$sitelist" ]]; then
  echo "ERROR: Sitelist is missing." >&2
  exit 1
elif [[ ! -s "$sitelist" ]]; then
  echo "ERROR: Sitelist is empty." >&2
  exit 1
elif [[ ! -x "$curl" ]]; then
  echo "ERROR: I can't work under these conditions." >&2
  exit 1
fi

# Allow extended patterns like @(...) and ?(...) (for case..esac below)
shopt -s extglob

while read -r url; do

  # remove comments
  url=${url%%#*}

  # skip empty lines
  if [[ -z "$url" ]]; then
    continue
  fi

  # Handle just ftp, http and https.
  # We could do full URL pattern matching, but meh.
  case "$url" in
    @(f|ht)tp?(s)://*)
      # Get just the numeric HTTP response code
      http_code=$($curl -sL -w '%{http_code}' "$url" -o /dev/null)
      case "$http_code" in
        200|226)
          # You'll get a 226 in ${http_code} from a valid FTP URL.
          # If all you really care about is that the response is in the 200's,
          # you could match against "2??" instead.
          echo "$url" >> ok.txt
          ;;
        *)
          # You might want different handling for redirects (301/302).
          echo "$url" >> notok.txt
          ;;
      esac
      ;;
    *)
      # If we're here, we didn't get a URL we could read.
      echo "WARNING: invalid url: $url" >&2
      ;;
  esac

done < "$sitelist"

This is untested. For educational purposes only. May contain nuts.

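The URL-scheme matching in the case statement can at least be exercised on its own, offline. A small sketch (the classify helper is a name made up for this demo; note that extglob, not globstar, is the option that enables @(...) and ?(...) patterns):

```shell
#!/bin/bash
# Extended patterns in case statements require extglob.
shopt -s extglob

# Hypothetical helper wrapping the same pattern used in the script above.
classify() {
  case "$1" in
    @(f|ht)tp?(s)://*) echo "url" ;;
    *)                 echo "invalid" ;;
  esac
}

classify 'https://example.com/x'   # → url
classify 'ftp://example.com/x'     # → url
classify 'notasite'                # → invalid
```

The @(f|ht) alternation accepts ftp, http, and (with the optional s) ftps and https, while anything without a recognized scheme falls through to the warning branch.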