Inform right-hand side of pipeline of left-side failure?

Notice: this page is based on a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/6565694/



Tags: bash, pipe, pipeline

Asked by Bittrance

I've grown fond of using a generator-like pattern between functions in my shell scripts. Something like this:


parse_commands /da/cmd/file | process_commands

However, the basic problem with this pattern is that if parse_command encounters an error, the only way I have found to notify process_command that it failed is by explicitly telling it (e.g. echo "FILE_NOT_FOUND"). This means that every potentially faulting operation in parse_command would have to be fenced.

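For concreteness, a minimal sketch of that kind of in-band fencing (the function bodies are hypothetical stand-ins, not the asker's real code):

parse_commands() {
    # every potentially failing step has to be fenced by hand like this
    [[ -r "$1" ]] || { echo "FILE_NOT_FOUND"; return 1; }
    cat "$1"
}

process_commands() {
    while IFS= read -r line; do
        if [[ "$line" == "FILE_NOT_FOUND" ]]; then
            echo "upstream failure reported in-band" >&2
            return 1
        fi
        echo "processing: $line"
    done
}

parse_commands /da/cmd/file | process_commands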

Is there no way process_command can detect that the left side exited with a non-zero exit code?


Accepted answer by David W.

Does the pipe process continue even if the first process has ended, or is the issue that you have no way of knowing that the first process failed?


If it's the latter, you can look at the PIPESTATUS variable (which is actually a BASH array). That will give you the exit code of the first command:


parse_commands /da/cmd/file | process_commands
temp=("${PIPESTATUS[@]}")   # copy immediately: PIPESTATUS is reset by the next command
if [ ${temp[0]} -ne 0 ]
then
    echo 'parse_commands failed'
elif [ ${temp[1]} -ne 0 ]
then
    echo 'parse_commands worked, but process_commands failed'
fi

Otherwise, you'll have to use co-processes.


Answer by lethalman

Use set -o pipefail at the top of your bash script so that the pipeline as a whole returns a non-zero exit status when the left side of the pipe fails (exit status != 0), rather than reporting only the last command's status.

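A quick illustration, with false standing in for a failing left-hand side (the right-hand command still runs; pipefail only changes the status the pipeline reports):

set -o pipefail
false | cat     # cat still runs and exits 0...
echo $?         # ...but the pipeline as a whole now reports 1 instead of 0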

Answer by jjmontes

Unlike the and operator (&&), the pipe operator (|) works by spawning both processes simultaneously, so the first process can pipe its output to the second process without the need of buffering the intermediate data. This allows for processing of large amounts of data with little memory or disk usage.


Therefore, the exit status of the first process wouldn't be available to the second one until it's finished.

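A small demonstration of this: both sides start immediately, and the left side's status only becomes visible afterwards, via PIPESTATUS:

{ sleep 2; echo "left side finishing"; exit 3; } | { echo "right side already running"; cat; }
echo "last status: $?, PIPESTATUS: ${PIPESTATUS[*]}"    # prints: last status: 0, PIPESTATUS: 3 0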

Answer by Lynch

You could try to work around this using a fifo:


mkfifo /tmp/a
cat /tmp/a | process_commands &
processor=$!   # pid of process_commands, so it can be killed if the writer fails

parse_cmd /da/cmd/file > /tmp/a || (echo "error"; kill "$processor")

Answer by Nathaniel M. Beaver

I don't have enough reputation to comment, but the accepted answer was missing a closing } on line 5.


After fixing this, the code will throw a -ne: unary operator expected error, which points to a problem: PIPESTATUS is overwritten by the conditional following the if command, so the return value of process_commands will never be checked!


This is because [ ${PIPESTATUS[0]} -ne 0 ] is equivalent to test ${PIPESTATUS[0]} -ne 0, which changes $PIPESTATUS just like any other command. For example:


return0 () { return 0;}
return3 () { return 3;}

return0 | return3
echo "PIPESTATUS: ${PIPESTATUS[@]}"

This returns PIPESTATUS: 0 3 as expected. But what if we introduce conditionals?


return0 | return3
if [ ${PIPESTATUS[0]} -ne 0 ]; then
    echo "1st command error: ${PIPESTATUS[0]}"
elif [ ${PIPESTATUS[1]} -ne 0 ]; then
    echo "2nd command error: ${PIPESTATUS[1]}"
else
    echo "PIPESTATUS: ${PIPESTATUS[@]}"
    echo "Both return codes = 0."
fi

We get the [: -ne: unary operator expected error, and this:


PIPESTATUS: 2
Both return codes = 0.

To fix this, $PIPESTATUS should be stored in a different array variable, like so:


return0 | return3
TEMP=("${PIPESTATUS[@]}")
echo "TEMP: ${TEMP[@]}"
if [ ${TEMP[0]} -ne 0 ]; then
    echo "1st command error: ${TEMP[0]}"
elif [ ${TEMP[1]} -ne 0 ]; then
    echo "2nd command error: ${TEMP[1]}"
else
    echo "TEMP: ${TEMP[@]}"
    echo "All return codes = 0."
fi

Which prints:


TEMP: 0 3
2nd command error: 3

as intended.


Edit: I fixed the accepted answer, but I'm leaving this explanation for posterity.


Answer by tilo

You may run parse_commands /da/cmd/file in an explicit subshell and echo the exit status of this subshell through the pipe to process_commands, which is also run in an explicit subshell to process the piped data contained in /dev/stdin.


Far from elegant, but it seems to get the job done :)


A simple example:


(
    # Left-hand side: emit the data, then the command's exit status as the last line.
    ( ls -l ~/.bashrcxyz; echo $? ) |
    # Right-hand side: check that trailing status before using the data.
    (
        piped="$(</dev/stdin)"
        if [[ "$(tail -n 1 <<<"$piped")" -eq 0 ]]; then
            printf '%s\n' "$piped" | sed '$d'   # strip the status line, pass the data on
        else
            exit 77                             # left-hand side failed
        fi
    )
    echo $?
)

Answer by ObiWahn

What about:


parse_commands /da/cmd/file > >(process_commands)
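
With this form the parent shell runs parse_commands itself, so its exit status lands in $? directly; a minimal sketch of how that might be checked:

if ! parse_commands /da/cmd/file > >(process_commands); then
    echo "parse_commands failed" >&2
    exit 1
fi
# Caveat: process_commands runs asynchronously here, so the script does not
# automatically wait for it to finish or see its exit status.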

Answer by andrewdski

There is a way to do this in bash 4.0, which adds the coproc builtin from ash. This coprocess facility is borrowed from ksh, which uses a different syntax. The only shell I have access to on my system that supports coprocesses is ksh. Here is a solution written with ksh:


parse_commands  /da/cmd/file |&
parser=$!

process_commands <&p &
processor=$!

if wait $parser
then
    wait $processor
    exit $?
else
    kill $processor
    exit 1
fi

The idea is to start parse_commands in the background with pipes connecting it to the main shell. The pid is saved in parser. Then process_commands is started with the output of parse_commands as its input. (That is what <&p does.) This is also put in the background with its pid saved in processor.


With both of those in the background connected by a pipe, our main shell is free to wait for the parser to terminate. If it terminates without an error, we wait for the processor to finish and exit with its return code. If it terminates with an error, we kill the processor and exit with non-zero status.


It should be fairly straightforward to translate this to use the bash 4.0 / ash coproc builtin, but I don't have good documentation, nor a way to test that.

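An untested sketch of what that translation might look like with bash's coproc builtin (bash >= 4.0), assuming the coprocess's output fd has to be duplicated before a background job can read it:

coproc PARSER { parse_commands /da/cmd/file; }
parser=$PARSER_PID

# Duplicate the coprocess's read end; the coproc fds themselves
# are not available inside subshells such as background jobs.
exec 3<&"${PARSER[0]}"
process_commands <&3 &
processor=$!

if wait "$parser"
then
    wait "$processor"
    exit $?
else
    kill "$processor"
    exit 1
fi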

Answer by borrible

If you have command1 && command2 then command2 will only be executed when the first command is successful - otherwise boolean short-circuiting kicks in. One way of using this would be to build a first command (your parse_commands ...) that dumps to a temporary file, and then have the second command read from that file.


Edit: By judicious use of ; you can tidy up the temporary file, e.g.


(command1 && command2) ; rm temporaryfile
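
A short sketch of that temp-file variant, reusing the question's hypothetical function names and keeping the combined exit status around:

tmpfile=$(mktemp) || exit 1
(parse_commands /da/cmd/file > "$tmpfile" && process_commands < "$tmpfile")
status=$?               # non-zero if either stage failed
rm -f "$tmpfile"
exit "$status"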