Starting a process over ssh using bash and then killing it on SIGINT

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/3235180/

Starting a process over ssh using bash and then killing it on sigint

Tags: bash, ssh, signals

Asked by getekha

I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.

Here is a short example of what I'm trying to do:

#!/bin/bash
trap "aborted" SIGINT SIGTERM
aborted() {
    kill -SIGTERM $bash2_pid
    exit
}

ssh -t remote_machine /foo/bar.sh &
bash2_pid=$!
wait

However, the bar.sh process is still running on the remote machine. If I run the same commands in a terminal window, it shuts down the process on the remote host.

Is there an easy way to make this happen when I run the bash script? Or do I need to make it log on to the remote machine, find the right process and kill it that way?

edit: Seems like I have to go with option B, killing the remote script through another ssh connection.

So now I want to know: how do I get the remote PID? I've tried something along the lines of:

remote_pid=$(ssh remote_machine '{ /foo/bar.sh & } ; echo $!')

This doesn't work since it blocks.

How do I wait for a variable to print and then "release" a subprocess?
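One way around the blocking, added here as an illustrative sketch rather than part of the original post: detach the remote job's output so that ssh can return as soon as the PID has been printed; the command above blocks because the background job keeps the session's stdout open.

# Sketch only: remote_machine and /foo/bar.sh are the names from the question.
remote_pid=$(ssh remote_machine 'nohup /foo/bar.sh >/dev/null 2>&1 & echo $!')

# Later, e.g. from the SIGINT trap, kill it over a second connection:
ssh remote_machine "kill $remote_pid"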

Answered by lhunath

It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.

When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes. You can shut down the link, but not the remote processes.

That leaves you with one option: use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is by using blocking I/O. Make the remote read input from ssh, and when you want the process to shut down, send it some data so that the remote's read operation unblocks and it can proceed with the cleanup:

command & read; kill $!

This is what we would want to run on the remote. We invoke the command we want to run remotely, read a line of text (which blocks until we receive one), and when we're done, signal the command to terminate.

To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, Bash does not give you a lot of good options here. At least, not if you want to be compatible with bash < 4.0.

With bash 4 we can use co-processes:

coproc ssh user@host 'command & read; kill $!'
trap 'echo >&"${COPROC[1]}"' EXIT
...

Now, when the local script exits (don't trap on INT, TERM, etc.; just EXIT), it sends a newline to the file in the second element of the COPROC array. That file is a pipe which is connected to ssh's stdin, effectively routing our line to ssh. The remote command reads the line, ends the read and kills the command.
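Putting this together with the script from the question, a minimal sketch (assuming bash >= 4; remote_machine and /foo/bar.sh are the question's names, not a tested setup) might look like this:

#!/bin/bash
# Sketch only: combines the question's script with the coproc idea above.
coproc ssh remote_machine '/foo/bar.sh & read; kill $!'

# On exit (including Ctrl-C) send a newline down the pipe; the remote read
# unblocks and bar.sh gets killed.
trap 'echo >&"${COPROC[1]}"' EXIT

wait "$COPROC_PID"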

Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:

mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT

This should work in pretty much any bash version.
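One caveat worth noting (my addition, not part of the original answer): opening a FIFO for reading blocks until a writer appears, so the ssh above may not actually start until something is echoed into the pipe (the behaviour pablolo describes in a later answer below). Holding the FIFO open read-write from the script should avoid that:

# Untested variant: fd 3 keeps a writer attached to the FIFO, so the
# redirection in the ssh command does not block waiting for one.
mkfifo /tmp/mysshcommand
exec 3<>/tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo >&3; exec 3>&-; rm -f /tmp/mysshcommand' EXIT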

Answered by Random Stranger

Try this:

ssh -tt host command </dev/null &

When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
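Applied to the script from the question, this might look as follows (a sketch; -tt forces pty allocation even though ssh's stdin is redirected from /dev/null):

#!/bin/bash
# Sketch: killing the local ssh tears down the remote pty, so bar.sh
# receives SIGHUP on the remote machine.
trap 'kill $ssh_pid; exit' INT TERM

ssh -tt remote_machine /foo/bar.sh </dev/null &
ssh_pid=$!
wait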

Answered by Eric Woodruff

Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input, I came up with this script

run.sh:

#!/bin/bash
log="log"
# Run the given command in the background.
eval "$@" \&
PID=$!
echo "running" "$@" "in PID $PID" > "$log"
# Watcher: reads the inherited stdin (via fd 3); when the ssh connection
# closes, the read ends and the background command gets killed.
{ (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> "$log") & } 3<&0
trap 'echo EXIT >> "$log"' EXIT
wait $PID

The difference is that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.

 $ ssh localhost ./run.sh true; echo $?; cat log
 0
 running true in PID 19247
 EXIT

 $ ssh localhost ./run.sh false; echo $?; cat log
 1
 running false in PID 19298
 EXIT

 $ ssh localhost ./run.sh sleep 99; echo $?; cat log
 ^C130
 running sleep 99 in PID 20499
 killed
 EXIT

 $ ssh localhost ./run.sh sleep 2; echo $?; cat log
 0
 running sleep 2 in PID 20556
 EXIT

For a one-liner:

 ssh localhost "sleep 99 & PID=$!; { (cat <&3 3<&- >/dev/null; kill $PID) & } 3<&0; wait $PID"

For convenience:

 HUP_KILL="& PID=$!; { (cat <&3 3<&- >/dev/null; kill $PID) & } 3<&0; wait $PID"
 ssh localhost "sleep 99 $HUP_KILL"

Note: kill 0 may be preferable to kill $PID depending on the behavior needed with regard to spawned child processes. You can also use kill -HUP or kill -INT if you desire.
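For example, a variant of the one-liner (my adaptation, not from the original answer) that HUPs the whole remote process group instead of just the immediate child:

 ssh localhost 'sleep 99 & PID=$!; { (cat <&3 3<&- >/dev/null; kill -HUP 0) & } 3<&0; wait $PID'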

Update: A secondary job control channel is better than reading from stdin.

ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2

Set job control mode and monitor the job control channel:

set -m                                          # enable job control so %1 %2 %3 refer to the jobs below
trap "kill %1 %2 %3" EXIT
(sleep infinity | netcat -l 127.0.0.1 9001) &   # job 1: listener end of the control channel
(netcat -d 127.0.0.1 9002; kill -INT $$) &      # job 2: control connection; when it drops, interrupt this script
"$@" &                                          # job 3: the actual command
wait %3

Finally, here's another approach and a reference to a bug filed on openssh: https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14

This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that fails, but you also want a stdin on the client side that blocks until the server-side process is done and will not leave lingering processes like <(sleep infinity) might.

ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1

It doesn't actually seem to redirect stdout anywhere but it does function as a blocking input and avoids capturing keystrokes.

Answered by pablolo

The solution for bash 3.2:

mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT

doesn't work. The ssh command is not on the ps list on the "client" machine. Only after I echo something into the pipe will it appear in the process list of the client machine. The process that appears on the "server" machine would just be the command itself, not the read/kill part.

Writing again into the pipe does not terminate the process.

So, summarizing: I need to write into the pipe once just for the command to start up, and if I write again, it does not kill the remote command as it is supposed to.

Answered by MattyV

You may want to consider mounting the remote file system and running the script from the master box. For instance, if your kernel is compiled with fuse (you can check with the following):

/sbin/lsmod | grep -i fuse

You can then mount the remote file system with the following command:

sshfs user@remote_system: mount_point

Now just run your script on the file located in mount_point.
