How to run multiple Python scripts and an executable file using Docker?
Original URL: http://stackoverflow.com/questions/53920742/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow
Asked by Benyamin Jafari
I want to create a container that holds two Python packages as well as a package consisting of an executable file.
Here's my main project (dockerized_project) tree:
dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│   ├── config
│   │   └── config.json
│   ├── config.json
│   └── gowebapp
├── pythonic_project1
│   ├── __main__.py
│   ├── requirements.txt
│   ├── start.sh
│   └── utility
│       └── utility.py
└── pythonic_project2
    ├── collect
    │   └── collector.py
    ├── __main__.py
    ├── requirements.txt
    └── start.sh
Dockerfile content:
FROM ubuntu:18.04
RUN apt update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
libsnmp-dev snmp-mibs-downloader
RUN pip3 install --upgrade pip
RUN mkdir /app
WORKDIR /app
COPY . /app
WORKDIR /app/snmp_collector
RUN pip3 install -r requirements.txt
WORKDIR /app/proto_conversion
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project1
CMD python3 __main__.py
WORKDIR /app/pythonic_project2
CMD python3 __main__.py
WORKDIR /app/exec_project
CMD ["./gowebapp"]
docker-compose.yml content:
version: '3'

services:
  proto_conversion:
    build: .
    image: pc:2.0.0
    container_name: proto_conversion
    # command:
    #   - "bash snmp_collector/start.sh"
    #   - "bash proto_conversion/start.sh"
    restart: unless-stopped
    ports:
      - 8008:8008
    tty: true
Problem:
When I run this project with docker-compose up --build, only the last CMD command runs. Hence, I think the previous CMD commands are killed in the Dockerfile, because when I remove the last two CMD instructions, the first CMD works well.
Is there any approach to run multiple Python scripts and an executable file in the background?
I've also tried using bash files, without any success either.
Answered by Farzad Vertigo
As mentioned in the documentation, there can be only one CMD instruction in a Dockerfile; if there is more than one, the last one overrides the others and takes effect. A key point of using Docker is to isolate your programs, so at first glance you might want to move them into separate containers and have them talk to each other through a shared volume or a Docker network. But if you really need them to run in the same container, you can include them in a bash script and replace the last CMD with CMD run.sh, which will run them alongside each other:
#!/bin/bash
# start the first script in the background, then hand the process over to the second
python3 /path/to/script1.py &
exec python3 /path/to/script2.py
Add COPY run.sh to the Dockerfile and use RUN chmod a+x run.sh to make it executable. The CMD should be CMD ["./run.sh"].
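Putting that together, the tail of the Dockerfile might look like this (a minimal sketch, assuming run.sh sits next to the Dockerfile and launches the two scripts shown above):

COPY run.sh /app/run.sh
RUN chmod a+x /app/run.sh
WORKDIR /app
CMD ["./run.sh"]

Note that the container then lives only as long as the foreground script: if the background script dies, nothing will notice, a caveat the last answer below picks up.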
Answered by frankegoesdown
Try it via an entrypoint.sh. In the Dockerfile:
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh:
#!/bin/bash
set -e
# run one service in the background...
python3 not__main__.py &
# ...and keep the other in the foreground so the container stays alive
exec python3 __main__.py
The & symbol means that the service runs in the background, like a daemon.
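For this to work, the script also has to be copied into the image and made executable before the ENTRYPOINT takes effect; a minimal sketch of the relevant Dockerfile lines (the destination path matches the ENTRYPOINT above):

COPY docker_entrypoint.sh /docker_entrypoint.sh
RUN chmod +x /docker_entrypoint.sh
ENTRYPOINT ["/docker_entrypoint.sh"]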
Answered by David Maze
Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from them.
Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:
# execproject/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]
Then, in your docker-compose.yml file, have three separate stanzas to launch the containers:
version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./execproject
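With this layout, Compose builds and manages each image and container separately. For example (standard docker-compose commands, run against the service names above):

docker-compose up --build -d
docker-compose logs -f pythonic_project1
docker-compose restart execproject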
If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:
services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: python3 __main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: python3 __main__.py
  execproject:
    build: .
    working_dir: /app/exec_project
    command: ./gowebapp
There are several good reasons to structure your project with multiple containers and images:
- If you roll your own shell script and use background processes (as other answers do), it just won't notice if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers (see the sketch after this list).
- If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
- If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes), the different components can run on different hosts and require a smaller block of CPU/memory resources on a single node.
- If you ever use a more complex container orchestrator, you can individually scale up the components that are using more CPU.
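For example, the restart behavior from the first point is a one-line addition per service; restart: unless-stopped is borrowed from the asker's own compose file:

services:
  pythonic_project1:
    build: ./pythonic_project1
    restart: unless-stopped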