Node 和 docker - 如何处理 babel 或 typescript 构建?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/37406616/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
Node and docker - how to handle babel or typescript build?
提问 by Jørgen Tvedt
I have a node application that I want to host in a Docker container, which should be straight forward, as seen in this article:
我有一个节点应用程序,我想在 Docker 容器中托管它,这应该很简单,如本文所示:
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
In my project, however, the sources cannot be run directly; they must be compiled from ES6 and/or Typescript. I use gulp to build with babel, browserify and tsify - with different setups for browser and server.
然而，在我的项目中，源代码不能直接运行，必须先从 ES6 和/或 Typescript 编译。我使用 gulp 配合 babel、browserify 和 tsify 进行构建 - 浏览器和服务器使用不同的配置。
What would be the best workflow for building and automating docker images in this case? Are there any resources on the web that describe such a workflow? Should the Docker image do the building after npm install, or should I create a shell script to do all this and simply have the Dockerfile pack it all together?
在这种情况下，构建和自动化 docker 镜像的最佳工作流程是什么？网络上是否有描述这种工作流程的资源？Docker 镜像应该在 npm install 之后自行完成构建，还是我应该创建一个 shell 脚本来完成所有这些，然后简单地让 Dockerfile 将它们打包在一起？
If the Dockerfile should do the build - the image would need to contain all the dev-dependencies, which is not ideal?
如果由 Dockerfile 来进行构建 - 镜像将需要包含所有开发依赖项，这并不理想？
Note: I have been able to set up a docker container, and run it - but this required all files to be installed and built beforehand.
注意:我已经能够设置一个 docker 容器并运行它 - 但这需要事先安装和构建所有文件。
采纳答案 by gerichhome
One possible solution is to wrap your build procedure in a special docker image. It is often referred to as a Builder image. It should contain all your build dependencies: nodejs, npm, gulp, babel, tsc, etc. It encapsulates your whole build process, removing the need to install these tools on the host.
一种可能的解决方案是将您的构建过程包装在一个特殊的 docker 镜像中，它通常被称为 Builder image（构建器镜像）。它应该包含您所有的构建依赖项：nodejs、npm、gulp、babel、tsc 等。它封装了您的整个构建过程，无需在主机上安装这些工具。
First you run the builder image, mounting the source code directory as a volume. The same or a separate volume can be used as the output directory. The builder image takes your code and runs all build commands.
首先运行构建器镜像，将源代码目录作为卷挂载进去。可以使用相同或单独的卷作为输出目录。构建器镜像获取您的代码并运行所有构建命令。
As a second step you take the built code and pack it into your production docker image, as you do now.
第二步，您像现在一样将构建好的代码打包到生产 docker 镜像中。
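A rough command-level sketch of that two-step flow follows; the image name, mount paths and build command here are illustrative placeholders, not something prescribed by this answer:

# step 1: run the builder image with the sources mounted as a volume; build output lands in the mounted directory
docker run --rm -v "$(pwd)":/src -w /src my-builder-image gulp build
# step 2: pack the already-built code into the production image with your normal Dockerfile
docker build -t my-app .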
Here is an example of docker builder image for TypeScript: https://hub.docker.com/r/sandrokeil/typescript/
以下是 TypeScript 的 docker builder 镜像示例：https://hub.docker.com/r/sandrokeil/typescript/
It is ok to have the same docker builder for several projects, as it is typically designed to be a general-purpose wrapper around some common tools. But it is also fine to build your own that describes a more complicated procedure.
可以为多个项目使用同一个 docker builder，因为它通常被设计为围绕一些常用工具的通用包装器。但也完全可以构建你自己的构建器镜像，来描述更复杂的构建过程。
The good thing about a builder image is that your host environment remains unpolluted and you are free to try newer versions of the compiler, different tools, a changed order, or tasks run in parallel just by modifying the Dockerfile of your builder image. And at any time you can roll back your experiments with the build procedure.
构建器镜像的好处是您的主机环境保持干净，您只需修改构建器镜像的 Dockerfile，就可以自由地尝试更新版本的编译器、不同的工具、调整执行顺序或并行执行任务。而且您随时可以回滚对构建过程的实验性修改。
回答 by Greg
The modern recommendation for this sort of thing (as of Docker 17.05) is to use a multi-stage build. This way you can use all your dev/build dependencies in the one Dockerfile but have the end result optimised and free of unnecessary code.
对这类事情的现代建议(从 Docker 17.05 开始)是使用多阶段构建。通过这种方式,您可以在一个 Dockerfile 中使用所有开发/构建依赖项,但最终结果得到优化并且没有不必要的代码。
I'm not so familiar with typescript, but here's an example implementation using yarn and babel. Using this Dockerfile, we can build a development image (with `docker build --target development .`) for running nodemon, tests etc. locally; but with a straight `docker build .` we get a lean, optimised production image, which runs the app with pm2.
我对 typescript 不太熟悉，但这里有一个使用 yarn 和 babel 的示例实现。使用这个 Dockerfile，我们可以构建一个开发镜像（通过 `docker build --target development .`），用于在本地运行 nodemon、测试等；而直接运行 `docker build .` 则会得到一个精简、优化的生产镜像，它使用 pm2 运行应用程序。
# common base image for development and production
FROM node:10.11.0-alpine AS base
WORKDIR /app
# dev image contains everything needed for testing, development and building
FROM base AS development
COPY package.json yarn.lock ./
# first set aside prod dependencies so we can copy in to the prod image
RUN yarn install --pure-lockfile --production
RUN cp -R node_modules /tmp/node_modules
# install all dependencies and add source code
RUN yarn install --pure-lockfile
COPY . .
# builder runs unit tests and linter, then builds production code
FROM development as builder
RUN yarn lint
RUN yarn test:unit --colors
RUN yarn babel ./src --out-dir ./dist --copy-files
# release includes bare minimum required to run the app, copied from builder
FROM base AS release
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
CMD ["yarn", "pm2-runtime", "dist/index.js"]
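If the project is compiled with tsc rather than babel, the builder stage above could be swapped for something like the sketch below. This is only an assumption-laden illustration: it presumes a typescript devDependency and a tsconfig.json whose outDir is ./dist, and leaves the base, development and release stages unchanged.

# hypothetical tsc-based builder stage (assumes "typescript" in devDependencies
# and a tsconfig.json with "outDir": "./dist")
FROM development AS builder
RUN yarn lint
RUN yarn test:unit --colors
RUN yarn tsc -p tsconfig.json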
回答 by Lukas Hechenberger
I personally prefer to just remove dev dependencies after running babel during build:
我个人更喜欢在构建期间运行 babel 后删除开发依赖项:
FROM node:7
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Copy app source
COPY src /usr/src/app/src
# Compile app sources
RUN npm run compile
# Remove dev dependencies
RUN npm prune --production
# Expose port and CMD
EXPOSE 8080
CMD [ "npm", "start" ]
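For reference, building and running this image could look like the following; the image tag is an arbitrary placeholder, and the port matches the EXPOSE 8080 line above:

# build the image; npm prune --production removes dev dependencies from the final container filesystem
docker build -t my-node-app .
# run it, publishing the exposed port
docker run -p 8080:8080 my-node-app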
回答 by Derese Getachew
Follow these steps:
跟着这些步骤:
Step 1: make sure you have your babel dependencies inside of dependencies, not devDependencies, in package.json. Also add a deploy script that references babel from the node_modules folder; you will be calling this script from within docker. This is what my package.json file looks like:
第 1 步：确保你的 babel 依赖位于 package.json 的 dependencies 中，而不是 devDependencies 中。同时添加一个从 node_modules 文件夹引用 babel 的 deploy 脚本，你将在 docker 内部调用这个脚本。这是我的 package.json 文件的样子：
{
  "name": "tmeasy_api",
  "version": "1.0.0",
  "description": "Trade made easy Application",
  "main": "build/index.js",
  "scripts": {
    "build": "babel -w src/ -d build/ -s inline",
    "deploy": "node_modules/babel-cli/bin/babel.js src/ -d build/"
  },
  "devDependencies": {
    "nodemon": "^1.9.2"
  },
  "dependencies": {
    "babel-cli": "^6.10.1",
    "babel-polyfill": "^6.9.1",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-stage-0": "^6.5.0",
    "babel-preset-stage-3": "^6.22.0"
  }
}
build is for your development purposes on your local machine, and deploy is to be called from within your Dockerfile.
build 用于您在本地机器上的开发目的,而 deploy 将从您的 dockerfile 中调用。
Step 2: since we want to do the babel transformation ourselves, make sure to add a .dockerignore file that excludes the build folder you are using during development. This is what my .dockerignore file looks like:
第 2 步：由于我们要自己进行 babel 转换，请确保添加一个 .dockerignore 文件，忽略你在开发期间使用的 build 文件夹。这是我的 .dockerignore 文件的样子：
build
node_modules
Step 3: construct your Dockerfile. Below is a sample of my Dockerfile:
第 3 步：构建你的 Dockerfile。下面是我的 Dockerfile 示例：
FROM node:6
MAINTAINER stackoverflow
ENV NODE_ENV=production
ENV PORT=3000
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /var/www && cp -a /tmp/node_modules /var/www
# copy current working directory into docker; but it first checks for
# .dockerignore so build will not be included.
COPY . /var/www/
WORKDIR /var/www/
# remove any previous builds and create a new build folder and then
# call our node script deploy
RUN rm -rf build
RUN mkdir build
RUN chmod 777 /var/www/build
RUN npm run deploy
VOLUME /var/www/uploads
EXPOSE $PORT
ENTRYPOINT ["node","build/index.js"]
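One way to build and run the resulting image (the tag below is arbitrary; the port matches the PORT environment variable set in the Dockerfile above):

# changes to package.json invalidate the cached npm install layer; source-only changes reuse the cache
docker build -t tmeasy_api .
docker run -p 3000:3000 tmeasy_api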
回答 by Augie Gardner
I just released a great seed app for Typescript and Node.js using Docker.
我刚刚使用 Docker 为 Typescript 和 Node.js 发布了一个很棒的种子应用程序。
You can find it on GitHub.
您可以在GitHub 上找到它。
The project explains all of the commands that the Dockerfile uses and it combines `tsc` with `gulp` for some added benefits.
该项目解释了 Dockerfile 使用的所有命令，并将 `tsc` 与 `gulp` 结合以获得一些额外的好处。
If you don't want to check out the repo, here's the details:
如果你不想查看 repo,这里是详细信息:
Dockerfile
Dockerfile
FROM node:8
ENV USER=app
ENV SUBDIR=appDir
RUN useradd --user-group --create-home --shell /bin/false $USER &&\
npm install --global tsc-watch npm ntypescript typescript gulp-cli
ENV HOME=/home/$USER
COPY package.json gulpfile.js $HOME/$SUBDIR/
RUN chown -R $USER:$USER $HOME/*
USER $USER
WORKDIR $HOME/$SUBDIR
RUN npm install
CMD ["node", "dist/index.js"]
docker-compose.yml
docker-compose.yml
version: '3.1'
services:
  app:
    build: .
    command: npm run build
    environment:
      NODE_ENV: development
    ports:
      - '3000:3000'
    volumes:
      - .:/home/app/appDir
      - /home/app/appDir/node_modules
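With this compose file in place, a typical invocation would be along these lines:

# build the image and start the container; "command: npm run build" then launches the watch/compile loop
docker-compose up --build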
package.json
package.json
{
  "name": "docker-node-typescript",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "build": "gulp copy; gulp watch & tsc-watch -p . --onSuccess \"node dist/index.js\"",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "Stephen Gardner ([email protected])",
  "license": "ISC",
  "dependencies": {
    "express": "^4.10.2",
    "gulp": "^3.9.1",
    "socket.io": "^1.2.0"
  },
  "devDependencies": {
    "@types/express": "^4.11.0",
    "@types/node": "^8.5.8"
  }
}
tsconfig.json
tsconfig.json
{
  "compileOnSave": false,
  "compilerOptions": {
    "outDir": "./dist/",
    "sourceMap": true,
    "declaration": false,
    "module": "commonjs",
    "moduleResolution": "node",
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "target": "ES6"
  },
  "include": [
    "**/*.ts"
  ],
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}
To get more towards the answer of your question -- the TS is being compiled by the docker-compose.yml file's call to `npm run build`, which then calls `tsc`. `tsc` then copies our files to the `dist` folder, and a simple `node dist/index.js` command runs this file. Instead of using nodemon, we use `tsc-watch` and `gulp.watch` to watch for changes in the app and fire `node dist/index.js` again after every re-compilation.
为了更直接地回答你的问题 -- TS 是由 docker-compose.yml 文件调用的 `npm run build` 编译的，该命令随后调用 `tsc`。`tsc` 接着把我们的文件输出到 `dist` 文件夹，然后一个简单的 `node dist/index.js` 命令就能运行这个文件。我们没有使用 nodemon，而是使用 `tsc-watch` 和 `gulp.watch` 来监视应用程序中的变化，并在每次重新编译后再次触发 `node dist/index.js`。
Hope that helps :) If you have any questions, let me know!
希望有帮助:) 如果您有任何问题,请告诉我!
回答 by Jørgen Tvedt
For the moment, I'm using a workflow where:
目前,我正在使用一个工作流程,其中:
- `npm install` and `tsd install` locally
- `gulp` build locally
- In Dockerfile, copy all program files, but not typings/node_modules, to the docker image
- In Dockerfile, `npm install --production` (a command-level sketch follows this list)

- 本地执行 `npm install` 和 `tsd install`
- 本地执行 `gulp` 构建
- 在 Dockerfile 中，将所有程序文件复制到 docker 镜像中，但不包括 typings/node_modules
- 在 Dockerfile 中执行 `npm install --production`（该流程的命令示意见列表之后）
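Spelled out as shell commands, that workflow might look roughly like the sketch below; the gulp task name and the image tag are illustrative assumptions, not taken from the answer:

# local build steps
npm install
tsd install
gulp build
# then bake only the built output into the image; the Dockerfile below runs npm install --production
docker build -t my-app .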
This way I get only the wanted files in the image, but it would be nicer if the Dockerfile could do the build itself.
这样镜像中就只包含需要的文件，但如果 Dockerfile 能自己完成构建就更好了。
Dockerfile:
Dockerfile:
FROM node:5.1
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Bundle app
COPY package.json index.js /usr/src/app/
COPY views/ /usr/src/app/views/
COPY build/ /usr/src/app/build/
COPY public/ /usr/src/app/public/
# Install app dependencies
RUN npm install --production --silent
EXPOSE 3000
CMD [ "node", "index.js" ]
I guess complete automation of the image-building process could be established by doing the build in the Docker image script and then deleting the unwanted files before installing again.
我想，通过在 Docker 镜像脚本中进行构建，然后在再次安装之前删除不需要的文件，可以实现镜像构建过程的完全自动化。
回答 by k00ni
In my project, however, the sources cannot be run directly; they must be compiled from ES6 and/or Typescript. I use gulp to build with babel, browserify and tsify - with different setups for browser and server. What would be the best workflow for building and automating docker images in this case?
然而，在我的项目中，源代码不能直接运行，必须先从 ES6 和/或 Typescript 编译。我使用 gulp 配合 babel、browserify 和 tsify 进行构建 - 浏览器和服务器使用不同的配置。在这种情况下，构建和自动化 docker 镜像的最佳工作流程是什么？
If I understand you right, you want to deploy your web app inside a Docker container and provide different flavours for different target environments (you mentioned different browser and server setups). (1)
如果我理解得没错，您希望将 Web 应用程序部署在 Docker 容器中，并为不同的目标环境提供不同的变体（您提到了不同的浏览器和服务器配置）。(1)
If the Dockerfile should do the build - the image would need to contain all the dev-dependencies, which is not ideal?
如果由 Dockerfile 来进行构建 - 镜像将需要包含所有开发依赖项，这并不理想？
It depends. If you want to provide a ready-to-go image, it has to contain everything your web app needs to run. One advantage is that you later only need to start the container, pass some parameters, and you are ready to go.
这取决于情况。如果你想提供一个开箱即用的镜像，它必须包含你的 Web 应用程序运行所需的一切。一个优点是，之后你只需要启动容器、传入一些参数，就可以直接使用了。
During the development phase, that image is not really necessary, because you usually have a pre-defined dev environment. Generating such an image after each change costs time and resources.
在开发阶段，这个镜像并不是真正必要的，因为你通常已经有预先定义好的开发环境。如果每次更改后都生成这样的镜像，会耗费时间和资源。
Suggested approach: I would suggest a two-part setup:
建议的方法：我建议采用由两部分组成的设置：
- During development: Use a fixed environment to develop your app. All software can run locally or inside a docker/VM. I suggest using a Docker container with your dev setup, especially if you work in a team and everybody needs to have the same dev baseline.
- Deploy web app: As I understood you (1), you want to deploy the app for different environments and therefore need to create/provide different configurations. To realize something like that, you could start with a shell script which packages your app into different docker containers, as sketched after this list. You run the script before you deploy. If you have Jekyll running, it calls your shell script after each commit, once all tests have passed.
- 开发期间：使用固定的环境来开发您的应用程序。所有软件都可以在本地或在 docker/VM 内运行。我建议在你的开发环境中使用 Docker 容器，特别是如果你在一个团队中工作，并且每个人都需要有相同的开发基线。
- 部署 Web 应用程序：正如我所理解的 (1)，您想为不同的环境部署应用程序，因此需要创建/提供不同的配置。要实现这一点，您可以从一个 shell 脚本开始，把您的应用程序打包到不同的 docker 容器中（示意脚本见此列表之后）。在部署之前运行该脚本。如果你有 Jekyll 在运行，它会在每次提交后、所有测试通过时调用你的 shell 脚本。
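A sketch of such a pre-deploy shell script follows; the file names, image tags and the existence of separate server/browser Dockerfiles are assumptions for illustration only:

#!/bin/sh
# hypothetical pre-deploy script: build once, then package per target environment
set -e
gulp build
docker build -f Dockerfile.server -t myapp-server .
docker build -f Dockerfile.browser -t myapp-browser .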
Docker container for both the development and deploy phases: I would like to refer to a project of mine and a colleague: https://github.com/k00ni/Docker-Nodejs-environment
开发和部署阶段共用的 Docker 容器：我想介绍我和一位同事的一个项目：https://github.com/k00ni/Docker-Nodejs-environment
This docker image provides a whole development and deploy environment by maintaining:
这个 docker 镜像通过维护以下工具，提供了一个完整的开发和部署环境：
- Node.js
- NPM
- Gulp
- Babel (auto transpiling from ECMA6 to JavaScript on a file change)
- Webpack
- Node.js
- NPM
- Gulp
- Babel（在文件更改时自动从 ECMA6 转译为 JavaScript）
- Webpack
and other JavaScript helpers inside the docker container. You just link your project folder via a volume inside the docker container. It initializes your environment (e.g. deploys all dependencies from package.json) and you are good to go.
以及 docker 容器内的其他 JavaScript 辅助工具。您只需通过卷把您的项目文件夹挂载到 docker 容器内。它会初始化您的环境（例如安装 package.json 中的所有依赖项），然后您就可以开始工作了。
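An invocation could look roughly like this; the image name and the container-side mount path are placeholders for whatever you build or pull from the linked repository:

# mount the project folder into the pre-built environment container (image name and /app path are placeholders)
docker run --rm -it -v "$(pwd)":/app my-nodejs-environment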
You can use it for development purposes so that you and your team are using the same environment (Node.js version, NPM version, ...). Another advantage is that file changes lead to re-compiling ECMA6/ReactJS/... files to JavaScript files (no need to do this by hand after each change). We use Babel for that.
您可以将它用于开发目的，以便您和您的团队使用相同的环境（Node.js 版本、NPM 版本……）。另一个优点是，文件更改会自动触发将 ECMA6/ReactJS/... 文件重新编译为 JavaScript 文件（每次更改后无需手动执行此操作）。我们为此使用 Babel。
For deployment purposes, just extend this Docker image and change the required parts. Instead of linking your app inside the container, you can pull it via Git (or something like that). You will use the same base for all your work.
出于部署目的，只需扩展这个 Docker 镜像并更改所需的部分。您可以通过 Git（或类似方式）拉取您的应用程序，而不是将其挂载到容器内。您的所有工作都将使用同一个基础环境。
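A deployment-oriented extension of such an environment image might look like the sketch below; the base image name, repository URL and build commands are all placeholders, and it assumes git is available in the base image:

# hypothetical deploy image built on top of the development environment image
FROM my-nodejs-environment
# pull the app via Git instead of mounting it as a volume
RUN git clone https://github.com/your-org/your-app.git /app
WORKDIR /app
RUN npm install && npm run build
CMD ["node", "build/index.js"]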