Original question: http://stackoverflow.com/questions/39261675/
Warning: these answers are provided under the CC BY-SA 4.0 license. You are free to use and share them, but you must attribute them to the original authors (not me): StackOverflow.
Is it possible to download a website's entire code: the HTML, CSS, and JavaScript files?
Asked by Ryan Brienza
Is it possible to fully download a website or view all of its code? For example, I know you can view the page source in a browser, but is there a way to download all of a website's code, like the HTML, CSS, and JavaScript, then run it on my own server, or change it up and run that?
Answered by Michael Kolber
Hit Ctrl+S and save it as an HTML file (not MHTML). Then, in the <head> tag, add a <base href="http://downloaded_site's_address.com"> tag. For this webpage, for example, it would be <base href="http://stackoverflow.com">.
This makes sure that all relative links point back to where they're supposed to instead of to the folder you saved the HTML file in, so all of the resources (CSS, images, JavaScript, etc.) load correctly instead of leaving you with just HTML.
See MDN for more details on the <base> tag.
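For reference, here is a minimal Python sketch (not part of the original answer) that automates the manual edit above: it inserts a <base> tag right after <head> in a page you saved with Ctrl+S. The file name and site URL are just example values.

```python
# Insert a <base href="..."> tag into a saved HTML file so that relative
# links resolve against the original site instead of the local folder.
def add_base_tag(path: str, site_url: str) -> None:
    with open(path, encoding="utf-8") as f:
        html = f.read()
    # Naive insertion: assumes the file contains a literal "<head>" tag.
    html = html.replace("<head>", f'<head><base href="{site_url}">', 1)
    with open(path, "w", encoding="utf-8") as f:
        f.write(html)

add_base_tag("saved_page.html", "http://stackoverflow.com")  # example values
```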
Answered by Renan Ben Moshe
The HTML, CSS, and JavaScript are sent to your computer when you request them over the HTTP protocol (for instance, when you enter a URL in your browser); therefore, you have those parts and can replicate them on your own PC or server. But if the website has server-side code (databases, some type of authentication, etc.), you will not have access to it, and therefore won't be able to replicate it on your own PC/server.
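To see exactly what the server hands over, you can fetch a page yourself; what comes back is the same markup "view source" shows, with no trace of any server-side code. A quick sketch (the URL is an example):

```python
# Fetch a page over HTTP: the response body is the front-end code only.
from urllib.request import urlopen

with urlopen("http://stackoverflow.com") as response:
    html = response.read().decode("utf-8", errors="replace")

print(html[:500])  # the HTML the browser receives; no PHP, no database code
```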
Answered by nixkuroi
In Chrome, go to File -> Save Page as.
That will download the entire contents of the page.
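Roughly, this is what "Save Page as" does behind the scenes: fetch the HTML, then fetch the stylesheets and scripts it references. A simplified sketch of the first step (the URL is an example, and a real save also rewrites links and downloads each asset):

```python
# List the CSS and JS assets a page references, as a browser's
# "save complete page" feature would before downloading them.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class AssetCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])
        elif tag == "script" and attrs.get("src"):
            self.assets.append(attrs["src"])

page_url = "http://example.com"  # example URL
html = urlopen(page_url).read().decode("utf-8", errors="replace")

collector = AssetCollector()
collector.feed(html)
for asset in collector.assets:
    print("would download:", urljoin(page_url, asset))  # resolve relative paths
```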
Answered by HostAgent.NET
Sure. There are tools/scrapers for this, such as SurfOffline and A1 Website Download. I've used both. They'll allow you to scrape a URL for all of its files, including HTML, CSS, etc. Tools like this were invented for viewing websites while offline, hence the names.
However, just keep in mind that these can only download front-end, display-facing files, so they can't download back-end scripts, like PHP files, etc.
Answered by Muhammad Ibnuh
You can use the HTTrack tool to grab a website's entire content, including every image, CSS, HTML, and JavaScript file.
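HTTrack also ships with a command-line interface, so a mirror can be scripted. A minimal sketch driving it from Python (assumes the httrack binary is installed and on PATH; the URL and output directory are examples):

```python
# Mirror a site with HTTrack's CLI; -O sets the output directory.
import subprocess

subprocess.run(
    ["httrack", "http://example.com", "-O", "./mirror"],
    check=True,  # raise if httrack exits with an error
)
```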