Using HTML5/Canvas/JavaScript to take in-browser screenshots

Note: the content on this page is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/4912092/
Asked by joelvh
Google's "Report a Bug" or "Feedback Tool" lets you select an area of your browser window to create a screenshot that is submitted with your feedback about a bug.

Screenshot by Jason Small, posted in a duplicate question.

How are they doing this? Google's JavaScript feedback API is loaded from here, and their overview of the feedback module demonstrates the screenshot capability.
Answered by Niklas
JavaScript can read the DOM and render a fairly accurate representation of it using canvas. I have been working on a script which converts HTML into a canvas image. Today I decided to turn it into an implementation for sending feedback like you described.

The script allows you to create feedback forms which include a screenshot, created on the client's browser, along with the form. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation, since it does not take an actual screenshot but builds one from the information available on the page.

It does not require any rendering from the server, as the whole image is created on the client's browser. The HTML2Canvas script itself is still in a very experimental state, as it does not parse nearly as many of the CSS3 attributes as I would want it to, nor does it have any support for loading CORS images even if a proxy were available.

Browser compatibility is still quite limited (not because more couldn't be supported, I just haven't had time to make it more cross-browser compatible).
For more information, have a look at the examples here:

http://hertzen.com/experiments/jsfeedback/

Edit: The html2canvas script is now available separately here and some examples are here.

Edit 2: Another confirmation that Google uses a very similar method (in fact, based on the documentation, the only major difference is their async method of traversing/drawing) can be found in this presentation by Elliott Sprehn from the Google Feedback team: http://www.elliottsprehn.com/preso/fluentconf/
Answered by Matt Sinclair
Your web app can now take a 'native' screenshot of the client's entire desktop using getUserMedia():

Have a look at this example:

https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/

The client will have to be using Chrome (for now) and will need to enable screen capture support under chrome://flags.
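For reference, a rough sketch of that flag-gated flow (this assumes the legacy callback-style webkitGetUserMedia and the chromeMediaSource: 'screen' constraint used by demos of that era; modern browsers expose navigator.mediaDevices.getDisplayMedia() instead, as shown in the later answers):

// Rough sketch only: requires old Chrome with screen-capture support
// enabled under chrome://flags; current browsers use getDisplayMedia().
navigator.webkitGetUserMedia(
  { audio: false, video: { mandatory: { chromeMediaSource: 'screen' } } },
  function (stream) {
    var video = document.createElement('video');
    video.srcObject = stream; // older builds used video.src = URL.createObjectURL(stream)
    video.autoplay = true;
    document.body.appendChild(video); // shows the captured desktop in the page
  },
  function (err) {
    console.error('Screen capture failed:', err);
  }
);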
Answered by Kamil Kiełczewski
As Niklas mentioned, you can use the html2canvas library to take a screenshot using JS in the browser. I will extend his answer at this point by providing an example of taking a screenshot using this library:
function report() {
let region = document.querySelector("body"); // whole screen
html2canvas(region, {
onrendered: function(canvas) {
let pngUrl = canvas.toDataURL(); // png in dataURL format
let img = document.querySelector(".screen");
img.src = pngUrl;
// here you can allow user to set bug-region
// and send it with 'pngUrl' to server
},
});
}
.container {
margin-top: 10px;
border: solid 1px black;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/0.4.1/html2canvas.min.js"></script>
<div>Screenshot tester</div>
<button onclick="report()">Take screenshot</button>
<div class="container">
<img width="75%" class="screen">
</div>
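Note that the snippet above targets the 0.4.1 build loaded from the CDN, which uses the onrendered callback; if you pick up a newer 1.x release of html2canvas, the call is Promise-based instead (a minimal sketch, assuming the same .screen image element):

// html2canvas 1.x returns a Promise and no longer supports onrendered:
html2canvas(document.querySelector("body")).then(canvas => {
  document.querySelector(".screen").src = canvas.toDataURL(); // png data URL
});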
In the report() function, inside the onrendered callback, after getting the image as a data URI you can show it to the user and allow them to draw a "bug region" with the mouse, then send the screenshot and the region coordinates to the server.
In this example an async/await version was made, with a nice makeScreenshot() function.

UPDATE

Here is a simple example which allows you to take a screenshot, select a region, describe the bug, and send a POST request (here jsfiddle); the main function is report().
async function report() {
let screenshot = await makeScreenshot(); // png dataUrl
let img = q(".screen");
img.src = screenshot;
let c = q(".bug-container");
c.classList.remove('hide')
let box = await getBox();
c.classList.add('hide');
send(screenshot,box); // send post request with bug image, region and description
alert('To see POST request with image go to: chrome console > network tab');
}
// ----- Helper functions
let q = s => document.querySelector(s); // query selector helper
window.report = report; // expose report so the inline onclick in the fiddle HTML can see it
async function makeScreenshot(selector="body")
{
return new Promise((resolve, reject) => {
let node = document.querySelector(selector);
html2canvas(node, { onrendered: (canvas) => {
let pngUrl = canvas.toDataURL();
resolve(pngUrl);
}});
});
}
async function getBox(box) {
return new Promise((resolve, reject) => {
let b = q(".bug");
let r = q(".region");
let scr = q(".screen");
let send = q(".send");
let start=0;
let sx,sy,ex,ey=-1;
r.style.width=0;
r.style.height=0;
let drawBox= () => {
r.style.left = (ex > 0 ? sx : sx+ex ) +'px';
r.style.top = (ey > 0 ? sy : sy+ey) +'px';
r.style.width = Math.abs(ex) +'px';
r.style.height = Math.abs(ey) +'px';
}
//console.log({b,r, scr});
b.addEventListener("click", e=>{
if(start==0) {
sx=e.pageX;
sy=e.pageY;
ex=0;
ey=0;
drawBox();
}
start=(start+1)%3;
});
b.addEventListener("mousemove", e=>{
//console.log(e)
if(start==1) {
ex=e.pageX-sx;
ey=e.pageY-sy
drawBox();
}
});
send.addEventListener("click", e=>{
start=0;
let a=100/75 //zoom out img 75%
resolve({
x:Math.floor(((ex > 0 ? sx : sx+ex )-scr.offsetLeft)*a),
y:Math.floor(((ey > 0 ? sy : sy+ey )-b.offsetTop)*a),
width:Math.floor(Math.abs(ex)*a),
height:Math.floor(Math.abs(ey)*a), // height comes from the vertical delta
desc: q('.bug-desc').value
});
});
});
}
function send(image,box) {
let formData = new FormData();
let req = new XMLHttpRequest();
formData.append("box", JSON.stringify(box));
formData.append("screenshot", image);
req.open("POST", '/upload/screenshot');
req.send(formData);
}
.bug-container { background: rgb(255,0,0,0.1); margin-top:20px; text-align: center; }
.send { border-radius:5px; padding:10px; background: green; cursor: pointer; }
.region { position: absolute; background: rgba(255,0,0,0.4); }
.example { height: 100px; background: yellow; }
.bug { margin-top: 10px; cursor: crosshair; }
.hide { display: none; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/0.4.1/html2canvas.min.js"></script>
<body>
<div>Screenshot tester</div>
<button onclick="report()">Report bug</button>
<div class="example">Lorem ipsum</div>
<div class="bug-container hide">
<div>Select bug region</div>
<div class="bug">
<img width="75%" class="screen" >
<div class="region"></div>
</div>
<div>
<textarea class="bug-desc">Describe bug here...</textarea>
</div>
<div class="send">SEND BUG</div>
</div>
</body>
Answered by Nikolay Makhonin
Get the screenshot as a Canvas or JPEG Blob / ArrayBuffer using the getDisplayMedia API:
// docs: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
// see: https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/#20893521368186473
// see: https://github.com/muaz-khan/WebRTC-Experiment/blob/master/Pluginfree-Screen-Sharing/conference.js
function getDisplayMedia(options) {
if (navigator.mediaDevices && navigator.mediaDevices.getDisplayMedia) {
return navigator.mediaDevices.getDisplayMedia(options)
}
if (navigator.getDisplayMedia) {
return navigator.getDisplayMedia(options)
}
if (navigator.webkitGetDisplayMedia) {
return navigator.webkitGetDisplayMedia(options)
}
if (navigator.mozGetDisplayMedia) {
return navigator.mozGetDisplayMedia(options)
}
throw new Error('getDisplayMedia is not defined')
}
function getUserMedia(options) {
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
return navigator.mediaDevices.getUserMedia(options)
}
if (navigator.getUserMedia) {
return navigator.getUserMedia(options)
}
if (navigator.webkitGetUserMedia) {
return navigator.webkitGetUserMedia(options)
}
if (navigator.mozGetUserMedia) {
return navigator.mozGetUserMedia(options)
}
throw new Error('getUserMedia is not defined')
}
async function takeScreenshotStream() {
// see: https://developer.mozilla.org/en-US/docs/Web/API/Window/screen
const width = screen.width * (window.devicePixelRatio || 1)
const height = screen.height * (window.devicePixelRatio || 1)
const errors = []
let stream
try {
stream = await getDisplayMedia({
audio: false,
// see: https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamConstraints/video
video: {
width,
height,
frameRate: 1,
},
})
} catch (ex) {
errors.push(ex)
}
try {
// for electron js
stream = await getUserMedia({
audio: false,
video: {
mandatory: {
chromeMediaSource: 'desktop',
// chromeMediaSourceId: source.id,
minWidth : width,
maxWidth : width,
minHeight : height,
maxHeight : height,
},
},
})
} catch (ex) {
errors.push(ex)
}
if (errors.length) {
console.debug(...errors)
}
return stream
}
async function takeScreenshotCanvas() {
const stream = await takeScreenshotStream()
if (!stream) {
return null
}
// from: https://stackoverflow.com/a/57665309/5221762
const video = document.createElement('video')
const result = await new Promise((resolve, reject) => {
video.onloadedmetadata = () => {
video.play()
video.pause()
// from: https://github.com/kasprownik/electron-screencapture/blob/master/index.js
const canvas = document.createElement('canvas')
canvas.width = video.videoWidth
canvas.height = video.videoHeight
const context = canvas.getContext('2d')
// see: https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
resolve(canvas)
}
video.srcObject = stream
})
stream.getTracks().forEach(function (track) {
track.stop()
})
return result
}
// from: https://stackoverflow.com/a/46182044/5221762
function getJpegBlob(canvas) {
return new Promise((resolve, reject) => {
// docs: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
canvas.toBlob(blob => resolve(blob), 'image/jpeg', 0.95)
})
}
async function getJpegBytes(canvas) {
const blob = await getJpegBlob(canvas)
return new Promise((resolve, reject) => {
const fileReader = new FileReader()
fileReader.addEventListener('loadend', function () {
if (this.error) {
reject(this.error)
return
}
resolve(this.result)
})
fileReader.readAsArrayBuffer(blob)
})
}
async function takeScreenshotJpegBlob() {
const canvas = await takeScreenshotCanvas()
if (!canvas) {
return null
}
return getJpegBlob(canvas)
}
async function takeScreenshotJpegBytes() {
const canvas = await takeScreenshotCanvas()
if (!canvas) {
return null
}
return getJpegBytes(canvas)
}
function blobToCanvas(blob, maxWidth, maxHeight) {
return new Promise((resolve, reject) => {
const img = new Image()
img.onload = function () {
const canvas = document.createElement('canvas')
const scale = Math.min(
1,
maxWidth ? maxWidth / img.width : 1,
maxHeight ? maxHeight / img.height : 1,
)
canvas.width = img.width * scale
canvas.height = img.height * scale
const ctx = canvas.getContext('2d')
ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, canvas.width, canvas.height)
resolve(canvas)
}
img.onerror = () => {
reject(new Error('Error load blob to Image'))
}
img.src = URL.createObjectURL(blob)
})
}
DEMO:
// take the screenshot
var screenshotJpegBlob = await takeScreenshotJpegBlob()
// show preview with max size 300 x 300 px
var previewCanvas = await blobToCanvas(screenshotJpegBlob, 300, 300)
previewCanvas.style.position = 'fixed'
document.body.appendChild(previewCanvas)
// send it to the server
let formdata = new FormData()
formdata.append("screenshot", screenshotJpegBlob)
await fetch('https://your-web-site.com/', {
method: 'POST',
body: formdata,
// do not set the Content-Type header manually here; fetch generates the
// correct multipart/form-data boundary for a FormData body automatically
})
Answered by JSON C11
Here's an example using getDisplayMedia:
document.body.innerHTML = '<video style="width: 100%; height: 100%; border: 1px black solid;"/>';
navigator.mediaDevices.getDisplayMedia()
.then( mediaStream => {
const video = document.querySelector('video');
video.srcObject = mediaStream;
video.onloadedmetadata = e => {
video.play();
video.pause();
};
})
.catch( err => console.log(`${err.name}: ${err.message}`));
Also worth checking out are the Screen Capture API docs.
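For completeness, a minimal sketch (assuming the video element created by the snippet above has already loaded its metadata) of turning the paused frame into a PNG data URL:

// Copy the current video frame into a canvas and export it as a PNG data URL.
function captureFrame(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png'); // the screenshot
}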