How to implement a box or Gaussian blur on iOS
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must do so under the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/1140117/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me):
StackOverflow
How to implement a box or gaussian blur on iOS
Asked by willc2
I want to be able to take an image and blur it relatively quickly (say in 0.1 sec). Image size would almost never be larger than 256 x 256 px.
Do I have to loop thru every pixel and average them with neighbors or is there a higher-level way that I could do this?
PS: I am aware that multiple box blurs can approximate a gaussian blur.
Answered by Gabe
I found a really fast, pretty crappy way for iOS 3.2+ apps:
UIView *myView = [self view];
CALayer *layer = [myView layer];
[layer setRasterizationScale:0.25];
[layer setShouldRasterize:YES];
This rasterizes the view down to 4x4 pixel chunks then scales it back up using bilinear filtering... it's EXTREMELY fast and looks ok if you are just wanting to blur a background view under a modal view.
To undo it, just set the rasterization scale back to 1.0 or turn off rasterization.
Answered by mahboudz
From how-do-i-create-blurred-text-in-an-iphone-view:
Take a look at Apple's GLImageProcessing iPhone sample. It does some blurring, among other things.
The relevant code includes:
static void blur(V2fT2f *quad, float t) // t = 1
{
    GLint tex;
    V2fT2f tmpquad[4];
    float offw = t / Input.wide;
    float offh = t / Input.high;
    int i;

    glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);

    // Three pass small blur, using rotated pattern to sample 17 texels:
    //
    // .\/..
    // ./\/
    // \/X/\   rotated samples filter across texel corners
    //  /\/.
    // ../\.

    // Pass one: center nearest sample
    glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &quad[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &quad[0].s);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glColor4f(1.0/5, 1.0/5, 1.0/5, 1.0);
    validateTexEnv();
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Pass two: accumulate two rotated linear samples
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    for (i = 0; i < 4; i++)
    {
        tmpquad[i].x = quad[i].s + 1.5 * offw;
        tmpquad[i].y = quad[i].t + 0.5 * offh;
        tmpquad[i].s = quad[i].s - 1.5 * offw;
        tmpquad[i].t = quad[i].t - 0.5 * offh;
    }
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].x);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glClientActiveTexture(GL_TEXTURE1);
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].s);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
    glColor4f(0.5, 0.5, 0.5, 2.0/5);
    validateTexEnv();
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Pass three: accumulate two rotated linear samples
    for (i = 0; i < 4; i++)
    {
        tmpquad[i].x = quad[i].s - 0.5 * offw;
        tmpquad[i].y = quad[i].t + 1.5 * offh;
        tmpquad[i].s = quad[i].s + 0.5 * offw;
        tmpquad[i].t = quad[i].t - 1.5 * offh;
    }
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Restore state
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glClientActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, Half.texID);
    glDisable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
    glActiveTexture(GL_TEXTURE0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glDisable(GL_BLEND);
}
Answered by Hannes Ovrén
If you always, or at least often, use the same blur settings, you might gain speed by doing the filtering in the frequency domain instead of the spatial domain:
- Precalculate your filter image G(u,v), a 2-D Gaussian
- Apply the Fourier transform to your input image: f(x,y) -> F(u,v)
- Filter by multiplication: H(u,v) = F(u,v) .* G(u,v) (pixel-wise multiplication, not matrix multiplication)
- Transform the filtered image back into the spatial domain with the inverse Fourier transform: H(u,v) -> h(x,y)
The upside of this approach is that pixel-wise multiplication should be pretty fast compared to averaging over a neighborhood. So if you process a lot of images, this might help.
The downside is that I have no idea how fast you can do Fourier transforms on the iPhone, so this might very well be much slower than other implementations.
Other than that, since the iPhone has OpenGL support, I guess you could use its texturing/drawing functions to do it. Sorry to say, though, that I am no OpenGL expert and can't really give any practical advice on how that is done.
Answered by amattn
Here are two tricks for a poor man's blur:
Take the image and draw it at partial opacity 5 or 6 times (or however many you want), each time offsetting by a couple of pixels in a different direction. Drawing more times in more directions gets you a better blur, but you obviously trade off processing time. This works well if you want a blur with a relatively small radius.
For monochromatic images, you can actually use the built-in shadow as a simple blur.
Answered by balpha
Answered by Phill
Any algorithm that modifies images at the pixel level via OpenGL is going to be a tad slow; doing pixel-by-pixel manipulation on an OpenGL texture and then updating it every frame is, sadly, inadequate performance-wise.
Spend some time writing a test rig and experimenting with pixel manipulation before committing to implementing a complex blur routine.
Answered by GeneCode
The most basic form of blur (or maybe more of a soften) is to average 2 neighboring pixels and apply the average to both pixels. Iterate this throughout the image, and you get a slight blur (soften).