Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/8035673/

Date: 2020-08-30 15:15:45 · Source: igfitidea

Most efficient way to draw part of an image in iOS

Tags: iphone, ios, uiimage, quartz-2d, drawrect

Asked by hpique

Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect (without scaling)?

For reference, this is how I currently do it:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x, frameOrigin.y + rect.origin.y, rect.size.width, rect.size.height);    
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    CGImageRelease(imageRef);
}

Unfortunately this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).

Answer by Eonil

I guess you are doing this to display part of an image on the screen, because you mentioned UIImageView. And optimization problems always need to be defined specifically.



Trust Apple for Regular UI stuff

Actually, UIImageView with clipsToBounds is one of the fastest/simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.

Or you can try putting the UIImageView inside an empty UIView and setting clipping on the container view. With this technique, you can transform your image freely in 2D (scaling, rotation, translation) by setting the transform property.

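As a rough sketch of this container approach (the view setup and the "sheet" asset name are illustrative, not from the original answer):

```objc
// A plain UIView acts as the clipping window; the UIImageView inside it
// holds the full image and can be moved/transformed freely in 2D.
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
container.clipsToBounds = YES; // everything outside the 100x100 window is clipped

UIImage *image = [UIImage imageNamed:@"sheet"]; // hypothetical asset name
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];

// Offset the image view so the wanted sub-rect lands inside the container.
imageView.frame = CGRectMake(-50, -100, image.size.width, image.size.height);
[container addSubview:imageView];

// 2D transforms (scale/rotate/translate) still work on the inner view:
imageView.transform = CGAffineTransformMakeRotation(M_PI / 4);
```
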
If you need 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly will usually gain you very little extra performance.

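A hedged sketch of the CALayer variant, assuming a 400x400 source image and a 100x100 clipping window (the layer sizes and asset name are assumptions):

```objc
CALayer *clipLayer = [CALayer layer];
clipLayer.frame = CGRectMake(0, 0, 100, 100);
clipLayer.masksToBounds = YES; // CALayer's counterpart of clipsToBounds

CALayer *imageLayer = [CALayer layer];
imageLayer.contents = (id)[UIImage imageNamed:@"sheet"].CGImage; // hypothetical asset
imageLayer.frame = CGRectMake(-100, -100, 400, 400); // offset the large image
[clipLayer addSublayer:imageLayer];

// A perspective rotation around the y-axis -- something UIView's 2D
// transform property cannot express:
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0 / 500.0; // perspective term
imageLayer.transform = CATransform3DRotate(t, M_PI / 6.0, 0, 1, 0);
```
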
Anyway, you need to know all of the low-level details to use them properly for optimization.



Why is that one of the fastest ways?

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is accelerated by the GPU.

So if you use them properly (I mean, within their designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images to display, you'll get acceptable performance with a UIImageView implementation because it gets full acceleration from the underlying OpenGL (which means GPU acceleration).

Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at lower levels. Anyway, at least for optimization of UI stuff, it's incredibly hard to do better than Apple.



Why is your method slower than UIImageView?

What you should know about is GPU acceleration. In all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.

IMO, CGImage drawing methods are not implemented with the GPU. I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure about it. Anyway, I believe CGImage is implemented on the CPU because:

  1. Its API looks like it was designed for the CPU, with things like the bitmap editing interface and text drawing. They don't fit a GPU interface very well.
  2. The bitmap context interface allows direct memory access. That means its backend storage is located in CPU memory. This may be somewhat different on a unified memory architecture (and also with the Metal API), but anyway, the initial design intention of CGImage should be for the CPU.
  3. Many recently released Apple APIs mention GPU acceleration explicitly. That means their older APIs were not accelerated. If there's no special mention, it's usually done on the CPU by default.

So it seems to be done on the CPU. Graphics operations done on the CPU are a lot slower than on the GPU.

Simply clipping an image and compositing image layers are very simple and cheap operations for the GPU (compared to the CPU), so you can expect the UIKit library to take advantage of this, because the whole of UIKit is implemented on top of OpenGL.



About Limitations

Because optimization is a kind of micro-management work, specific numbers and small facts are very important. What counts as medium-sized? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will be degraded greatly (I think UIImageView is optimized for images within the limits).

If you need to display huge images with clipping, you have to use another optimization such as CATiledLayer, and that's a totally different story.

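For reference, a minimal CATiledLayer-backed view might look like this (the class name and tile parameters are illustrative):

```objc
@interface TiledImageView : UIView
@end

@implementation TiledImageView

// Back this view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256); // small, texture-friendly tiles
        tiledLayer.levelsOfDetail = 4;              // precomputed zoom levels
    }
    return self;
}

// Called once per visible tile, on background threads, so only the tiles
// currently on screen are ever drawn.
- (void)drawRect:(CGRect)rect {
    // Draw just the portion of the huge image that intersects `rect`.
}

@end
```
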
And don't go with OpenGL unless you want to know every detail of OpenGL. It needs a full understanding of low-level graphics and at least 100 times more code.



About the Future

Though it is not very likely to happen, CGImage stuff (or anything else) doesn't need to be stuck on the CPU only. Don't forget to check the base technology of the API you're using. Still, GPU stuff is a very different beast from the CPU, so API designers usually mention it explicitly and clearly.

Answer by Scott Lahteine

It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible.

Meanwhile, I created these useful functions in a utility class that I use in my apps. They create a UIImage from part of another UIImage, with options to rotate, scale, and flip specified using the standard UIImageOrientation values.

My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I can create them in a separate thread spawned at startup, then just wait until it's done if that tab is selected.

+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {

            // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }


    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
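
The background preloading described above could be sketched with GCD (the helper method names here are hypothetical):

```objc
// Build the expensive images off the main thread at startup; the dispatch
// group lets tab-selection code wait only if the work hasn't finished yet.
dispatch_group_t imageGroup = dispatch_group_create();

dispatch_group_async(imageGroup,
                     dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
    self.tabImages = [self buildImagesForHiddenTab]; // hypothetical helper
});

// When the tab is selected, show the images as soon as they're ready:
dispatch_group_notify(imageGroup, dispatch_get_main_queue(), ^{
    [self showImages:self.tabImages]; // hypothetical helper
});
```
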

Answer by malex

A very simple way to move a big image inside a UIImageView is as follows.

Suppose we have an image of size (100, 400) representing 4 states of some picture, one below another. We want to show the 2nd picture, which has offsetY = 100, in a square UIImageView of size (100, 100). The solution is:

UIImageView *iView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
CGRect contentFrame = CGRectMake(0, 0.25, 1, 0.25);
iView.layer.contentsRect = contentFrame;
iView.image = [UIImage imageNamed:@"NAME"];

Here contentFrame is a normalized frame relative to the real UIImage size. So, "0" means that we start the visible part of the image from the left border, "0.25" means that we have a vertical offset of 100, "1" means that we want to show the full width of the image, and finally, "0.25" means that we want to show only 1/4 of the image's height.

Thus, in local image coordinates, we show the following frame:

CGRect visibleAbsoluteFrame = CGRectMake(0*100, 0.25*400, 1*100, 0.25*400)
or CGRectMake(0, 100, 100, 100);

Answer by David Dunham

Rather than creating a new image (which is costly because it allocates memory), how about using CGContextClipToRect?

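Applied to the question's drawRect:, that could look roughly like this (a sketch using image_ and frameOrigin from the question, not a measured fix):

```objc
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Clip to the dirty rect; pixels drawn outside it are simply discarded,
    // and no intermediate CGImage has to be allocated on each redraw.
    CGContextClipToRect(context, rect);

    // UIImage's drawAtPoint: handles the coordinate flip internally, so no
    // manual CTM translate/scale is needed.
    [image_ drawAtPoint:CGPointMake(-frameOrigin.x, -frameOrigin.y)];
}
```
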
Answer by Adam Freeman

The quickest way is to use an image mask: an image that is the same size as the image to mask, but with a certain pixel pattern indicating which portion of the image to mask out when rendering:

// maskImage is used to block off the portion that you do not want rendered
// note that rect is not actually used because the image mask defines the rect that is rendered
-(void) drawRect:(CGRect)rect maskImage:(UIImage*)maskImage {

    UIGraphicsBeginImageContext(image_.size);
    [maskImage drawInRect:image_.bounds];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef), NULL, false);

    CGImageRef maskedImageRef = CGImageCreateWithMask([image_ CGImage], mask);
    image_ = [UIImage imageWithCGImage:maskedImageRef scale:1.0f orientation:image_.imageOrientation];

    CGImageRelease(mask);
    CGImageRelease(maskedImageRef); 
}