
Disclaimer: The content below is taken from StackOverflow and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/6672517/

Date: 2020-08-30 20:43:20  Source: igfitidea

Is programmatically inverting the colors of an image possible?

Tags: iphone, ios, ipad, uiimage

Asked by Brodie

I want to take an image and invert the colors in iOS.


Answered by Tommy

To expand on quixoto's answer and because I have relevant source code from a project of my own, if you were to need to drop to on-CPU pixel manipulation then the following, which I've added exposition to, should do the trick:


@implementation UIImage (NegativeImage)

- (UIImage *)negativeImage
{
    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGB+alpha bitmap context; with big-endian byte
    // order and alpha last, the bytes are laid out as RGBA
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b; 
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a
    // UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    return returnImage;
}

@end

So that adds a category method to UIImage that:


  1. creates a clear CoreGraphics bitmap context that it can access the memory of
  2. draws the UIImage to it
  3. runs through every pixel, converting from premultiplied alpha to uninflected RGB, inverting each channel separately, multiplying by alpha again and storing back
  4. gets an image from the context and wraps it into a UIImage
  5. cleans up after itself, and returns the UIImage

Answered by user2704438

With CoreImage:


#import <CoreImage/CoreImage.h>

@implementation UIImage (ColorInverse)

+ (UIImage *)inverseColor:(UIImage *)image {
    CIImage *coreImage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
    [filter setValue:coreImage forKey:kCIInputImageKey];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    return [UIImage imageWithCIImage:result];
}

@end

Answered by Ben Zotto

Sure, it's possible: one way is to use the "difference" blend mode (kCGBlendModeDifference). See this question (among others) for the outline of the code to set up the image processing. Use your image as the bottom (base) image, and then draw a pure white bitmap on top of it.


You can also do the per-pixel operation manually by getting the CGImageRef and drawing it into a bitmap context, and then looping over the pixels in the bitmap context.


Answered by Andrea

Tommy's answer is THE answer, but I'd like to point out that this could be a really intensive and time-consuming task for bigger images. There are two frameworks that could help you in manipulating images:


  1. CoreImage
  2. Accelerate

    And it is really worth mentioning the amazing GPUImage framework from Brad Larson. GPUImage runs the routines on the GPU using custom fragment shaders in an OpenGL ES 2.0 environment, with remarkable speed improvements. With CoreImage, if a negative filter is available, you can choose CPU or GPU; using Accelerate, all routines run on the CPU but use vector-math image processing.

Answered by BadPirate

Created a Swift extension to do just this. Also, because CIImage-based UIImages break down (most libraries assume CGImage is set), I added an option to return a UIImage that is based on a modified CIImage:


extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.valueForKey(kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImage's that are based on CIImages don't work with a lot of calls properly
            return UIImage(CGImage: CIContext(options: nil).createCGImage(result, fromRect: result.extent))
        }
        return UIImage(CIImage: result)
    }
}

Answered by MLBDG

Swift 3 update (from @BadPirate's answer):


extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
        }
        return UIImage(ciImage: result)
    }
}

Answered by Ilya Stukalov

Updated to a Swift 5 version of @MLBDG's answer:


extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        // note: self.ciImage is nil for CGImage-backed images,
        // so build a CIImage explicitly instead
        guard let coreImage = CIImage(image: self) else { return nil }
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            guard let cgImage = CIContext(options: nil).createCGImage(result, from: result.extent) else { return nil }
            return UIImage(cgImage: cgImage)
        }
        return UIImage(ciImage: result)
    }
}