objective-c 将黑白滤镜应用于 UIImage
声明：本页面是 StackOverFlow 热门问题的中英对照翻译，遵循 CC BY-SA 4.0 协议。如果您需要使用它，必须同样遵循 CC BY-SA 许可，注明原文地址和作者信息，同时您必须将它归于原作者（不是我）：StackOverFlow
原文地址: http://stackoverflow.com/questions/22422480/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me): StackOverFlow
Apply Black and White Filter to UIImage
提问 by user3422862
I need to apply a black-and-white filter on a UIImage. I have a view in which there's a photo taken by the user, but I don't have any ideas on transforming the colors of the image.
我需要在 UIImage 上应用黑白滤镜。我有一个视图,其中有用户拍摄的照片,但我对转换图像颜色没有任何想法。
- (void)viewDidLoad {
[super viewDidLoad];
self.navigationItem.title = NSLocalizedString(@"#Paint!", nil);
imageView.image = image;
}
How can I do that?
我怎样才能做到这一点?
回答 by MCMatan
Objective-C
Objective-C 版本
- (UIImage *)convertImageToGrayScale:(UIImage *)image {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
Swift
Swift 版本
func convertToGrayScale(image: UIImage) -> UIImage {
// Create image rectangle with current image width/height
let imageRect:CGRect = CGRect(x:0, y:0, width:image.size.width, height: image.size.height)
// Grayscale color space
let colorSpace = CGColorSpaceCreateDeviceGray()
let width = image.size.width
let height = image.size.height
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context?.draw(image.cgImage!, in: imageRect)
let imageRef = context!.makeImage()
// Create a new UIImage object
let newImage = UIImage(cgImage: imageRef!)
return newImage
}
回答 by rickster
Judging by the ciimage tag, perhaps the OP was thinking (correctly) that Core Image would provide a quick and easy way to do this?
从 ciimage 标签来看，也许提问者（正确地）认为 Core Image 能提供一种快速简便的实现方式？
Here's that, both in ObjC:
下面给出两个版本，先是 ObjC：
- (UIImage *)grayscaleImage:(UIImage *)image {
CIImage *ciImage = [[CIImage alloc] initWithImage:image];
CIImage *grayscale = [ciImage imageByApplyingFilter:@"CIColorControls"
withInputParameters: @{kCIInputSaturationKey : @0.0}];
return [UIImage imageWithCIImage:grayscale];
}
and Swift:
以及 Swift 版本：
func grayscaleImage(image: UIImage) -> UIImage {
let ciImage = CIImage(image: image)
let grayscale = ciImage.imageByApplyingFilter("CIColorControls",
withInputParameters: [ kCIInputSaturationKey: 0.0 ])
return UIImage(CIImage: grayscale)
}
CIColorControls is just one of several built-in Core Image filters that can convert an image to grayscale. CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal are different tone-mapping presets (each takes no parameters), and you can do your own tone mapping with filters like CIColorMap.
CIColorControls 只是能将图像转换为灰度的若干内置 Core Image 滤镜之一。CIPhotoEffectMono、CIPhotoEffectNoir 和 CIPhotoEffectTonal 是不同的色调映射预设（均不需要参数），您也可以使用 CIColorMap 等滤镜自行进行色调映射。
Unlike alternatives that involve creating and drawing into one's own CGBitmapContext, these preserve the size/scale and alpha of the original image without extra work.
与需要自己创建 CGBitmapContext 并在其中绘制的替代方案不同，这些方法无需额外工作即可保留原始图像的尺寸/比例和 alpha 通道。
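For reference, here is a minimal Swift sketch that applies one of the preset filters mentioned above, CIPhotoEffectNoir (it takes no parameters). The helper name noirImage is hypothetical, and the result is rendered through a CIContext so the returned UIImage is CGImage-backed:
import CoreImage
import UIKit
// Hypothetical helper: applies the parameterless CIPhotoEffectNoir preset
// and renders it so the returned UIImage is CGImage-backed.
func noirImage(_ image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let noir = ciImage.applyingFilter("CIPhotoEffectNoir", parameters: [:])
    let context = CIContext()
    guard let cgImage = context.createCGImage(noir, from: noir.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}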
回答 by Chris Stillwell
While PiratM's solution works, you lose the alpha channel. To preserve the alpha channel you need to do a few extra steps.
虽然 PiratM 的解决方案有效，但会丢失 alpha 通道。要保留 alpha 通道，还需要执行一些额外的步骤。
+(UIImage *)convertImageToGrayScale:(UIImage *)image {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
// Create an alpha-only context and render the original image into it
CGContextRef alphaContext = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
CGContextDrawImage(alphaContext, imageRect, [image CGImage]);
CGImageRef mask = CGBitmapContextCreateImage(alphaContext);
CGContextRelease(alphaContext);
// Combine the grayscale image with the alpha mask
CGImageRef maskedRef = CGImageCreateWithMask(imageRef, mask);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(imageRef);
CGImageRelease(mask);
CGImageRelease(maskedRef);
// Return the new grayscale image
return newImage;
}
回答 by FBente
The version from @rickster looks good considering the alpha channel. But a UIImageView whose contentMode is not .AspectFit or .Fill can't display it properly, so the UIImage has to be created from a CGImage. This version, implemented as a Swift UIImage extension, keeps the current scale and offers some optional input parameters:
考虑到 alpha 通道，@rickster 的版本看起来不错。但 contentMode 不是 .AspectFit 或 .Fill 的 UIImageView 无法正确显示它，因此必须用 CGImage 来创建 UIImage。此版本作为 Swift 的 UIImage 扩展实现，保留当前比例，并提供一些可选的输入参数：
import CoreImage
extension UIImage
{
/// Applies grayscale with CIColorControls by settings saturation to 0.0.
/// - Parameter brightness: Default is 0.0.
/// - Parameter contrast: Default is 1.0.
/// - Returns: The grayscale image of self if available.
func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
{
if let ciImage = CoreImage.CIImage(image: self, options: nil)
{
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
kCIInputContrastKey: NSNumber(double: contrast),
kCIInputSaturationKey: NSNumber(double: 0.0) ]
let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
}
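A hedged usage sketch of this extension (photo and imageView are hypothetical names standing in for the question's image and image view):
// Hypothetical usage: desaturate while slightly raising the contrast.
if let gray = photo.grayscaleImage(brightness: 0.0, contrast: 1.1) {
    imageView.image = gray
}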
The longer but faster way is a modified version of @ChrisStillwell's answer, implemented as a UIImage extension in Swift that takes the alpha channel and current scale into account:
更长但更快的方法是 @ChrisStillwell 答案的修改版本。它以 Swift 的 UIImage 扩展实现，并考虑了 alpha 通道和当前比例：
extension UIImage
{
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
func convertToGrayScale() -> UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
// Grayscale color space
if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
{
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, self.CGImage)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
{
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
{
CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
{
// Create a new UIImage object
if let newCGImage = CGImageCreateWithMask(imageRef, mask)
{
// Return the new grayscale image
return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
}
}
// A required variable was unexpected nil
return nil
}
}
回答 by dengST30
In Swift 5, using Core Image to do the image filtering,
在 Swift 5 中，使用 Core Image 进行图像滤镜处理，
thanks @rickster
谢谢@rickster
extension UIImage{
var grayscaled: UIImage?{
let ciImage = CIImage(image: self)
let grayscale = ciImage?.applyingFilter("CIColorControls",
parameters: [ kCIInputSaturationKey: 0.0 ])
if let gray = grayscale{
return UIImage(ciImage: gray)
}
else{
return nil
}
}
}
回答 by black_pearl
Updated @FBente's version to Swift 5, using Core Graphics to do the image filtering,
将 @FBente 的版本更新到 Swift 5，使用 Core Graphics 进行图像处理，
extension UIImage
{
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
var grayScaled: UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPoint.zero, size: pixelSize)
// Grayscale color space
let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
if let context: CGContext = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
guard let cg = self.cgImage else{
return nil
}
context.draw(cg, in: imageRect)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImage = context.makeImage(){
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.alphaOnly.rawValue)
guard let context = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfoAlphaOnly.rawValue) else{
return nil
}
context.draw(cg, in: imageRect)
if let mask: CGImage = context.makeImage() {
// Create a new UIImage object
if let newCGImage = imageRef.masking(mask){
// Return the new grayscale image
return UIImage(cgImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
// A required variable was unexpected nil
return nil
}
}
回答 by Charlton Provatas
Swift 4 Solution
Swift 4 解决方案
extension UIImage {
var withGrayscale: UIImage {
guard let ciImage = CIImage(image: self, options: nil) else { return self }
let paramsColor: [String: AnyObject] = [kCIInputBrightnessKey: NSNumber(value: 0.0), kCIInputContrastKey: NSNumber(value: 1.0), kCIInputSaturationKey: NSNumber(value: 0.0)]
let grayscale = ciImage.applyingFilter("CIColorControls", parameters: paramsColor)
guard let processedCGImage = CIContext().createCGImage(grayscale, from: grayscale.extent) else { return self }
return UIImage(cgImage: processedCGImage, scale: scale, orientation: imageOrientation)
}
}
回答 by Anson Yao
Swift 3.0 version:
Swift 3.0 版本：
extension UIImage {
func convertedToGrayImage() -> UIImage? {
let width = self.size.width
let height = self.size.height
let rect = CGRect(x: 0.0, y: 0.0, width: width, height: height)
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
guard let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
return nil
}
guard let cgImage = cgImage else { return nil }
context.draw(cgImage, in: rect)
guard let imageRef = context.makeImage() else { return nil }
let newImage = UIImage(cgImage: imageRef.copy()!)
return newImage
}
}
回答 by neoneye
Swift3 + GPUImage
Swift3 + GPUImage
import GPUImage
extension UIImage {
func blackWhite() -> UIImage? {
guard let image: GPUImagePicture = GPUImagePicture(image: self) else {
print("unable to create GPUImagePicture")
return nil
}
let filter = GPUImageAverageLuminanceThresholdFilter()
image.addTarget(filter)
filter.useNextFrameForImageCapture()
image.processImage()
guard let processedImage: UIImage = filter.imageFromCurrentFramebuffer(with: UIImageOrientation.up) else {
print("unable to obtain UIImage from filter")
return nil
}
return processedImage
}
}
回答 by Jo Essfb
This code (Objective-C) works:
此代码（Objective-C）可以工作：
CIImage * ciimage = ...;
CIFilter * filter = [CIFilter filterWithName:@"CIColorControls" withInputParameters:@{kCIInputSaturationKey : @0.0,kCIInputContrastKey : @10.0,kCIInputImageKey : ciimage}];
CIImage * grayscale = [filter outputImage];
The kCIInputContrastKey : @10.0 is there to obtain an almost black-and-white image.
其中 kCIInputContrastKey : @10.0 是为了获得接近纯黑白的图像。
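A Swift sketch of the same idea (the function name almostBlackAndWhite is hypothetical); rendering through a CIContext makes the result CGImage-backed:
import CoreImage
import UIKit
// Same approach in Swift: zero saturation plus very high contrast
// yields an almost pure black-and-white image.
func almostBlackAndWhite(_ image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let output = ciImage.applyingFilter("CIColorControls",
                                        parameters: [kCIInputSaturationKey: 0.0,
                                                     kCIInputContrastKey: 10.0])
    guard let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}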

