Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/10627557/

Posted: 2020-10-21 09:10:34 | Source: igfitidea

Mac OS X: Drawing into an offscreen NSGraphicsContext using CGContextRef C functions has no effect. Why?

objective-c macos cocoa core-graphics quartz-2d

Asked by Todd Ditchendorf

Mac OS X 10.7.4

I am drawing into an offscreen graphics context created via +[NSGraphicsContext graphicsContextWithBitmapImageRep:].

When I draw into this graphics context using the NSBezierPath class, everything works as expected.

However, when I draw into this graphics context using the CGContextRef C functions, I see no results of my drawing. Nothing works.

For reasons I won't get into, I really need to draw using the CGContextRef functions (rather than the Cocoa NSBezierPath class).

My code sample is listed below. I am attempting to draw a simple "X": one stroke using NSBezierPath, one stroke using the CGContextRef C functions. The first stroke works, the second does not. What am I doing wrong?

NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;

NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
   initWithBitmapDataPlanes:NULL
   pixelsWide:imgSize.width
   pixelsHigh:imgSize.height
   bitsPerSample:8
   samplesPerPixel:4
   hasAlpha:YES
   isPlanar:NO
   colorSpaceName:NSDeviceRGBColorSpace
   bitmapFormat:NSAlphaFirstBitmapFormat
   bytesPerRow:0
   bitsPerPixel:0] autorelease];

// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext setCurrentContext:g];

NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];

CGContextRef ctx = [g graphicsPort];

// lock and draw
[img lockFocus];

// draw first stroke with Cocoa. this works!
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics. This doesn't work!
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);

[img unlockFocus];

Answered by Kurt Revis

You don't specify how you are looking at the results. I assume you are looking at the NSImage img and not the NSBitmapImageRep offscreenRep.

When you call [img lockFocus], you are changing the current NSGraphicsContext to be a context that draws into img. So the NSBezierPath drawing goes into img, and that's what you see. The CG drawing goes into offscreenRep, which you aren't looking at.

Instead of locking focus onto an NSImage and drawing into it, create an NSImage and add the offscreenRep as one of its reps.

NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;

NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
   initWithBitmapDataPlanes:NULL
   pixelsWide:imgSize.width
   pixelsHigh:imgSize.height
   bitsPerSample:8
   samplesPerPixel:4
   hasAlpha:YES
   isPlanar:NO
   colorSpaceName:NSDeviceRGBColorSpace
   bitmapFormat:NSAlphaFirstBitmapFormat
   bytesPerRow:0
   bitsPerPixel:0] autorelease];

// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:g];

// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics
CGContextRef ctx = [g graphicsPort];    
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);

// done drawing, so set the current context back to what it was
[NSGraphicsContext restoreGraphicsState];

// create an NSImage and add the rep to it    
NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];
[img addRepresentation:offscreenRep];

// then go on to save or view the NSImage
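For the "save" step, the bitmap rep can be encoded directly. A minimal Swift sketch (the `writePNG` helper name is my own, not from the answer; the AppKit calls are standard):

```swift
import AppKit

// Minimal sketch: encode an NSBitmapImageRep as PNG and write it to disk.
// The helper name is illustrative; representation(using:properties:) is
// the standard NSBitmapImageRep encoding API.
func writePNG(_ rep: NSBitmapImageRep, to url: URL) throws {
    guard let data = rep.representation(using: .png, properties: [:]) else {
        throw CocoaError(.fileWriteUnknown)
    }
    try data.write(to: url)
}
```

The same call with `.tiff` or `.jpeg` works for other formats.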

Answered by Mecki

I wonder why everyone writes such complicated code for drawing to an image. Unless you care about the exact bitmap representation of an image (and usually you don't!), there is no need to create one. You can just create a blank image and draw directly into it. In that case the system will create an appropriate bitmap representation (or maybe a PDF representation, or whatever the system believes is more suitable for drawing).

The documentation of the init method

- (instancetype)initWithSize:(NSSize)aSize

which has existed since Mac OS X 10.0 and still isn't deprecated, clearly says:

After using this method to initialize an image object, you are expected to provide the image contents before trying to draw the image. You might lock focus on the image and draw to the image, or you might explicitly add an image representation that you created.

So here's how I would have written that code:

NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSImage * image = [[NSImage alloc] initWithSize:imgRect.size];

[image lockFocus];
// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgRect.size.width, imgRect.size.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
[image unlockFocus];

That's all folks.

graphicsPort is actually a void *:

@property (readonly) void * graphicsPort 

and documented as

The low-level, platform-specific graphics context represented by the graphic port.

Which could be pretty much anything, but the final note says

In OS X, this is the Core Graphics context, a CGContextRef object (opaque type).

This property has been deprecated in 10.10 in favor of the new property

@property (readonly) CGContextRef CGContext

which is only available in 10.10 and later. If you have to support older systems, it's fine to still use graphicsPort.

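If you need to support both sides of that 10.10 boundary, one option is a small availability-checked accessor. A hedged sketch in Swift (the function name is illustrative, not an AppKit API):

```swift
import AppKit

// Sketch: obtain the CGContext from an NSGraphicsContext on old and new
// systems alike. On 10.10+ use the typed `cgContext` property; earlier,
// reinterpret the untyped `graphicsPort` pointer (expect a deprecation
// warning when building against a modern SDK).
func coreGraphicsContext(of g: NSGraphicsContext) -> CGContext {
    if #available(macOS 10.10, *) {
        return g.cgContext
    } else {
        return Unmanaged<CGContext>.fromOpaque(g.graphicsPort).takeUnretainedValue()
    }
}
```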

Answered by Vlad

Here are 3 ways of drawing the same image (Swift 4).

The method suggested by @Mecki produces an image without blurring artefacts (like blurred curves). The other two approaches below can show such artefacts, but this can be fixed by adjusting CGContext settings (not included in this example).

public struct ImageFactory {

   public static func image(size: CGSize, fillColor: NSColor, rounded: Bool = false) -> NSImage? {
      let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
      return drawImage(size: size) { context in
         if rounded {
            let radius = min(size.height, size.width)
            // Note: `NSBezierPath.cgPath` is assumed here; AppKit only added it in
            // macOS 14, so earlier systems need a custom conversion extension.
            let path = NSBezierPath(roundedRect: rect, xRadius: 0.5 * radius, yRadius: 0.5 * radius).cgPath
            context.addPath(path)
            context.clip()
         }
         context.setFillColor(fillColor.cgColor)
         context.fill(rect)
      }
   }

}

extension ImageFactory {

   private static func drawImage(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      return drawImageInLockedImageContext(size: size, drawingCalls: drawingCalls)
   }

   private static func drawImageInLockedImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      let image = NSImage(size: size)
      image.lockFocus()
      guard let context = NSGraphicsContext.current else {
         image.unlockFocus()
         return nil
      }
      drawingCalls(context.cgContext)
      image.unlockFocus()
      return image
   }

   // Has scaling or antialiasing issues, like blurred curves.
   private static func drawImageInBitmapImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      guard let offscreenRep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(size.width),
                                                pixelsHigh: Int(size.height), bitsPerSample: 8,
                                                samplesPerPixel: 4, hasAlpha: true, isPlanar: false,
                                                colorSpaceName: .deviceRGB, bytesPerRow: 0,
                                                bitsPerPixel: 0) else {
         return nil
      }
      guard let context = NSGraphicsContext(bitmapImageRep: offscreenRep) else {
         return nil
      }
      NSGraphicsContext.saveGraphicsState()
      NSGraphicsContext.current = context
      drawingCalls(context.cgContext)
      NSGraphicsContext.restoreGraphicsState()
      let img = NSImage(size: size)
      img.addRepresentation(offscreenRep)
      return img
   }

   // Has scaling or antialiasing issues, like blurred curves.
   private static func drawImageInCGContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      let colorSpace = CGColorSpaceCreateDeviceRGB()
      let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
      guard let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8,
                                    bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
         return nil
      }
      drawingCalls(context)
      guard let image = context.makeImage() else {
         return nil
      }
      return NSImage(cgImage: image, size: size)
   }
}
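To sanity-check what any of these variants actually rendered, you can read a pixel back from the result. A small sketch (the helper name is mine; `colorAt(x:y:)` is standard NSBitmapImageRep API):

```swift
import AppKit

// Sketch: read one pixel's color from an NSImage by round-tripping through
// its TIFF data into a bitmap rep. Helper name is illustrative.
func pixelColor(of image: NSImage, x: Int, y: Int) -> NSColor? {
    guard let tiff = image.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff) else { return nil }
    return rep.colorAt(x: x, y: y)
}
```

Note that on a Retina system the backing rep may have more pixels than the image has points, so pixel coordinates and point coordinates can differ by the scale factor.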

Answered by Robin Stewart

Swift 4: I use this code, which replicates the convenient API from UIKit (but runs on macOS):

public class UIGraphicsImageRenderer {
    let size: CGSize

    init(size: CGSize) {
        self.size = size
    }

    func image(actions: (CGContext) -> Void) -> NSImage {
        let image = NSImage(size: size)
        image.lockFocusFlipped(true)
        actions(NSGraphicsContext.current!.cgContext)
        image.unlockFocus()
        return image
    }
}

Usage:

let renderer = UIGraphicsImageRenderer(size: imageSize)
let image = renderer.image { ctx in
    // Drawing commands here
}

Answered by FrankByte.com

The solution by @Robin Stewart worked well for me. I was able to condense it to an NSImage extension.

extension NSImage {
    convenience init(size: CGSize, actions: (CGContext) -> Void) {
        self.init(size: size)
        lockFocusFlipped(false)
        actions(NSGraphicsContext.current!.cgContext)
        unlockFocus()
    }
}

Usage:

let image = NSImage(size: CGSize(width: 100, height: 100), actions: { ctx in
    // Drawing commands here for example:
    // ctx.setFillColor(.white)
    // ctx.fill(pageRect)
})