iOS: Swift converting between UInt and Int

Note: this is a translation of a popular StackOverflow Q&A, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow, original at http://stackoverflow.com/questions/26086440/

Swift converting between UInt and Int

Tags: ios, swift, int

Asked by Akshat

I have the following methods

var photos: [MWPhoto] = [MWPhoto]()

func numberOfPhotosInPhotoBrowser(photoBrowser: MWPhotoBrowser!) -> UInt {

    return self.photos.count
}

func photoBrowser(photoBrowser: MWPhotoBrowser!, photoAtIndex index: UInt) -> MWPhotoProtocol! {

    return self.photos[index]
}

However, for the first method I get "Int is not convertible to UInt" (since self.photos.count is an Int),

and for the second, "UInt is not convertible to Int", since the subscript of self.photos can only take an Int for its index.

How can I correctly convert the UInt to Int and back?

Answered by Sandeep

In the first one, the return type is UInt, but you return an Int, since count returns an Int.

Basically, UInt has initializers that take various value-type arguments, such as Int, CGFloat, Double, or even String, and return a new UInt value.

  • UInt(8) // result is a UInt with value 8
  • UInt(20.12) // result is a UInt with value 20
  • UInt(Double(10)) // result is a UInt with value 10
  • UInt("10") // result is a UInt with value 10; note this is a failable initializer, so it yields either a value or nil
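
Because the String initializer is failable, its result is an Optional and should be unwrapped before use. A minimal sketch of handling it (the input strings here are just made-up examples):

if let parsed = UInt("10") {
    print(parsed)               // 10
} else {
    print("not a valid UInt")
}

let bad = UInt("abc")           // nil, since "abc" is not numeric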

func numberOfPhotosInPhotoBrowser(photoBrowser: MWPhotoBrowser!) -> UInt {

    return UInt(self.photos.count)
}

For the second one, the array subscript expects an Int value where you are passing a UInt, so create a new Int value from the UInt:

func photoBrowser(photoBrowser: MWPhotoBrowser!, photoAtIndex index: UInt) -> MWPhotoProtocol! {

    return self.photos[Int(index)]
}

Answered by appdevelopersspotlight

// initializing Int
var someInt: Int = 8
someInt

// Converting Int to UInt
var someIntToUInt: UInt = UInt(someInt)
someIntToUInt

// initializing UInt   
var someUInt: UInt = 10
someUInt

// Converting UInt to Int   
var someUIntToInt: Int = Int(someUInt)
someUIntToInt
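
One caveat this answer doesn't mention: the plain UInt(_:) initializer traps at runtime when the value doesn't fit, so UInt(someInt) crashes if someInt is negative. If that's possible, UInt(exactly:) returns an Optional instead; a small sketch under that assumption:

let negative: Int = -1
// UInt(negative) would trap at runtime
if let converted = UInt(exactly: negative) {
    print(converted)
} else {
    print("not representable as UInt")   // this branch runs for -1
}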

Answered by coldfire

If you want an unsigned integer from a negative value, use UInt(bitPattern:).

let intVal = -1
let uintVal = UInt(bitPattern: intVal) // uintVal == 0xffffffffffffffff on a 64-bit platform
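
Since this is a pure reinterpretation of the bits, Int(bitPattern:) converts back losslessly; a quick round-trip check (my example, not from the original answer):

let roundTripped = Int(bitPattern: uintVal)   // -1 again
assert(roundTripped == intVal)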

Answered by Esqarrouth

Add this anywhere outside of a class:

extension UInt {
    /// SwiftExtensionKit
    var toInt: Int { return Int(self) }
}

Then just call:

self.photos[index.toInt]
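
If you also need the conversion in the other direction, a matching extension on Int (a hypothetical companion following the same pattern, not part of the original answer) might look like:

extension Int {
    /// Hypothetical counterpart to the UInt extension above
    var toUInt: UInt { return UInt(self) }   // traps if self is negative
}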

Answered by RenniePet

I got so frustrated with Swift's cryptic method parameters bitPattern: and truncatingBitPattern:, and my inability to remember which one to use when, that I created the following class containing a large number of conversion methods.

I'm not necessarily recommending that you include this in your program. I'm sure many people will say that Swift is trying to protect us from ourselves and that sabotaging that effort is dumb. So maybe you should just keep this file somewhere as a kind of cheat sheet so you can quickly determine how to do a conversion, and copy the parameters into your program when needed.

Incidentally, JDI stands for "just do it".

/// Class containing a large number of static methods to convert an Int to a UInt or vice-versa, and 
/// also to perform conversions between different bit sizes, for example UInt32 to UInt8.
///
/// Many of these "conversions" are trivial, and are only included for the sake of completeness.
///
/// A few of the conversions involving Int and UInt can give different results when run on 32-bit
/// and 64-bit systems. All of the conversion where the bit size of both the source and the target
/// are specified will always give the same result independent of platform.
public class JDI {

   // MARK: - To signed Int

   // To Int8
   public static func ToInt8(_ x : Int8) -> Int8 {
      return x
   }
   public static func ToInt8(_ x : Int32) -> Int8 {
      return Int8(truncatingBitPattern: x)
   }
   public static func ToInt8(_ x : Int64) -> Int8 {
      return Int8(truncatingBitPattern: x)
   }
   public static func ToInt8(_ x : Int) -> Int8 {
      return Int8(truncatingBitPattern: x)
   }
   public static func ToInt8(_ x : UInt8) -> Int8 {
      return Int8(bitPattern: x)
   }
   public static func ToInt8(_ x : UInt32) -> Int8 {
      return Int8(truncatingBitPattern: x)
   }
   public static func ToInt8(_ x : UInt64) -> Int8 {
      return Int8(truncatingBitPattern: x)
   }
   public static func ToInt8(_ x : UInt) -> Int8 {
      return Int8(truncatingBitPattern: x)
   }

   // To Int32
   public static func ToInt32(_ x : Int8) -> Int32 {
      return Int32(x)
   }
   public static func ToInt32(_ x : Int32) -> Int32 {
      return x
   }
   public static func ToInt32(_ x : Int64) -> Int32 {
      return Int32(truncatingBitPattern: x)
   }
   public static func ToInt32(_ x : Int) -> Int32 {
      return Int32(truncatingBitPattern: x)
   }
   public static func ToInt32(_ x : UInt8) -> Int32 {
      return Int32(x)
   }
   public static func ToInt32(_ x : UInt32) -> Int32 {
      return Int32(bitPattern: x)
   }
   public static func ToInt32(_ x : UInt64) -> Int32 {
      return Int32(truncatingBitPattern: x)
   }
   public static func ToInt32(_ x : UInt) -> Int32 {
      return Int32(truncatingBitPattern: x)
   }

   // To Int64
   public static func ToInt64(_ x : Int8) -> Int64 {
      return Int64(x)
   }
   public static func ToInt64(_ x : Int32) -> Int64 {
      return Int64(x)
   }
   public static func ToInt64(_ x : Int64) -> Int64 {
      return x
   }
   public static func ToInt64(_ x : Int) -> Int64 {
      return Int64(x)
   }
   public static func ToInt64(_ x : UInt8) -> Int64 {
      return Int64(x)
   }
   public static func ToInt64(_ x : UInt32) -> Int64 {
      return Int64(x)
   }
   public static func ToInt64(_ x : UInt64) -> Int64 {
      return Int64(bitPattern: x)
   }
   public static func ToInt64(_ x : UInt) -> Int64 {
      return Int64(bitPattern: UInt64(x))  // Does not extend high bit of 32-bit input
   }

   // To Int
   public static func ToInt(_ x : Int8) -> Int {
      return Int(x)
   }
   public static func ToInt(_ x : Int32) -> Int {
      return Int(x)
   }
   public static func ToInt(_ x : Int64) -> Int {
      return Int(truncatingBitPattern: x)
   }
   public static func ToInt(_ x : Int) -> Int {
      return x
   }
   public static func ToInt(_ x : UInt8) -> Int {
      return Int(x)
   }
   public static func ToInt(_ x : UInt32) -> Int {
      if MemoryLayout<Int>.size == MemoryLayout<Int32>.size {
         return Int(Int32(bitPattern: x))  // For 32-bit systems, non-authorized interpretation
      }
      return Int(x)
   }
   public static func ToInt(_ x : UInt64) -> Int {
      return Int(truncatingBitPattern: x)
   }
   public static func ToInt(_ x : UInt) -> Int {
      return Int(bitPattern: x)
   }

   // MARK: - To unsigned Int

   // To UInt8
   public static func ToUInt8(_ x : Int8) -> UInt8 {
      return UInt8(bitPattern: x)
   }
   public static func ToUInt8(_ x : Int32) -> UInt8 {
      return UInt8(truncatingBitPattern: x)
   }
   public static func ToUInt8(_ x : Int64) -> UInt8 {
      return UInt8(truncatingBitPattern: x)
   }
   public static func ToUInt8(_ x : Int) -> UInt8 {
      return UInt8(truncatingBitPattern: x)
   }
   public static func ToUInt8(_ x : UInt8) -> UInt8 {
      return x
   }
   public static func ToUInt8(_ x : UInt32) -> UInt8 {
      return UInt8(truncatingBitPattern: x)
   }
   public static func ToUInt8(_ x : UInt64) -> UInt8 {
      return UInt8(truncatingBitPattern: x)
   }
   public static func ToUInt8(_ x : UInt) -> UInt8 {
      return UInt8(truncatingBitPattern: x)
   }

   // To UInt32
   public static func ToUInt32(_ x : Int8) -> UInt32 {
      return UInt32(bitPattern: Int32(x))  // Extend sign bit, assume minus input significant
   }
   public static func ToUInt32(_ x : Int32) -> UInt32 {
      return UInt32(bitPattern: x)
   }
   public static func ToUInt32(_ x : Int64) -> UInt32 {
      return UInt32(truncatingBitPattern: x)
   }
   public static func ToUInt32(_ x : Int) -> UInt32 {
      return UInt32(truncatingBitPattern: x)
   }
   public static func ToUInt32(_ x : UInt8) -> UInt32 {
      return UInt32(x)
   }
   public static func ToUInt32(_ x : UInt32) -> UInt32 {
      return x
   }
   public static func ToUInt32(_ x : UInt64) -> UInt32 {
      return UInt32(truncatingBitPattern: x)
   }
   public static func ToUInt32(_ x : UInt) -> UInt32 {
      return UInt32(truncatingBitPattern: x)
   }

   // To UInt64
   public static func ToUInt64(_ x : Int8) -> UInt64 {
      return UInt64(bitPattern: Int64(x))  // Extend sign bit, assume minus input significant
   }
   public static func ToUInt64(_ x : Int32) -> UInt64 {
      return UInt64(bitPattern: Int64(x))  // Extend sign bit, assume minus input significant
   }
   public static func ToUInt64(_ x : Int64) -> UInt64 {
      return UInt64(bitPattern: x)
   }
   public static func ToUInt64(_ x : Int) -> UInt64 {
      return UInt64(bitPattern: Int64(x))  // Extend sign bit if necessary, assume minus input significant
   }
   public static func ToUInt64(_ x : UInt8) -> UInt64 {
      return UInt64(x)
   }
   public static func ToUInt64(_ x : UInt32) -> UInt64 {
      return UInt64(x)
   }
   public static func ToUInt64(_ x : UInt64) -> UInt64 {
      return x
   }
   public static func ToUInt64(_ x : UInt) -> UInt64 {
      return UInt64(x)  // Does not extend high bit of 32-bit input
   }

   // To UInt
   public static func ToUInt(_ x : Int8) -> UInt {
      return UInt(bitPattern: Int(x))  // Extend sign bit, assume minus input significant
   }
   public static func ToUInt(_ x : Int32) -> UInt {
      return UInt(truncatingBitPattern: Int64(x))  // Extend sign bit, assume minus input significant
   }
   public static func ToUInt(_ x : Int64) -> UInt {
      return UInt(truncatingBitPattern: x)
   }
   public static func ToUInt(_ x : Int) -> UInt {
      return UInt(bitPattern: x)
   }
   public static func ToUInt(_ x : UInt8) -> UInt {
      return UInt(x)
   }
   public static func ToUInt(_ x : UInt32) -> UInt {
      return UInt(x)
   }
   public static func ToUInt(_ x : UInt64) -> UInt {
      return UInt(truncatingBitPattern: x)
   }
   public static func ToUInt(_ x : UInt) -> UInt {
      return x
   }
}
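
Note that this class is written against Swift 3. In Swift 4 the truncatingBitPattern: initializers were replaced by init(truncatingIfNeeded:), while the bitPattern: initializers remain; a few sketched Swift 4+ equivalents of calls used above:

let a = Int8(truncatingIfNeeded: 257)          // 1, replaces Int8(truncatingBitPattern:)
let b = UInt32(bitPattern: Int32(-13))         // 0xfffffff3, unchanged
let c = Int(truncatingIfNeeded: UInt64.max)    // -1, replaces Int(truncatingBitPattern:)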

Here's some test code:

   public func doTest() {

      // To Int8

      assert(JDI.ToInt8(42 as Int8) == 42)
      assert(JDI.ToInt8(-13 as Int8) == -13)

      assert(JDI.ToInt8(42 as Int32) == 42)
      assert(JDI.ToInt8(257 as Int32) == 1)

      assert(JDI.ToInt8(42 as Int64) == 42)
      assert(JDI.ToInt8(257 as Int64) == 1)

      assert(JDI.ToInt8(42 as Int) == 42)
      assert(JDI.ToInt8(257 as Int) == 1)

      assert(JDI.ToInt8(42 as UInt8) == 42)
      assert(JDI.ToInt8(0xf3 as UInt8) == -13)

      assert(JDI.ToInt8(42 as UInt32) == 42)
      assert(JDI.ToInt8(0xfffffff3 as UInt32) == -13)

      assert(JDI.ToInt8(42 as UInt64) == 42)
      assert(JDI.ToInt8(UInt64.max - 12) == -13)

      assert(JDI.ToInt8(42 as UInt) == 42)
      assert(JDI.ToInt8(UInt.max - 12) == -13)

      // To Int32

      assert(JDI.ToInt32(42 as Int8) == 42)
      assert(JDI.ToInt32(-13 as Int8) == -13)

      assert(JDI.ToInt32(42 as Int32) == 42)
      assert(JDI.ToInt32(-13 as Int32) == -13)

      assert(JDI.ToInt32(42 as Int64) == 42)
      assert(JDI.ToInt32(Int64(Int32.min) - 1) == Int32.max)

      assert(JDI.ToInt32(42 as Int) == 42)
      assert(JDI.ToInt32(-13 as Int) == -13)

      assert(JDI.ToInt32(42 as UInt8) == 42)
      assert(JDI.ToInt32(0xf3 as UInt8) == 243)

      assert(JDI.ToInt32(42 as UInt32) == 42)
      assert(JDI.ToInt32(0xfffffff3 as UInt32) == -13)

      assert(JDI.ToInt32(42 as UInt64) == 42)
      assert(JDI.ToInt32(UInt64.max - 12) == -13)

      assert(JDI.ToInt32(42 as UInt) == 42)
      assert(JDI.ToInt32(UInt.max - 12) == -13)

      // To Int64

      assert(JDI.ToInt64(42 as Int8) == 42)
      assert(JDI.ToInt64(-13 as Int8) == -13)

      assert(JDI.ToInt64(42 as Int32) == 42)
      assert(JDI.ToInt64(-13 as Int32) == -13)

      assert(JDI.ToInt64(42 as Int64) == 42)
      assert(JDI.ToInt64(-13 as Int64) == -13)

      assert(JDI.ToInt64(42 as Int) == 42)
      assert(JDI.ToInt64(-13 as Int) == -13)

      assert(JDI.ToInt64(42 as UInt8) == 42)
      assert(JDI.ToInt64(0xf3 as UInt8) == 243)

      assert(JDI.ToInt64(42 as UInt32) == 42)
      assert(JDI.ToInt64(0xfffffff3 as UInt32) == 4294967283)

      assert(JDI.ToInt64(42 as UInt64) == 42)
      assert(JDI.ToInt64(UInt64.max - 12) == -13)

      assert(JDI.ToInt64(42 as UInt) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToInt64(UInt.max - 12) == 4294967283)  // For 32-bit systems
      #else
         assert(JDI.ToInt64(UInt.max - 12) == -13)  // For 64-bit systems
      #endif

      // To Int

      assert(JDI.ToInt(42 as Int8) == 42)
      assert(JDI.ToInt(-13 as Int8) == -13)

      assert(JDI.ToInt(42 as Int32) == 42)
      assert(JDI.ToInt(-13 as Int32) == -13)

      assert(JDI.ToInt(42 as Int64) == 42)
      assert(JDI.ToInt(-13 as Int64) == -13)

      assert(JDI.ToInt(42 as Int) == 42)
      assert(JDI.ToInt(-13 as Int) == -13)

      assert(JDI.ToInt(42 as UInt8) == 42)
      assert(JDI.ToInt(0xf3 as UInt8) == 243)

      assert(JDI.ToInt(42 as UInt32) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToInt(0xfffffff3 as UInt32) == -13)  // For 32-bit systems
      #else
         assert(JDI.ToInt(0xfffffff3 as UInt32) == 4294967283)  // For 64-bit systems
      #endif

      assert(JDI.ToInt(42 as UInt64) == 42)
      assert(JDI.ToInt(UInt64.max - 12) == -13)

      assert(JDI.ToInt(42 as UInt) == 42)
      assert(JDI.ToInt(UInt.max - 12) == -13)

      // To UInt8

      assert(JDI.ToUInt8(42 as Int8) == 42)
      assert(JDI.ToUInt8(-13 as Int8) == 0xf3)

      assert(JDI.ToUInt8(42 as Int32) == 42)
      assert(JDI.ToUInt8(-13 as Int32) == 0xf3)

      assert(JDI.ToUInt8(42 as Int64) == 42)
      assert(JDI.ToUInt8(-13 as Int64) == 0xf3)
      assert(JDI.ToUInt8(Int64.max - 12) == 0xf3)

      assert(JDI.ToUInt8(42 as Int) == 42)
      assert(JDI.ToUInt8(-13 as Int) == 0xf3)
      assert(JDI.ToUInt8(Int.max - 12) == 0xf3)

      assert(JDI.ToUInt8(42 as UInt8) == 42)
      assert(JDI.ToUInt8(0xf3 as UInt8) == 0xf3)

      assert(JDI.ToUInt8(42 as UInt32) == 42)
      assert(JDI.ToUInt8(0xfffffff3 as UInt32) == 0xf3)

      assert(JDI.ToUInt8(42 as UInt64) == 42)
      assert(JDI.ToUInt8(UInt64.max - 12) == 0xf3)

      assert(JDI.ToUInt8(42 as UInt) == 42)
      assert(JDI.ToUInt8(UInt.max - 12) == 0xf3)

      // To UInt32

      assert(JDI.ToUInt32(42 as Int8) == 42)
      assert(JDI.ToUInt32(-13 as Int8) == 0xfffffff3)

      assert(JDI.ToUInt32(42 as Int32) == 42)
      assert(JDI.ToUInt32(-13 as Int32) == 0xfffffff3)

      assert(JDI.ToUInt32(42 as Int64) == 42)
      assert(JDI.ToUInt32(-13 as Int64) == 0xfffffff3)
      assert(JDI.ToUInt32(Int64.max - 12) == 0xfffffff3)

      assert(JDI.ToUInt32(42 as Int) == 42)
      assert(JDI.ToUInt32(-13 as Int) == 0xfffffff3)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt32(Int.max - 12) == 0x7ffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt32(Int.max - 12) == 0xfffffff3)  // For 64-bit systems
      #endif

      assert(JDI.ToUInt32(42 as UInt8) == 42)
      assert(JDI.ToUInt32(0xf3 as UInt8) == 0xf3)

      assert(JDI.ToUInt32(42 as UInt32) == 42)
      assert(JDI.ToUInt32(0xfffffff3 as UInt32) == 0xfffffff3)

      assert(JDI.ToUInt32(42 as UInt64) == 42)
      assert(JDI.ToUInt32(UInt64.max - 12) == 0xfffffff3)

      assert(JDI.ToUInt32(42 as UInt) == 42)
      assert(JDI.ToUInt32(UInt.max - 12) == 0xfffffff3)

      // To UInt64

      assert(JDI.ToUInt64(42 as Int8) == 42)
      assert(JDI.ToUInt64(-13 as Int8) == 0xfffffffffffffff3)

      assert(JDI.ToUInt64(42 as Int32) == 42)
      assert(JDI.ToUInt64(-13 as Int32) == 0xfffffffffffffff3)

      assert(JDI.ToUInt64(42 as Int64) == 42)
      assert(JDI.ToUInt64(-13 as Int64) == 0xfffffffffffffff3)
      assert(JDI.ToUInt64(Int64.max - 12) == (UInt64.max >> 1) - 12)

      assert(JDI.ToUInt64(42 as Int) == 42)
      assert(JDI.ToUInt64(-13 as Int) == 0xfffffffffffffff3)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt64(Int.max - 12) == 0x7ffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt64(Int.max - 12) == 0x7ffffffffffffff3)  // For 64-bit systems
      #endif

      assert(JDI.ToUInt64(42 as UInt8) == 42)
      assert(JDI.ToUInt64(0xf3 as UInt8) == 0xf3)

      assert(JDI.ToUInt64(42 as UInt32) == 42)
      assert(JDI.ToUInt64(0xfffffff3 as UInt32) == 0xfffffff3)

      assert(JDI.ToUInt64(42 as UInt64) == 42)
      assert(JDI.ToUInt64(UInt64.max - 12) == 0xfffffffffffffff3)

      assert(JDI.ToUInt64(42 as UInt) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt64(UInt.max - 12) == 0xfffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt64(UInt.max - 12) == 0xfffffffffffffff3)  // For 64-bit systems
      #endif

      // To UInt

      assert(JDI.ToUInt(42 as Int8) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt(-13 as Int8) == 0xfffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt(-13 as Int8) == 0xfffffffffffffff3)  // For 64-bit systems
      #endif

      assert(JDI.ToUInt(42 as Int32) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt(-13 as Int32) == 0xfffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt(-13 as Int32) == 0xfffffffffffffff3)  // For 64-bit systems
      #endif

      assert(JDI.ToUInt(42 as Int64) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt(-13 as Int64) == 0xfffffff3)  // For 32-bit systems
         assert(JDI.ToUInt(Int64.max - 12) == 0xfffffff3)
      #else
         assert(JDI.ToUInt(-13 as Int64) == 0xfffffffffffffff3)  // For 64-bit systems
         assert(JDI.ToUInt(Int64.max - 12) == 0x7ffffffffffffff3)
      #endif

      assert(JDI.ToUInt(42 as Int) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt(Int.max - 12) == 0x7ffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt(Int.max - 12) == 0x7ffffffffffffff3)  // For 64-bit systems
      #endif

      assert(JDI.ToUInt(42 as UInt8) == 42)
      assert(JDI.ToUInt(0xf3 as UInt8) == 0xf3)

      assert(JDI.ToUInt(42 as UInt32) == 42)
      assert(JDI.ToUInt(0xfffffff3 as UInt32) == 0xfffffff3)

      assert(JDI.ToUInt(42 as UInt64) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt(UInt64.max - 12) == 0xfffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt(UInt64.max - 12) == 0xfffffffffffffff3)  // For 64-bit systems
      #endif

      assert(JDI.ToUInt(42 as UInt) == 42)
      #if (arch(i386) || arch(arm))
         assert(JDI.ToUInt(UInt.max - 12) == 0xfffffff3)  // For 32-bit systems
      #else
         assert(JDI.ToUInt(UInt.max - 12) == 0xfffffffffffffff3)  // For 64-bit systems
      #endif

      print("\nTesting JDI complete.\n")
   }