Don't Double my money: on Swift Numeric types
I've seen many iOS devs (especially new ones, but also seasoned ones) who do not give much thought to numerical data types. The common heuristic is straightforward: need to handle numerical data? Just use the all-round, always precise and ready-to-go Double! Maybe you even know that Float is faster than Double at the expense of precision, and that's it. This approach works well most of the time, until it doesn't, and you enter the rabbit hole of floating-point math shenanigans that we all love:
import XCTest

final class NumericTests: XCTestCase {
    func testSumOfDoubles() {
        let sum: Double = 0.1 + 0.2
        XCTAssertEqual(sum, 0.3) // ❌ "0.30000000000000004 is not equal to 0.3"
    }
}
Swift Numeric Types Overview
Swift offers 15 types that conform to the Numeric protocol, the main interface defining numeric types. This protocol ensures that any conforming type supports multiplication (required by Numeric itself), addition (inherited from the AdditiveArithmetic protocol), equality checks (through Equatable), and the verbosely named ExpressibleByIntegerLiteral, which means it can be safely initialized with an integer literal:
let percentage: Double = 4 // Percentage equals 4.0
let count = 12 // Type is inferred as 'Int'
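Those requirements are what make generic numeric code possible. As a minimal sketch (the function name is mine, not from the standard library), anything Numeric can be squared and summed without knowing the concrete type:
// Works for Int, Double, UInt8... any Numeric type,
// because it only needs *, + and an integer literal (0).
func sumOfSquares<T: Numeric>(_ values: [T]) -> T {
    values.reduce(0) { $0 + $1 * $1 }
}

print(sumOfSquares([1, 2, 3]))  // 14, inferred as Int
print(sumOfSquares([1.5, 2.5])) // 8.5, inferred as Double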
You can check them all here for completeness, but you don't really need to know them by heart. You can probably deduce what most of them do by following these rules:
- Double and Float variants are used for floating-point arithmetic, conforming to the IEEE Standard for Floating-Point Arithmetic (IEEE 754). These types are represented in binary (base-2). Most developers are familiar with floating-point precision and some of its quirks, like the one in the snippet at the beginning of the article.
- Int and its variants refer to integer value types, representing whole numbers.
- The types prefixed by U (short for unsigned) do not support negative numbers (e.g. -1 and -2 cannot be represented by unsigned types).
- The types can also be suffixed by the number of bits used to store them. More bits mean larger representable numbers but also higher memory usage. They are named after their bit width (e.g. Int8, Int16).
For example, we now can guess that the type UInt64 represents a 64-bit unsigned number.
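A quick playground check (a sketch, not an exhaustive tour) confirms these naming rules, and shows the failable initializer the standard library offers when a value does not fit:
print(Int8.max)   // 127: 8 bits, one reserved for the sign
print(UInt8.max)  // 255: 8 bits, no sign
print(UInt64.max) // 18446744073709551615
print(UInt8(exactly: -1) as Any) // nil: unsigned types cannot hold -1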
Except in scenarios when you are interfacing with C, C++ or even some Obj-C libraries, you don't really bump into all these Numeric types, and can live a worry-free life by only knowing when to use regular Int and Double.
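When you do cross that boundary, the standard library already bridges the C types onto these fixed-width ones through typealiases, for example:
// A few of the C interop typealiases Swift ships with
let width: CInt = 640          // CInt is a typealias for Int32
let flag: CChar = 1            // CChar is a typealias for Int8
let mask: CUnsignedInt = 0xFF  // CUnsignedInt is a typealias for UInt32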
And what about Decimal?
The problem with this train of thought is the non-obvious oversight of the Decimal type, which does not conform to the Numeric protocol directly, but through SignedNumeric, a protocol that refines Numeric.
The biggest difference is that Decimal is the only one of the types mentioned without a binary representation (it conforms to neither BinaryInteger nor BinaryFloatingPoint). Instead, it is the only one that supports base-10 arithmetic: you know, the one we generally use in our daily life.
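That single difference is enough to rescue the failing test from the beginning of the article. A minimal sketch (using the Decimal(string:) initializer, which parses the value in base-10 rather than going through a Double first):
import Foundation

let sum = Decimal(string: "0.1")! + Decimal(string: "0.2")!
print(sum == Decimal(string: "0.3")!) // true: exact base-10 arithmetic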
Rules of thumb
- On Decimal: Base-2 types falter when performing human-centric base-10 math. They make equality tests harder, and the errors compound when rounding. This limitation makes Decimal the preferred choice for applications needing precise arithmetic akin to everyday human calculations, such as when dealing with money 💸 (financial apps) or weights ⚖️ (recipe apps and the like). If the domain of your app lies somewhere here, your life will be much easier adopting Decimal (see the sketch after this list).
- On Double: Our base-10 system struggles with irrational or periodic numbers anyway; a fraction like 1/3 has no exact representation in either base, so for those values Decimal buys you nothing and you can stick to Double. Or even better: if you do not have any specific need for base-10 math, you can safely stick to Doubles all the way.
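As a sketch of the money case (the prices are made up for illustration), note one Decimal trap worth knowing: a floating-point literal still travels through Double before it becomes a Decimal, so prefer integer literals or Decimal(string:) when exactness matters:
import Foundation

let risky: Decimal = 3.133            // ⚠️ literal converted via Double; may lose exactness
let price = Decimal(string: "19.99")! // parsed in base-10, exact

var total = price * 3                 // 59.97, exactly
var rounded = Decimal()
NSDecimalRound(&rounded, &total, 2, .bankers) // round to 2 decimal places
print(rounded)                        // 59.97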
Appendix: more floating-point shenanigans!
// Unexpected roundings
let number1 = 15.999999999999998
let number2 = 15.9999999999999999 // one digit beyond what Double can distinguish from 16
print(Int(number1)) // prints 15
print(Int(number2)) // prints 16
// Invisible small increments
let number3 = 1.00000000000000002
let number4 = 1.00000000000000003
print(number3 == 1.0) // prints true
print(number4 == 1.0) // prints true
// Equality illusions
let a = 0.1 + 0.2
let b = 0.3
let c = 0.1 + 0.2 - 0.3
print(a == b) // prints false
print(c == 0.0) // prints false: c is 5.551115123125783e-17, not zero
print(a - b == c) // prints true: both sides leave the same tiny remainder
// Invisible small numbers
let bigNumber = 1e16
let smallNumber = 1.0
print(bigNumber + smallNumber == bigNumber) // prints true
print((bigNumber + smallNumber) - bigNumber) // prints 0.0
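If you are stuck with Double, the standard mitigation (sketched below) is comparing against an explicit tolerance instead of using ==; XCTest even builds this in through the accuracy parameter:
let x = 0.1 + 0.2

// Tolerance-based comparison instead of exact equality
print(abs(x - 0.3) < 1e-9) // prints true

// In tests:
// XCTAssertEqual(x, 0.3, accuracy: 1e-9) // ✅ passes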
I hope you found this article helpful. For any questions, comments, or feedback, don't hesitate to reach out: connect with me through any of the social links below, or drop me an e-mail at marcos@varanios.com.
Thank you for reading! 🥰