00:06 Nick Lockwood has written a great tutorial series in which he
recreates Wolfenstein 3D in
Swift. He'll join us for some episodes to walk through parts of it.
00:25 Nick grew up with Wolfenstein, and it was one of the games that
inspired him to start programming. He's always wanted to recreate it using the
original look and techniques, and Swift, with its balance of high-level and
low-level APIs, finally offered an opportunity for him to realize this goal.
00:54 For this series, the idea is to not use any abstractions, such as
Core Graphics, to draw our graphics, but rather to build our own drawing
primitives from the ground up. The reason for this is — aside from wanting to
avoid heavy abstractions on top of our own code — that we theoretically end up
with a project that can run on any platform that supports Swift, including Linux
and perhaps even Windows.
01:35 As a disclaimer: this series won't teach us a lot about modern game
development. Rather, this is how games might have been built in the early 90s.
Modern games make heavy use of graphics cards and GPUs, whereas our project will
run solely on the CPU, which means it'll quickly drain a phone battery.
The Goal
02:05 Aside from a few small modifications, we'll be following selected
parts of Nick's tutorials pretty closely. Reading and coding along with the
extensive tutorials is quite a bit of work, so we thought it would be
interesting to see some parts of it on video as well. The full series, called
Retro Rampage, can be found on
GitHub.
03:07 Let's first take a look at what the final result looks like after
completing the entire series. The game opens with an enemy monster immediately
coming at us. After we shoot that first enemy, we walk through the maze and open
sliding doors until we're killed by another enemy.
03:42 Our version of the game won't be as complete as this demo, but
we'll still learn about some interesting aspects, like drawing a scene in 3D.
03:57 We start out with a more-or-less empty project with only a bit of
UIKit code that sets up an image view:
class ViewController: UIViewController {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpImageView()
    }
}
extension ViewController {
    func setUpImageView() {
        view.addSubview(imageView)
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
        imageView.leadingAnchor.constraint(equalTo: view.leadingAnchor).isActive = true
        imageView.widthAnchor.constraint(equalTo: view.widthAnchor).isActive = true
        imageView.heightAnchor.constraint(equalTo: view.heightAnchor).isActive = true
        imageView.contentMode = .scaleAspectFit
        imageView.backgroundColor = .black
    }
}
04:14 And that's basically all the UIKit code we'll write. The idea is
to let UIKit put an image on the screen, and after that, we'll have complete
control over what goes into that image, and we'll update this image at 60 frames
per second.
Color
04:46 Typically, when we put an image on the screen in iOS, we're using a
UIImage, which sits on top of CGImage. But we want to peel away the UIKit layers
and set up some basic building blocks. The most primitive type we can get
started with is a type that represents a color.
05:32 UIColor gives us an abstraction over a color using four floating-point
values, which represent red, green, blue, and alpha. This is already more
abstract than what goes on under the hood, and we want our type to look more
like the raw data, which means we will store each of the four components in
eight bits:
struct Color {
    var r, g, b, a: UInt8
}
06:42 Then we can write some static properties that let us easily create
specific color values:
extension Color {
    static let clear = Color(r: 0, g: 0, b: 0, a: 0)
    static let black = Color(r: 0, g: 0, b: 0)
    static let white = Color(r: 255, g: 255, b: 255)
    static let gray = Color(r: 192, g: 192, b: 192)
    static let red = Color(r: 255, g: 0, b: 0)
    static let green = Color(r: 0, g: 255, b: 0)
    static let blue = Color(r: 0, g: 0, b: 255)
}
07:21 By setting a default value of 255 for the alpha component, we make the
auto-generated initializer use this default value as well, so we can create an
opaque color by omitting the alpha parameter:
struct Color {
    var r, g, b: UInt8
    var a: UInt8 = 255
}
Bitmap
08:06 Now that we have Color, we can move on to defining a bitmap type. Images
are typically stored as a contiguous array of color values. Since we want to
copy our image type into the image view as efficiently as possible, we model our
bitmap type in a way that is very close to what its memory storage looks like:
struct Bitmap {
    var pixels: [Color]
}
09:01 An array of pixels alone doesn't tell us what the dimensions of
the image are; we need to specify either the width or the height. We choose to
store the image's width, and we provide the height as a computed property:
struct Bitmap {
    let width: Int
    var pixels: [Color]

    var height: Int {
        pixels.count / width
    }
}
09:33 Even though the computer is happy to work with a linear array of
pixels, it doesn't provide a very convenient interface for us as developers. If
we want to set a certain pixel's color, it would be much handier if we could
refer to that pixel using two-dimensional coordinates:
struct Bitmap {
    subscript(x: Int, y: Int) -> Color {
        get { pixels[y * width + x] }
        set { pixels[y * width + x] = newValue }
    }
}
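10:12 To see the index math in action (a quick sketch of our own, using the memberwise initializer Swift generates for us before we add a custom one below): in an 8-pixel-wide bitmap, the pixel at coordinates (3, 2) lives at linear index 2 * 8 + 3 = 19:
var bitmap = Bitmap(width: 8, pixels: Array(repeating: .white, count: 64))
bitmap[3, 2] = .red // equivalent to bitmap.pixels[2 * 8 + 3] = .red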
11:04 In order to conveniently create a Bitmap of a single color, we write an
initializer that creates an array of pixels from a given width, height, and
color value:
struct Bitmap {
    init(width: Int, height: Int, color: Color) {
        self.width = width
        pixels = Array(repeating: color, count: width * height)
    }
}
12:11 Now we can create a Bitmap in viewDidLoad:
let bitmap = Bitmap(width: 8, height: 8, color: .white)
13:11 And in order to turn the bitmap into an image that we can pass to the
image view, we paste in a UIImage initializer:
extension UIImage {
    convenience init?(bitmap: Bitmap) {
        let alphaInfo = CGImageAlphaInfo.premultipliedLast
        let bytesPerPixel = MemoryLayout<Color>.stride
        let bytesPerRow = bitmap.width * bytesPerPixel

        guard let providerRef = CGDataProvider(data: Data(bytes: bitmap.pixels, count: bitmap.height * bytesPerRow) as CFData) else {
            return nil
        }

        guard let cgImage = CGImage(
            width: bitmap.width,
            height: bitmap.height,
            bitsPerComponent: 8,
            bitsPerPixel: bytesPerPixel * 8,
            bytesPerRow: bytesPerRow,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGBitmapInfo(rawValue: alphaInfo.rawValue),
            provider: providerRef,
            decode: nil,
            shouldInterpolate: true,
            intent: .defaultIntent
        ) else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
13:45 Let's roughly go over these lines. First, the alphaInfo variable tells the
CGImage we're creating that the alpha component is stored in the last (i.e. the
least significant) bits of each pixel's color value and that the red, green, and
blue components are already multiplied by the alpha component.
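To illustrate what "premultiplied" means (our own example; it doesn't come up in practice here, since all of our colors are either fully opaque or fully transparent): a 50%-transparent red isn't stored as (255, 0, 0, 128); the color channels are scaled by the alpha first:
// Premultiplied: r, g, and b are already scaled by a / 255.
let translucentRed = Color(r: 128, g: 0, b: 0, a: 128)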
15:41 We define the number of bytes per pixel by using the memory size of the
Color type. We know this is four bytes, but it's best practice to get this value
from MemoryLayout's stride property. By multiplying this number of bytes by the
bitmap's width, we get the number of bytes per row.
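As a quick sanity check of that arithmetic (our own sketch, not part of the initializer): Color consists of four UInt8 fields, so its stride is 4 bytes, and a row of our 8-pixel-wide bitmap takes up 8 * 4 = 32 bytes:
assert(MemoryLayout<Color>.stride == 4)      // four UInt8 components
assert(8 * MemoryLayout<Color>.stride == 32) // bytes per row for width 8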
16:16 Then we create a CGDataProvider that takes the array of pixels and turns
it into raw data for the CGImage.
16:38 Finally, we use all the above variables as parameters to create a CGImage,
from which we subsequently create a UIImage. Doing so takes quite a few lines,
and we don't see this type of code every day, but when we walk through it line
by line, we can see that the code doesn't hide many surprises; we're simply
following the documentation.
17:10 Now let's use the initializer to get our image on the screen. We
make the bitmap's first pixel blue, and we pass the bitmap to the image view:
class ViewController: UIViewController {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpImageView()

        var bitmap = Bitmap(width: 8, height: 8, color: .white)
        bitmap[0, 0] = Color.blue

        imageView.image = UIImage(bitmap: bitmap)
    }
}
18:02 We see the blue pixel in the top-left corner, but it's blurred
because the image is stretched out to fit the screen. This look is the result of
UIKit's default behavior, which tries to make lower-res images look better by
applying some filters. In this case, we want to actually see the individual
pixels because it makes for a retro look, so we tell the image view to use the
nearest neighbor algorithm as its magnification filter. This makes the blue
pixel show up as a sharp rectangle:
extension ViewController {
    func setUpImageView() {
        imageView.layer.magnificationFilter = .nearest
    }
}
Update Loop
19:20 As this episode's last step, we will set up the basic mechanism of
updating the game state in order to make something move onscreen. Currently,
we're creating and displaying our image in viewDidLoad only once. But we
actually want to set up a render loop.
19:58 Under the hood, iOS is already using a run loop to constantly refresh the
screen, and we want to tie into this process. We could use a Timer, but the
recommended approach is to use a CADisplayLink instead. CADisplayLink is
basically a timer that is synchronized to the refresh rate of the screen, and it
prevents a jittery effect that could occur when game engine updates and screen
updates get out of sync.
20:22 How often the display link calls us depends on the refresh rate of the
screen. This is usually 60 frames per second, although some of the newer iPads
might even reach 120 fps. The display link lets us specify a multiplier in order
to update every single frame, every two frames, and so on. But we want to let it
go as fast as it can, so we'll use the default speed.
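If we did want to cap the rate, the display link we'll create below exposes this via its preferredFramesPerSecond property; a hypothetical 30 fps cap would look like this:
displayLink.preferredFramesPerSecond = 30 // cap updates at 30 fps (hypothetical)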
20:48 CADisplayLink hasn't been modernized like Timer's API, so we can't use a
closure; instead, we have to specify a target and a selector. For this, we
create an @objc method that will be called with the display link:
class ViewController: UIViewController {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpImageView()

        let displayLink = CADisplayLink(target: self, selector: #selector(update))
    }

    @objc func update(_ displayLink: CADisplayLink) {
    }
}
21:24 And, unlike Timer, we have to manually add the display link to the main
run loop:
displayLink.add(to: .main, forMode: .common)
21:33 Unless we need to pause or remove the display link later, we don't need to
keep a reference to it; adding it to the run loop makes sure it's kept alive.
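For completeness (and purely as a hypothetical sketch, since we don't do this in the episode): if we did need to pause or tear down the loop later, we'd store the display link in a property and use its isPaused and invalidate() APIs:
var displayLink: CADisplayLink?
// ...
displayLink?.isPaused = true // temporarily suspend updates
displayLink?.invalidate()    // remove the display link from the run loop for good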
22:12 The second parameter, the run loop mode, determines under which
circumstances the display link fires; in the default mode, timers are paused
while the user is interacting with the screen. Because we want our updating
mechanism to be as fast and as constant as possible, we pass in the .common mode
to let the run loop update the display link under all circumstances.
22:37 Now we can move the bitmap into our update method:
class ViewController: UIViewController {
    @objc func update(_ displayLink: CADisplayLink) {
        var bitmap = Bitmap(width: 8, height: 8, color: .white)
        bitmap[0, 0] = Color.blue

        imageView.image = UIImage(bitmap: bitmap)
    }
}
22:54 This results in our image being updated at 60 frames per second.
To actually see any of this, we can insert a quick hack to move the blue pixel
one position to the right after every second:
class ViewController: UIViewController {
    @objc func update(_ displayLink: CADisplayLink) {
        var bitmap = Bitmap(width: 8, height: 8, color: .white)
        let x = Int(displayLink.timestamp) % 8
        bitmap[x, 0] = Color.blue

        imageView.image = UIImage(bitmap: bitmap)
    }
}
Architecture
23:54 We're mixing a lot of responsibilities in the view controller,
and it makes sense to move some (or actually most) logic out into separate
types.
24:20 Games ask for a different architecture than generic apps do. We
want to separate our game logic from our view logic. Within the "view" domain,
we can further differentiate between the actual view onscreen and the visual
elements we're drawing within our image. We want to pull the drawing code out of
the view controller, and later, we'll also want to abstract our model that
defines game entities such as the player, the floor, walls, etc.
25:25 As a first step in that direction, we pull the creation of the bitmap out
into a new type called Renderer:
struct Renderer {
    var bitmap = Bitmap(width: 8, height: 8, color: .white)

    mutating func draw(x: Int) {
        bitmap[x, 0] = Color.blue
    }
}
27:08 In our view controller's update method, we create a renderer, pass the x
coordinate in, and pass the renderer's bitmap to the image view:
class ViewController: UIViewController {
    @objc func update(_ displayLink: CADisplayLink) {
        let x = Int(displayLink.timestamp) % 8

        var renderer = Renderer()
        renderer.draw(x: x)

        imageView.image = UIImage(bitmap: renderer.bitmap)
    }
}
27:44 In this episode, we've covered the basics of rendering. Next time, we'll
continue with what we'll actually be rendering: laying out the floorplan and
visualizing the player.