After starting work on the Proteus Client (Boston Biomotion) in the summer of 2016, it was clear that I’d be working fast, refactoring constantly, and experimenting with a lot of code. In an effort to bring some order to my work, I decided to go all in on TypeScript. I merged the PR that migrated all my js(x) to ts(x) on October 6 of that year. It was, without a doubt, the best gamble on technology that I can ever remember making. React’s simplicity and its preference for straightforward, clear, safe patterns make it a perfect partner for TypeScript. I’m always finding new ways to get more out of them and cannot begin to imagine working on a large project without the safety net of the compiler. Below, I’ll share a few of my favorite patterns.

These all focus on a particular type of pain point that I run into when maintaining a large project: consistency and safety of objects coming from and going to disparate places. In other words: “How can I be sure that I have the thing that I think I have?” Most engineers in dynamic languages use a combination of tests, duck-type checks, trust, and a hell of a lot of grep and manual debugging. They keep a ton of stuff in their heads and hope that everyone knows how everything works or is willing to trace stuff out when it doesn’t work right. I offer some examples of how we can make our lives easier by leveraging the compiler.

Better reducers, mapStateToProps, and component store access

At the beginning of the year, I wrote this post about TypeScript and Redux. It laid out my pattern for ensuring safety and consistency between the output of each reducer, the output of each mapStateToProps function, and the data accessed from within each component. In the eleven months (how has it been so long!?) since writing this, I’ve stuck with this pattern and truly love it. No change there. Read that if you haven’t already.

Better Redux actions

An omission from the aforementioned writeup was how to handle action creators. This was skipped because, at the time, I didn’t have a healthy pattern for it. You can see the evidence of this in one of my code snippets from that post:

// A reducer
function crucialObject(currentState = { firstKey: 'none', secondKey: 'none' }, action) {
  switch (action.type) {
    case 'FIRST_VALUE': {
      return { firstKey: 'new', secondKey: action.newValue };
    }
    case 'SECOND_VALUE': {
      return Object.assign({}, currentState, { secondKey: action.newValue });
    }
    default:
      return currentState;
  }
}

Note the implicit any of action. Note the trust that action.newValue would just… be there… and be what we expect it to be. Gross. In reality, this did not scale at all. My reducers grew into frightening, messy places where data might or might not be there, where I couldn’t be sure which keys were supposed to be present, and where I couldn’t tell which action was responsible for which key.

There are a few libraries that try to solve this problem. I felt like they overcomplicated what should be a pretty straightforward issue. The pattern I settled on is nearly identical to the one outlined here. I differ in my preference for a slightly more manual approach within my reducers. While that post’s author likes a type that unions all possible actions, like this:

export type ActionTypes =
    | IncrementAction
    | DecrementAction
    | OtherAction;

function counterReducer(s: State, action: ActionTypes) {
  switch (action.type) {
    case Actions.TypeKeys.INC:
      return { counter: s.counter + action.by };
    case Actions.TypeKeys.DEC:
      return { counter: s.counter - action.by };
    default:
      return s;
  }
}

…I name things a bit differently and just tell the compiler what each action is.

export interface Increment {
  type: ActionTypes.INCREMENT;
  by: number;
}

export interface Decrement {
  type: ActionTypes.DECREMENT;
  count: number; // to illustrate that sometimes, your actions might end up with weird, inconsistent keys
}

function counterReducer(s: CounterState, action: { type: string }): CounterState {
  switch (action.type) {
    case ActionTypes.INCREMENT: {
      const typedAction = action as Increment;
      return { counter: s.counter + typedAction.by };
    }
    case ActionTypes.DECREMENT: {
      const typedAction = action as Decrement;
      return { counter: s.counter - typedAction.count };
    }
    default:
      return s;
  }
}

I like doing it this way because I think it makes it easier to quickly see what you’re working with in each case statement. It also makes it easy if you have an action without a payload, since you can be a little lazy and not define an interface for it. It takes a little more discipline but it’s worth it.

Either way, the result is the same: your actions will be consistent when they are created and read.
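To round out the creation side, here is a minimal, self-contained sketch of action creators that return these interfaces. The string enum and the creator names are my own illustration, not code from the Proteus project:

```typescript
// Illustrative action type constants.
enum ActionTypes {
  INCREMENT = 'INCREMENT',
  DECREMENT = 'DECREMENT',
}

interface Increment {
  type: ActionTypes.INCREMENT;
  by: number;
}

interface Decrement {
  type: ActionTypes.DECREMENT;
  count: number;
}

// Each creator's return type is its action interface, so a typo in a
// key or a wrong value type fails to compile at the creation site.
function increment(by: number): Increment {
  return { type: ActionTypes.INCREMENT, by };
}

function decrement(count: number): Decrement {
  return { type: ActionTypes.DECREMENT, count };
}
```

With the creators typed this way, the interfaces act as the single contract between dispatch and reducer: you cannot create an action that the reducer won’t recognize.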

The Empty object and isEmpty

A tricky issue I ran into when first getting into this was dealing with empty objects. Say a user reducer either returns a PersistedUser or nothing. How would I represent that? You can’t return undefined from a reducer, and this:

function user(currentState: PersistedUser | {}, action: { type: string }): PersistedUser | {} {
  ...
}

…is no good because {} matches nearly any value, so PersistedUser | {} behaves much like any, bypassing all type safety.

I settled on a simple pattern that I feel like I picked up in another language, but I can’t remember where. I define a simple interface that I call Empty:

export interface Empty {
  empty: true;
}

An Empty represents an object that is deliberately, explicitly blank. It might be a guest user, or a way to demonstrate that there is not a connection to the robot, or any number of processes that have not yet occurred. I define types like this:

export type UserState = PersistedUser | Empty;

And then export a very simple isEmpty function:

export function isEmpty(config: any) : config is Empty {
  return config !== null && config !== undefined && config.empty !== undefined;
}

By defining types that are either Empty or something else, I’m forced to always prove to the compiler that I’m acting on the right object. There are times where I use Empty when I could leave something undefined, since optional values are easy to cheat with !. I deal with the isEmpty case and move on.
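To make the pattern concrete, here is a minimal sketch of a reducer and a consumer built on Empty. The PersistedUser shape and the LOG_OUT action are assumptions for illustration, not the real Proteus types:

```typescript
interface Empty {
  empty: true;
}

function isEmpty(config: any): config is Empty {
  return config !== null && config !== undefined && config.empty !== undefined;
}

// Illustrative user shape, not the actual project interface.
interface PersistedUser {
  id: number;
  name: string;
}

type UserState = PersistedUser | Empty;

const EMPTY: Empty = { empty: true };

// The reducer never returns undefined; "nothing" is an explicit Empty.
function user(currentState: UserState = EMPTY, action: { type: string }): UserState {
  switch (action.type) {
    case 'LOG_OUT':
      return EMPTY;
    default:
      return currentState;
  }
}

// Consumers must prove which side of the union they hold before using it.
function greeting(state: UserState): string {
  if (isEmpty(state)) {
    return 'Hello, guest';
  }
  return `Hello, ${state.name}`; // narrowed to PersistedUser here
}
```

The compiler refuses to let greeting touch state.name until the isEmpty guard has run, which is the whole point of the pattern.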

A Better isEmpty() with Generics

An issue I ran into last week involved a refactor that allowed some bad code to slip through my isEmpty function. I started with this interface and type:

export interface Device {
  connected: boolean;
  ...some other things
}

export type DeviceState = Device | Empty;

I used it in components like this:

interface StateProps {
  device: DeviceState;
}

class DeviceAwareComponent extends Component<StateProps, object> {
  render() {
    if (isEmpty(this.props.device)) {
      return (something)
    }

    // go on with rendering happy path
  }
}

That was all well and good until I refactored that interface. I was left with this:

export interface ProteusState {
  status: ProteusConnectionStatus;
  device: DeviceState;
  errorMessage?: string;
}

export type DeviceState = Device | Empty;

// and back in the component

interface StateProps {
  proteus: ProteusState;
}

class DeviceAwareComponent extends Component<StateProps, object> {
  render() {
    // here's the problem
    if (isEmpty(this.props.proteus)) {
      return (something)
    }

    // go on with rendering happy path
  }
}

As you might notice, I forgot to change my isEmpty call to look at this.props.proteus.device. As far as my function was concerned, everything was fine. It had no awareness of whether it was possible for this.props.proteus to be Empty, so it let it through, even when the device was in an invalid state. This was a pretty big problem, and I needed a safer way of handling it.

My solution was to enhance the behavior of isEmpty with an optional generic that I can use to identify the expected interface of the object if it is not empty. By doing this, the compiler will do an extra check to ensure that what I think I’m passing is what is actually being passed. The code looks like this:

export function isEmpty<T = any>(config?: T | Empty) : config is Empty {
  return config !== null && config !== undefined && (config as any).empty !== undefined;
}

I can then modify the broken function call above and the compiler will bark at me immediately.

  // This fails to compile! The object passed does not match the generic given to the function.
  if (isEmpty<DeviceState>(this.props.proteus)) {
    return (something)
  }

I’m now using this everywhere that I call isEmpty without a clear sense of what the object should be if it is not Empty. Had this been in place ahead of time, it would have kept my bug from sneaking into my commit!
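For reference, the fix is simply pointing both the generic and the argument at the device. Here is a self-contained sketch, reusing the names from the refactor example above (the renderLabel function and its strings are my own illustration):

```typescript
interface Empty {
  empty: true;
}

function isEmpty<T = any>(config?: T | Empty): config is Empty {
  return config !== null && config !== undefined && (config as any).empty !== undefined;
}

// Illustrative shapes matching the refactor example.
interface Device {
  connected: boolean;
}

type DeviceState = Device | Empty;

interface ProteusState {
  device: DeviceState;
}

function renderLabel(proteus: ProteusState): string {
  // The generic and the argument now agree: both refer to the device,
  // so passing proteus itself here would fail to compile.
  if (isEmpty<Device>(proteus.device)) {
    return 'No device connected';
  }
  return proteus.device.connected ? 'Connected' : 'Disconnected';
}
```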

Bonus: better testing with rosie and generics

This isn’t specific to React, but with a few extra type definitions, TypeScript transforms the ease with which tests can be written in JavaScript when using rosie to create factories.

On the backend, I’m still trying to wean myself off of Ruby on Rails. My API is built with Grape and my responses are Grape Entities, but ActiveRecord remains my greatest addiction.

One of ActiveRecord’s greatest assets is the way it seamlessly maps your database columns to methods, creating getters and setters, and then offers these interfaces to factory_bot. There is immediate, guaranteed consistency throughout the stack, because the strongly typed database acts as a single source of truth for what is and isn’t permissible. Naturally, Ruby being Ruby, it’s not perfect. If you remove a column from your database, it’s on you to grep through your code and remove references to it, but ActiveRecord models are so easy to test via factories that it’s usually easy enough to get things passing.

This consistency is what gets me. My experience with testing in JavaScript always required a lot of diligence. In pre-TypeScript days, if I had an implied interface for a PersistedUser, it was on me to ensure that my factory (if I had one) matched the actual implementation in production. After I started working with TypeScript, it was a little bit better because I could manually build factories using exported interface defs, but it was missing the fluidity of factory_bot and its integration with ActiveRecord.

I started working with Rosie a few months ago. With an interface inspired by factory_bot, it felt awfully familiar, except not: its TypeScript definitions made heavy use of any and its use of a shared global state made it hard to improve. I ended up reworking the definitions to allow the use of generics to let the compiler know what interfaces you’re defining or building. You can see examples here. We’re left with something that feels remarkably like the Ruby version.

In practice, you’d do something like this:

interface PersistedUser {
  id: number;
  createdAt: number;
  updatedAt: number;
  name: string;
  age?: number;
  occupation?: string;
}

Factory.define<PersistedUser>('PersistedUser').attrs({
  id: 0,
  createdAt: () => moment().unix(),
  updatedAt: () => moment().unix(),
  name: () => `${Faker.name.firstName()} ${Faker.name.lastName()}`
}).sequence('id');

// elsewhere...

const user: PersistedUser = Factory.build<PersistedUser>('PersistedUser', { name: 'Chris Grigg' });

In the above example, assuming you have the right dependencies imported, everything will compile correctly. If occupation suddenly becomes a required key in PersistedUser, the factory definition will complain. If the createdAt or updatedAt values went from unix timestamps to ISO 8601 strings, it would complain. If I supply the wrong kind of object to the second argument of build (maybe if I say { name: 666 }), the compiler will reject it. Same goes for adding a key that doesn’t exist.

Tests are worthless if they do not accurately match the code being tested. Without TypeScript and Rosie, we put the burden of maintaining parity on the user or a separate validation framework, which is a real drag. Introducing this change is a holy grail for me and has improved my test coverage dramatically.

Wrapup

So there you have it: some of my favorite non-trivial uses for TypeScript. These patterns let you build and refactor with significantly more speed and confidence than you could in vanilla JavaScript. The next time someone tells you that they don’t need a compiler because it’s easy enough to keep variable types in their head, share some of this with them and see how it compares to their process.