
Tips for Testing


Tips for how to test the changes you are making to Nuclide.

Running Tests

The Basics

To run tests, navigate to the directory that contains the spec folder with your tests, then run apm test or npm test. Note that either command only runs the tests in that spec folder.

If you're patient, you can run all tests for all packages by running ./scripts/dev/test from the Tools/Nuclide directory.

Most of the time, you likely won't want to run all the tests in your spec folder, or even all the tests within your test file. You can prefix the specific tests or test groups you want with an f (it => fit, describe => fdescribe), and only those will run. In case you're curious, the "f" stands for "focus".
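For example, a minimal sketch (the group and spec names are made up); while these are in place, only the focused specs and groups run:

  // Focus a single spec.
  fit('computes the right value', () => {
    expect(1 + 1).toBe(2);
  });

  // Focus a whole group of specs.
  fdescribe('my-feature', () => {
    it('runs because its group is focused', () => {
      expect('nuclide').toContain('clide');
    });
  });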

The linter should catch it, but be careful not to commit your fits or fdescribes!

IMPORTANT: Don't use " #" (a space followed by a hash) in your test descriptions. It disables that test.

apm or npm?

Generally speaking, if your package can be treated as a Node package, you should use npm test to test it. Otherwise, if it relies on Atom's API and can't be required by other packages, it should be tested with apm.

We rely on Jasmine to run npm test, and we want to write tests in ES6 just as we do with apm test, so we need to configure the package's package.json in the following way:

  {
    ...
    "devDependencies": {
      ...
      "nuclide-jasmine": "0.0.0"
      ...
    },
    "scripts": {
      "test": "node --harmony node_modules/.bin/jasmine-node-transpiled spec"
    }
    ...
  }

Debugging

Sometimes, console.log() is all you need. But sometimes a single breakpoint inside your test is worth a thousand console.log() statements.

If you run tests via apm test/npm test on the command line, your console.log() statements will print to that console. To stop on breakpoints, though, use the spec window within Atom. You'll want to:

  1. Open Atom from the directory that contains your spec folder. (If you don't do this, you will see zero tests in the spec window.)
  2. Press cmd-option-ctrl-p to open the Atom spec window.
  3. Now, in the Atom spec window, press cmd-option-i to open the debugger console. Set breakpoints as desired. (You can also use debugger statements in code, as in the sketch after this list.)
  4. Reload/rerun the tests in the Atom spec window by pressing cmd-option-ctrl-l.
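For example, a hypothetical spec that pauses in the debugger console opened in step 3 (parseConfig is a made-up function under test):

  it('builds the expected config', () => {
    const config = parseConfig('{"enabled": true}'); // hypothetical function under test
    debugger; // execution pauses here while the debugger console is open
    expect(config.enabled).toBe(true);
  });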

Debugging UI

If you are testing functionality that involves the Atom DOM, you may want to see the DOM being changed as the tests run. By default, the Atom spec window will not show the UI that is created, because it slows down the tests. You can make the UI visible by adding your element to the #jasmine-content div. #jasmine-content gets cleared out between specs.

  document.querySelector("#jasmine-content").appendChild(myElement);

Writing Tests

Atom uses Jasmine 1.3 as its test framework, so the Jasmine docs are a good place to start. Note that Atom has added some of its own built-in functions, such as waitsForPromise(). You can find other tweaks, such as custom matchers, defined in spec-helper.coffee. Where possible, we have ported tweaks such as waitsForPromise() to our Node test runner, as well, so you should rarely have to think about whether npm or apm will be used to run your test.

Nuclide Custom Matchers

The package nuclide-test-helpers defines custom Jasmine matchers, which can be added to tests using the following snippet:

beforeEach(function() {
  this.addMatchers(require('nuclide-test-helpers').matchers);
});

After that, you can use expect(...).diffJson(...); to do a structural diff on a JSON object, as well as expect(...).diffLines(...); to do a line-by-line diff of two blobs of text. Both of these custom matchers will print colored output to the terminal. These matchers can be used in cases where the default printout from Jasmine is inconvenient to read.
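For example, a short sketch of diffJson, assuming the matchers have been added in a beforeEach as above (buildConfig is a made-up function under test):

  it('produces the expected config object', () => {
    const actual = buildConfig(); // hypothetical function under test
    const expected = {name: 'nuclide', enabled: true};
    // On failure, this prints a colored structural diff instead of Jasmine's default output.
    expect(actual).diffJson(expected);
  });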

Writing Async Tests

The Jasmine way of writing async tests using a network of waitsFor() and runs() calls is garbage. Fortunately, we can combine async/await and waitsForPromise() to write async tests in a linear way. Recall that an async function always returns a Promise. Therefore, when all of your async code is Promise-based, you can write your test linearly by wrapping the entire thing in an async function that is passed to waitsForPromise():

  describe('getDiagnostics()', () => {
    it('gets the file errors', () => {
      waitsForPromise(async () => {
        // A bunch of async stuff can be done at the start
        // of the function using await.
        var filePath = path.join(__dirname, 'fixtures', 'HackExample1.php');
        var fileContents = await readFile(filePath, 'utf8');
        var errors = await hackLanguage.getDiagnostics(filePath, fileContents);

        // Once the final async result is returned, assertions can be made.
        expect(errors.length).toBe(1);
        expect(errors[0].path).toBe(filePath);
        expect(errors[0].linter).toBe('hack');
        expect(errors[0].level).toBe('error');
        expect(errors[0].range.start).toEqual({ row : 14, column : 11 });
        expect(errors[0].line).toBe(14);
        expect(errors[0].col).toBe(11);
        expect(errors[0].message).toMatch(/\(Hack\) await.*async/);
      });
    });
  });

waitsForPromise takes an optional first argument {shouldReject: boolean, timeout: number}, which defaults to {shouldReject: false, timeout: 0}. Setting shouldReject to true enables testing promise rejection.
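For example, a minimal sketch of testing a rejected promise (readFile is assumed, as above, to return a Promise that rejects for a missing file):

  it('rejects when the file does not exist', () => {
    waitsForPromise({shouldReject: true}, async () => {
      await readFile('/no/such/file', 'utf8');
    });
  });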

Writing Tests for timing code (setTimeout, setInterval)

Since we are using apm test to run our tests, Atom provides helpers in https://github.com/atom/atom/blob/master/spec/spec-helper.coffee for testing time-based code that uses setTimeout and setInterval. This lets us call window.advanceClock(10); to advance the mocked clock by 10 milliseconds. The following test code advances the clock to trigger timeouts.

  it('task timeout warning, then the result', () => {
    var warnHandler = logger.warn = jasmine.createSpy();
    // the worker will return the task result after 15 milliseconds
    workerReplyIn(15);
    var taskHandler = jasmine.createSpy();
    hackWorker.runWorkerTask('reply').then(taskHandler);
    window.advanceClock(11);
    // wait 1 more ms for the promise to get resolved (after advancing the clock)
    waits(1);
    runs(() => {
      expect(taskHandler.callCount).toBe(0);
      expect(warnHandler.callCount).toBe(1);
      expect(warnHandler.argsForCall[0][0]).toBe('Webworker is stuck in a job!');
      window.advanceClock(5); // advance the clock for the next thing to test
    });
    // wait 1 more ms for the promise to get resolved (after advancing the clock)
    waits(1);
    runs(() => {
      expect(taskHandler.callCount).toBe(1);
      expect(warnHandler.callCount).toBe(1);
      expect(taskHandler.argsForCall[0][0]).toBe('reply');
    });
  });

Writing Tests for code that incidentally uses timers (setTimeout, setInterval)

pkg/nuclide/jasmine/lib/faketimer.js replaces setTimeout/setInterval/clearTimeout/clearInterval with mocks that allow manual control of time. FakeTimer is installed by default before all of our tests, which makes it possible to test time-related code without any real wall-clock time passing. The mock clock is advanced with the advanceClock function.

For example:

  it('Example timeout test', () => {
    const callback = jasmine.createSpy('callback');

    setTimeout(callback, 100);

    expect(callback).not.toHaveBeenCalled();
    advanceClock(99);

    // still not called
    expect(callback).not.toHaveBeenCalled();

    advanceClock(1);
    expect(callback).toHaveBeenCalled();
  });

Note that the above test never actually spends any time idling waiting for a setTimeout.

If you absolutely must use real setTimeout/setInterval in your test rather than the mock, then you can call useRealClock(). This should be discouraged, as the mock clock can almost always be used, and will result in faster tests.
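A minimal sketch of opting out, assuming useRealClock is available globally in the spec environment the same way advanceClock is:

  it('waits for a real, short delay', () => {
    useRealClock(); // opt this spec out of the fake timer
    waitsForPromise(() => new Promise(resolve => setTimeout(resolve, 10)));
  });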

Tips for Spies

The Jasmine docs (linked below) include details for ordinary uses of spies, but here are a few cases that they don't cover. Take a look at the nuclide-test-helpers package for some useful functions.

Mocking module exports

If module Foo exports the function bar, and module Baz is under test and depends on Foo.bar:

spyOn(require('Foo'), 'bar');
uncachedRequire(require, 'Baz'); // defined in 'nuclide-test-helpers'

uncachedRequire will force a reload of the Baz module, which will pick up the mocked version of Foo.bar. Note that this can have interesting side effects, particularly if the module maintains state.

After you're done, run:

unspy(require('Foo'), 'bar'); // specific to atom, not in Jasmine
clearRequireCache(require, 'Baz'); // defined in 'nuclide-test-helpers'

This should get rid of the spy, but it's not perfect: a module required by Baz could also require Foo and pick up the spy, and we don't invalidate that module here.

Be careful when you are doing this.
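Putting it together, a hedged sketch of a suite that mocks Foo.bar for its duration (Foo and Baz are the placeholder module names from above, Baz.doSomething is made up, and both helpers are assumed to be exported from nuclide-test-helpers' top level):

  const {uncachedRequire, clearRequireCache} = require('nuclide-test-helpers');

  describe('Baz with a mocked Foo.bar', () => {
    let Baz;

    beforeEach(() => {
      spyOn(require('Foo'), 'bar').andReturn('mocked value');
      Baz = uncachedRequire(require, 'Baz'); // reload Baz so it picks up the spy
    });

    afterEach(() => {
      unspy(require('Foo'), 'bar');
      clearRequireCache(require, 'Baz');
    });

    it('sees the mocked Foo.bar', () => {
      expect(Baz.doSomething()).toBe('mocked value'); // hypothetical API on Baz
    });
  });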

Mocking functions provided by getters

Jasmine cannot spy on functions provided by getters. To do so, use the spyOnGetterValue function (defined in nuclide-test-helpers) in place of spyOn.

To make this work, it deletes the getter and replaces it with the value itself. So, if the getter is not idempotent, this will cause problems; once again, use with care.
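A minimal sketch, assuming spyOnGetterValue is exported from nuclide-test-helpers' top level (the module and method names are made up):

  const {spyOnGetterValue} = require('nuclide-test-helpers');

  it('spies on a function defined behind a getter', () => {
    const obj = require('SomeLazyModule'); // hypothetical module with getter-backed exports
    spyOnGetterValue(obj, 'computeThings');
    obj.computeThings();
    expect(obj.computeThings).toHaveBeenCalled();
  });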

Getting Flow to like your tests

Flow cannot understand beforeEach, so if you declare variables inside a describe, and initialize them in beforeEach, Flow will complain that they may be undefined wherever they are used. To work around this, you can use invariant everywhere, but that gets messy quickly. You can also type them as any, but then you lose nearly all of the benefits that Flow can provide. I (nmote) believe that the following approach will get you the best of both worlds:

Declare the variable like this:

let foo: SomeType = (null: any);

This casts null to any, circumventing the type system, and allowing you to give the variable the type that it will actually have in the specs at runtime.
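Put together, a short sketch of the pattern (SomeType and createSomeType are placeholders):

  describe('something that needs per-spec setup', () => {
    // Cast null to any so Flow accepts the type the variable will have at runtime.
    let fixture: SomeType = (null: any);

    beforeEach(() => {
      fixture = createSomeType(); // hypothetical factory
    });

    it('is initialized before each spec', () => {
      expect(fixture).not.toBeNull();
    });
  });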

References

Jasmine 1.3 documentation: https://jasmine.github.io/1.3/introduction.html