Duality of Testing
Tests implement the same functionality as the code under test, just with a different expectation of delivery. Conventional testing is often very incomplete, accounting for only a small subset of predefined cases. This narrow scope allows the included cases to be implemented far more simply than the code they vouch for.
Unit tests run in a supervised context, fail loudly, and are not required to be feature-complete.
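This trade is visible in a conventional, example-based unit test: a handful of hand-picked cases, each far simpler than the implementation they vouch for. A minimal sketch, where `add` and `assertEqual` are hypothetical stand-ins for the code under test and the test harness:

```javascript
// Stand-in implementation under test (illustrative only).
function add(a, b) {
  return a + b;
}

// A supervised assertion that fails loudly on deviation.
function assertEqual(actual, expected, label) {
  if (actual !== expected) {
    throw new Error(`${label}: expected ${expected}, got ${actual}`);
  }
}

// Each assertion covers exactly one predefined case; the test
// suite as a whole makes no claim to completeness.
assertEqual(add(2, 3), 5, "small positives");
assertEqual(add(-1, 1), 0, "cancellation");
assertEqual(add(0, 0), 0, "zeros");
```

Each case is trivially correct by inspection, which is precisely what lets such tests vouch for more complex code.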
Trading completeness for simplicity is not the only option, however. For more exhaustive testing, a model of the problem space can be built, such that the verification system is able to generate both the setup and the expected conclusion from information that would not be available to the computer during a real-world deployment. The tests can then be generated automatically and executed rapidly.
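One way to sketch this: the generator constructs the conclusion first (a known sorted array), then derives the setup from it (a shuffled copy), information no real deployment would have up front. The `sortAscending` function here is a hypothetical target, used only for illustration:

```javascript
// Hypothetical code under test.
function sortAscending(xs) {
  return [...xs].sort((a, b) => a - b);
}

// Fisher-Yates shuffle: derives a randomized setup from the
// known conclusion.
function shuffled(xs) {
  const out = [...xs];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Generate and execute many cases rapidly; the expected answer
// is known to the generator before the input even exists.
for (let run = 0; run < 1000; run++) {
  const expected = Array.from({ length: 20 }, (_, i) => i * 2);
  const input = shuffled(expected);
  const actual = sortAscending(input);
  if (JSON.stringify(actual) !== JSON.stringify(expected)) {
    throw new Error(`mismatch on input ${input}`);
  }
}
```

A thousand generated cases cover far more of the input space than a handful of hand-written examples, at no extra authoring cost per case.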
Another potential trade-off is performance. A heuristic system can be pitted against the more reliable results of an existing, slower system. This approach has proven especially useful in training machine learning algorithms to estimate the results of long-running computation, as in the case of ray tracing.
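The same differential pattern can be sketched on a small scale, assuming an illustrative lookup-table approximation of sine checked against the trusted (and, in spirit, slower) reference within a stated tolerance:

```javascript
// Trusted reference; stands in for the existing, slower system.
function referenceSin(x) {
  return Math.sin(x);
}

// Heuristic system: nearest-entry lookup into a precomputed table.
const TABLE_SIZE = 4096;
const table = Array.from({ length: TABLE_SIZE }, (_, i) =>
  Math.sin((i / TABLE_SIZE) * 2 * Math.PI)
);

function fastSin(x) {
  const t = (((x / (2 * Math.PI)) % 1) + 1) % 1; // wrap into [0, 1)
  return table[Math.round(t * TABLE_SIZE) % TABLE_SIZE];
}

// Pit the heuristic against the reliable result across many
// sampled inputs; deviations beyond the tolerance fail loudly.
for (let i = 0; i < 10000; i++) {
  const x = (Math.random() - 0.5) * 100;
  if (Math.abs(fastSin(x) - referenceSin(x)) > 1e-2) {
    throw new Error(`heuristic deviates at x=${x}`);
  }
}
```

The tolerance is part of the contract: the heuristic is not expected to match exactly, only to stay within an acceptable error of the slower system.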
Most importantly, the testing should be different enough from the code under test that the same failure is unlikely to manifest in both systems simultaneously. If the biggest concern is the use of an external library, the tests should not use that same library to compute their assertions.
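One way to avoid sharing code with the implementation is to check invariants of the output rather than recompute the answer. A sketch, again using a hypothetical `sortAscending`: the oracle never sorts anything itself, so a bug in the sorting library cannot silently pass through both sides:

```javascript
// Hypothetical code under test, which may rely on a library.
function sortAscending(xs) {
  return [...xs].sort((a, b) => a - b);
}

// Independent oracle: verifies invariants without sorting.
function checkSorted(input, output) {
  // Invariant 1: output is non-decreasing.
  for (let i = 1; i < output.length; i++) {
    if (output[i - 1] > output[i]) return false;
  }
  // Invariant 2: output is a permutation of the input.
  const count = new Map();
  for (const x of input) count.set(x, (count.get(x) || 0) + 1);
  for (const x of output) count.set(x, (count.get(x) || 0) - 1);
  return (
    input.length === output.length &&
    [...count.values()].every((n) => n === 0)
  );
}

const input = [5, 3, 3, -2, 9];
if (!checkSorted(input, sortAscending(input))) {
  throw new Error("sort violated an invariant");
}
```

Because the oracle and the implementation share no logic, a matching failure would have to arise independently in both, which is far less likely.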
There must also be sufficient comparison between test system outcomes and code system outcomes that deviations are not overlooked. Manual assertions by a dedicated, mindful developer are often enough, but more strenuous, automated approaches are usually better. Jest's snapshot testing is an excellent example of a library capable of detecting minute changes that the developer would never have directly asserted on.
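The idea behind snapshot testing can be sketched in a few lines, not Jest's actual implementation, just an illustrative in-memory version: the full serialized output is recorded once and compared wholesale thereafter, so changes no one thought to assert on are still caught:

```javascript
// Stands in for snapshot files on disk.
const snapshots = new Map();

function toMatchSnapshot(name, value) {
  const serialized = JSON.stringify(value, null, 2);
  if (!snapshots.has(name)) {
    snapshots.set(name, serialized); // first run: record the snapshot
    return;
  }
  if (snapshots.get(name) !== serialized) {
    throw new Error(`snapshot "${name}" changed:\n${serialized}`);
  }
}

// Hypothetical function whose output is snapshotted.
function renderGreeting(user) {
  return { tag: "p", text: `Hello, ${user}!` };
}

toMatchSnapshot("greeting", renderGreeting("Ada")); // records
toMatchSnapshot("greeting", renderGreeting("Ada")); // matches

// Any minute change to the output, a new field, different
// punctuation, would now throw on comparison.
```

The developer asserts nothing in particular; the comparison is total, which is exactly what makes unintended deviations hard to overlook.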