Parser Comparison Testing
A parser reads some text (or other input) and populates a Semantic Model with data structures. In testing a parser, you could pass it some text and make assertions against the resulting Semantic Model, but that would rely heavily on the internal format of the Model, breaking encapsulation. Another option is to populate a second Semantic Model through direct API calls, and compare the two Models for equivalence. This way, the test code is insulated from changes in the Semantic Model's internals, but the parser's behavior is still fully verified.
The Semantic Model should provide a comparison function for this purpose, which should be built as simply as possible. The comparison should focus exclusively on meaningful differences. If whitespace is inconsequential in executing the Semantic Model, it should be omitted from comparison.
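The idea can be sketched as follows. This is a minimal illustration, not Fowler's own code: the `Greeting` model, its `is_equivalent_to` method, and the `parse_greeting` function are all hypothetical names invented for this example. The comparison method ignores whitespace, since whitespace is assumed to be inconsequential when the model is executed.

```python
from dataclasses import dataclass


@dataclass
class Greeting:
    """A tiny Semantic Model for a hypothetical greeting DSL."""
    name: str
    message: str

    def is_equivalent_to(self, other: "Greeting") -> bool:
        # Compare only meaningful differences: collapse runs of
        # whitespace, which don't affect execution of the model.
        def normalize(s: str) -> str:
            return " ".join(s.split())

        return (normalize(self.name) == normalize(other.name)
                and normalize(self.message) == normalize(other.message))


def parse_greeting(text: str) -> Greeting:
    # Trivial parser for input of the form "hello <name>: <message>".
    header, _, message = text.partition(":")
    name = header.replace("hello", "", 1)
    return Greeting(name=name, message=message)


# Comparison test: one model comes from the parser, the other is built
# through direct API calls; the test asserts equivalence instead of
# inspecting either model's internals.
parsed = parse_greeting("hello  Alice :  good   morning")
expected = Greeting(name="Alice", message="good morning")
assert parsed.is_equivalent_to(expected)
```

Because the test only calls the model's constructor and its equivalence method, the Semantic Model's internal representation can change freely without breaking the test.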
From Domain-Specific Languages by Martin Fowler.