Error Monitoring in Test Driven Development

Posted Apr 15, 2015 | 4 min. (661 words)
In the modern development shop, back-end and front-end developers are expected to do a lot more than they ever did before: tooling, unit testing and even functional testing. At the same time, release cycles are shrinking and the number of feature requests is increasing. Is it possible to maintain quality and performance under all these new burdens?
The answer is yes. It is possible, and some teams are already doing it. How?
Great tools allow more to be done with less effort, especially when QA and operations teams are transitioned or reduced and the responsibilities of development start growing quickly. But a tools-first approach can quickly become a problem.
Tools solve nothing if the perception of how they will be adopted is wrong
For example, adopting an analytics tool as a way to track errors is not only a misuse of the technology but comes with serious technical and operational limitations. In this case, there will be long delays before errors can be identified, and the developer who needs to use the analytics tool will have no ownership of it.
Despite this, log analysis tools are often pitched as a way to advance the team, help in unit testing and handle bugs without additional work.
Unit testing and functional testing both take quite a bit of energy. They sit at the front end of development, on the first line of defense against bugs, but developers often don't realize that effort spent here saves time later, and spares them angry team members at their desk.
For front-end developers working on new functionality, unit tests are not only annoying, but they also take up brain-cycles. While the premise around test-driven development is fantastic, it’s also a mental distraction, requiring the creativity to both code the feature and develop the methods of testing.
Detailed tests should be written for each function/class, but they require a great deal of coding and thinking about test cases. If done correctly, this can be a very complex process because developers naturally have a tendency to write tests so they pass—which obviously drives release managers crazy!
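To make the point concrete, here is a minimal sketch of what a per-function test looks like in practice. The `slugify` helper and its tests are hypothetical examples, not code from any particular project; in TDD the tests are written first and drive the implementation.

```python
# Hypothetical helper under test: turn a post title into a URL slug.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# TDD style: each behaviour gets its own small, focused test,
# written before (or alongside) the implementation.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_collapses_extra_whitespace():
    assert slugify("a   b  c") == "a-b-c"

# Run the tests directly; a runner such as pytest would discover
# and execute functions named test_* automatically.
test_lowercases_and_hyphenates()
test_collapses_extra_whitespace()
```

Notice that even this trivial function needs several test cases to be meaningful, which is exactly the effort (and the temptation to write tests that merely pass) described above.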
Test-driven development (TDD) is not new, but treating it as an addition to functional testing is. If developers write unit tests, and then Selenium code once features are complete, they are now juggling three different activities. Not surprisingly, the testing effort may exceed the amount of time spent on development itself.
The key to performance in this scenario is picking the right tools and keeping log analysis in its place. Asking developers to do the testing and the QA team to facilitate and automate it is extremely beneficial for catching bugs and responding to them more quickly. And by reducing the complexity of the test cases, time will be saved in the long run. One way to reduce complexity is to pick tools that have a great interface, notify you instantly on commits or releases, and let you quickly prioritize issues without doing anything special.
Error monitoring—a line of code that never changes and is put in every error handling block—is a great example of this type of tool. The cloud interface associated with error monitoring identifies critical errors and pushes those errors out to users instantly. Common errors, the ones that pop up in every release but have little impact, will be low priority whereas a net new error type would be highlighted.
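The "one line in every error handling block" pattern can be sketched as follows. The `ErrorMonitor` class here is a hypothetical stand-in for a real error-monitoring client, not a specific vendor's API; a real client would ship the exception, stack trace and environment details to a cloud dashboard.

```python
# Hypothetical stand-in for an error-monitoring client; a real client
# would POST exception details to a cloud service asynchronously.
class ErrorMonitor:
    def __init__(self):
        self.reports = []

    def send(self, exc):
        # Record the error type; a real client captures much more context.
        self.reports.append(type(exc).__name__)

monitor = ErrorMonitor()

def parse_quantity(raw):
    try:
        return int(raw)
    except ValueError as exc:
        monitor.send(exc)  # the single, unchanging monitoring line
        return None

parse_quantity("12")       # succeeds, nothing reported
parse_quantity("twelve")   # fails, one report sent
print(monitor.reports)     # → ['ValueError']
```

Because the monitoring call never changes, it costs developers almost nothing to add, while the dashboard on the other end does the prioritization work described above.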
There is no way to avoid the burden that necessary testing early in the delivery pipeline puts on the developers. The issue must be addressed but not by choosing tools on a whim or making an existing tool do the job. Picking purpose-built tools that back up the increasing testing load will reduce the complexity of testing cases and allow developers the freedom to code rather than spend their time contemplating how they will catch and respond to bugs.
Raygun is the answer.
Want to get started with Raygun’s FREE trial and discover a new way to work? Sign up here.