All clients have unique development needs and different budgets. In the best-case scenario, with a client who understands the need for thorough testing, our software testing procedures are as follows:
Automated Deployments
By automatically deploying software builds across different stages of system environments (Development, QA, Staging, Production), we perform system testing. This ensures that the software works in each environment and that the deployment has everything it needs to function properly in the final stage, production. Let's break it down stage by stage to give you a better sense of what this all means.
- Development is where the developers do their work and their independent testing
- QA is where a dedicated QA resource will test the build
- Staging is where we do a smoke test and you do your beta and acceptance testing
- Production is where the live product resides
Developer Environment
In the development environment, the developer does their individual unit testing, the team performs code reviews, and automated processes run regression tests.
Unit testing is the process of testing each component, or a group of components, independently to ensure they behave as expected: for a given set of inputs, the component(s) should return the same results. These tests are written as code, so the results can be evaluated automatically at the push of a button, which allows us to do automated regression testing.
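As a concrete illustration, here is a minimal sketch of an automated unit test in Python (the `calculate_order_total` function and its rules are hypothetical, not taken from a real project):

```python
# A hypothetical unit under test: given the same inputs,
# it should always return the same result.
def calculate_order_total(prices, tax_rate):
    """Sum the line-item prices and apply a flat tax rate."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)


# Automated tests (runnable with pytest) that pin down the expected behavior.
def test_total_with_tax():
    assert calculate_order_total([10.00, 5.50], tax_rate=0.08) == 16.74


def test_empty_order_is_zero():
    assert calculate_order_total([], tax_rate=0.08) == 0.0
```

Because tests like these run automatically, they double as regression tests: any later change that breaks the expected behavior fails the suite immediately.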
We strive for 80-90% code coverage, meaning that 80-90% of the code we write is exercised by these automated tests, so we can make sure it continues to behave as expected with each new piece of code we introduce.
Code review means that every line of code delivered to the build is reviewed by a peer on the team. This second set of eyes validates that the code is well written, is clear, and does what the requirements specify. Doing code reviews in this fashion helps find bugs early in the process and produces higher-quality code.
After code review, comments are logged in the source control system and the developer makes the appropriate adjustments. Then we come to continuous integration.
Continuous integration is the process of automatically building and testing the software with every push of code to source control. Doing this ensures that the regression tests created from the automated use cases run frequently, so bugs introduced by new code are identified immediately.
Quality Assurance Environment
Using an Agile Scrum methodology, we strive to deliver working software with each two-week Sprint. This working software can now be tested as a whole (Integration Testing) and tested to ensure that it meets the original requirements (Functional Testing).
A dedicated QA resource (not a software developer) works from the requirements to write a detailed test plan for every feature of a project. As these features are delivered, the QA resource manually executes each plan by following its steps.
It is possible to automate a lot of the functional and integration testing, but some level of manual testing is always required.
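As a rough illustration of what an automated functional and integration test can look like (the QA URL, the `/customers` endpoint, and the expected responses below are hypothetical, not a specific project's API):

```python
import requests

BASE_URL = "https://qa.example.com/api"  # hypothetical QA environment URL


def test_create_and_fetch_customer():
    # Functional check: the feature behaves as the requirements describe.
    created = requests.post(f"{BASE_URL}/customers", json={"name": "Test Customer"})
    assert created.status_code == 201
    customer_id = created.json()["id"]

    # Integration check: the delivered pieces work together end to end.
    fetched = requests.get(f"{BASE_URL}/customers/{customer_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Test Customer"
```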
The QA resource then either passes the delivery or opens tickets detailing each failure for the developers to fix. Once the build passes functional and integration testing, it is moved to staging, where it is prepped for delivery.
Staging Environment
This is a shared environment where we give the product one last look before handing it off to the client for beta and user acceptance testing. Before making any delivery to our clients, the quality assurance individual will do a manual smoke test to ensure major functionality is still working in the new environment that the software has been pushed to.
Early in any project, as an extra precaution, either the COO or the CEO will also perform a smoke test to make sure that the project is ready to be delivered to the client.
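A smoke test simply walks a short checklist of major functionality. For illustration, a minimal automated version of the same idea might look like this (the staging URL and the pages checked are hypothetical):

```python
import requests

STAGING_URL = "https://staging.example.com"  # hypothetical staging address


def smoke_test():
    """Quick pass over major functionality after a push to a new environment."""
    checks = {
        "home page loads": f"{STAGING_URL}/",
        "health endpoint responds": f"{STAGING_URL}/health",
        "login page loads": f"{STAGING_URL}/login",
    }
    for name, url in checks.items():
        response = requests.get(url, timeout=10)
        status = "OK" if response.status_code == 200 else f"FAILED ({response.status_code})"
        print(f"{name}: {status}")


if __name__ == "__main__":
    smoke_test()
```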
Once the build is delivered to staging, it is the client's responsibility to perform beta testing with a subset of their users and, ultimately, user acceptance testing to sign off that the build is a success and should be pushed to production, where it will be used.
As the project matures, deployments should become regular and quick. Automated testing in the prior stages should give us increased confidence that working functionality is not broken with each subsequent deployment.
Other Special Case Testing
In some cases, depending on the project, additional specific testing needs to be done.
- Usability Testing: This is when we make sure that the product being developed is intuitive to the user. If it is not, we can adjust the user interface so that the application is more usable. Early on, usability testing can be done with paper prototypes, clickable mockups, or a shell of the application. As the project progresses, usability comments should be addressed in each revision of the software.
- Vulnerability Testing: All public-facing applications will be tested for vulnerabilities such as SQL injection and other attacks that could give an attacker access to data or the ability to corrupt it. Vulnerability testing can be done through automated test cases and through exploratory testing (see the first sketch after this list).
- Stress & Performance Testing: For applications that can be used by large numbers of people over a short amount of time, we can automate stress and performance testing. This ensures the project can handle larger than expected amounts of usage.