Category: Workflow
This article was written by Bartosz (Bart) Chrabski of SmarterProcess – with some minor contributions from yours truly.
You may remember the old western movie The Good, the Bad and the Ugly, starring Clint Eastwood. Testing can sometimes feel the same way.
In today’s dynamic product development arena, requirements keep changing and evolving, and every project balances three interrelated dimensions – cost, quality and time.
Ideally, we would have sufficient budget and time, and product teams would be able to implement a high-quality product. In practice, projects rarely have enough of either: they run over budget and up against time constraints. As a result, testing efforts are rushed and the quality of the product suffers.
Verification and validation of products – software and hardware – are some of the most critical steps in the development process and typically consume 30% to 35% or more of the total cost and effort in most projects.
Testing is often the least planned part of the development lifecycle. This lack of rigor can lead to the delivery of lower quality products and applications, which have a negative impact on customer satisfaction.
This article outlines some of the most common challenges encountered in testing efforts and presents several recommended practices. These practices will improve the efficiency and accuracy of your testing processes.
Problems resulting from poor testing processes have many different sources, most often poor planning or poor execution of testing tasks.
Below are some of the causes we have observed in real world projects.
• Lack of an independent test team
Most small project teams do not have an independent test team. The people who act as testers are at the same time developers, engineers, or analysts – often focused on other responsibilities, which may lead to incomplete or ineffective testing. Depending upon your industry, regulations may require a separate dedicated testing team.
• Limited understanding of the testing process
In small projects, the project manager often also acts as the test manager and may lack the knowledge and skills to plan and execute test activities – and the responsibilities of the two roles may conflict.
• Poor test planning
Planning the testing process is rarely perfect – testing is often only done to the extent that time is available. Sometimes testing is like an exorcism designed to get rid of evil spirits in a project. Often the person responsible for planning the testing may not be familiar with the testing process and may miss important steps.
• Lack of qualified resources
It can be difficult to find staff with the motivation and skills to do the testing. Since locating the appropriate skills can be challenging, positions are often filled with inexperienced testers, which may lead to incomplete or ineffective testing.
• No test data (no data-driven testing)
In our experience clients often underestimate the importance of good test data. Frequently, test data does not cover all possible conditions occurring in the application. In some cases, no test data is delivered at all. The result is that not all scenarios will be tested, which leads to quality issues.
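The point about test data covering all conditions can be made concrete with a small data-driven test: the inputs and expected results live in a table that deliberately includes boundary and invalid cases, not just the happy path. Below is a minimal Python sketch; the transfer-validation rule, its limit, and the field names are hypothetical, purely for illustration.

```python
# Sketch of data-driven testing. The validation rule being tested
# (transfer amounts against a daily limit) is a hypothetical example.

def is_valid_transfer(amount, daily_limit=10_000):
    """Accept transfers that are positive and within the daily limit."""
    return 0 < amount <= daily_limit

# Test data deliberately covers boundaries and invalid conditions,
# not just the "happy path".
TEST_DATA = [
    (1, True),          # minimum valid amount
    (10_000, True),     # exactly at the limit
    (10_001, False),    # just over the limit
    (0, False),         # zero is not a transfer
    (-50, False),       # negative amounts must be rejected
]

def run_data_driven_tests():
    """Return the list of (amount, expected) cases that failed."""
    return [(amt, exp) for amt, exp in TEST_DATA
            if is_valid_transfer(amt) != exp]

if __name__ == "__main__":
    assert run_data_driven_tests() == [], "data-driven suite failed"
    print("all data-driven cases passed")
```

Adding a new condition then means adding one row to the table rather than writing a new test, which makes gaps in coverage visible at a glance.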
• Lackluster test environment or product configuration
Some practitioners involved in testing underestimate the importance of getting the test environment and its configuration set up correctly. It’s standard practice to treat development, test, and production systems differently – mostly because they have differing security, data, and privacy controls. Testing in production can lead to corrupted or invalid production data, leaked protected data, overloaded systems and more.
If there is a separate dedicated test environment it may not match the proposed production environment. The wider the gap between test and production, the greater the probability that the delivered product will have more defects. It is common for test teams to clone the production data and use it for testing purposes. This approach can be time-consuming, error-prone and may not meet the data protection policies.
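One way to narrow the gap between production-like test data and data protection policies is to mask sensitive fields before production records enter the test environment. The Python sketch below shows the idea; the field names and the hash-based pseudonymization are illustrative assumptions, and a real implementation would need review against the applicable data protection rules (plain hashing, for example, may need salting to resist re-identification).

```python
# Hedged sketch: masking personally identifiable fields before
# production records are loaded into a test environment. Field names
# and masking rules are illustrative assumptions, not a compliance recipe.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "ssn"}

def mask_record(record):
    """Replace sensitive values with a stable one-way pseudonym so that
    joins across tables still work but the original value is not exposed."""
    masked = dict(record)
    for field in SENSITIVE_FIELDS & record.keys():
        digest = hashlib.sha256(str(record[field]).encode()).hexdigest()[:10]
        masked[field] = f"{field}_{digest}"
    return masked

# Example: non-sensitive fields pass through untouched.
production_row = {"id": 42, "name": "Jane Doe",
                  "email": "jane@example.com", "balance": 100.0}
test_row = mask_record(production_row)
```

Because the pseudonym is deterministic, the same customer masks to the same value in every table, so referential integrity in the test data survives the masking step.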
• Poor release management
Many projects do not have a well-documented release management process for testing purposes. This lack of rigor may lead to inconsistencies. Often, we have seen situations where a patch designed to correct a problem injects new ones, which may lead the system to fail.
• Inadequate defect management
In small organizations defects are sometimes not tracked centrally or are manually tracked using spreadsheets or email. This approach leads to inaccuracies or failures to correct defects. Manual operations are also burdened with a considerable amount of work to maintain the process.
• No central repository of test cases
Many legacy products/systems have been in use for years, yet a test case repository is often unavailable or unmaintained. Where one is maintained, it usually covers only the latest requested changes rather than the complete functionality of the product/system. New team members will struggle to learn the full functionality and to perform tests without reference to past test cases, which can lead to incomplete and error-prone testing.
• Incomplete regression tests
When test case repositories do exist they are often outdated or incomplete. As functionality changes, test cases should also be maintained and updated to match those changes. Regression tests that are limited to new capability result in poor test coverage and subsequently lead to new defects being injected into a partially tested product.
• Limited testing automation
Often, repetitive testing is performed manually. Automation can aid in building the test environment as well as in functional regression testing, load testing, coverage testing and release management.
We suggest you evaluate the automation available in-house and from third parties, and assess the value of automation against its cost and effort. Is this a product/system that will be around for a long time, so the automation investment will keep paying off? Or is this a short-term fix with a limited life?
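As a small illustration of the kind of repetitive manual check that is cheap to automate, the sketch below compares a function’s output against a “golden” table of results captured from a known-good run. The function under test and the golden data are hypothetical placeholders.

```python
# Minimal sketch of automating a repetitive regression check with a
# "golden" (expected-output) table. The function under test and its
# expected results are hypothetical placeholders.

def normalize_account_id(raw):
    """Toy function under test: strip whitespace and upper-case."""
    return raw.strip().upper()

# Golden data: inputs paired with outputs captured from a known-good run.
GOLDEN = {
    " acc-001 ": "ACC-001",
    "acc-002": "ACC-002",
    "ACC-003\n": "ACC-003",
}

def regression_check():
    """Return a list of (input, expected, actual) mismatches."""
    return [(raw, exp, normalize_account_id(raw))
            for raw, exp in GOLDEN.items()
            if normalize_account_id(raw) != exp]

if __name__ == "__main__":
    mismatches = regression_check()
    assert not mismatches, f"regression failures: {mismatches}"
    print("regression suite passed")
```

A check like this runs in seconds on every build, which is exactly the long-lived, frequently repeated scenario where automation effort continues to pay off.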
• Lack of training
Members of the project teams may not be familiar with the tools that are available to them or may not be trained on how to use them. Consequently, although the organization has tools, they go unused. Lack of tool usage and knowledge can limit the ability to track errors, supervise the entire process, or define measures and metrics to manage system development effectively.
• Lack of knowledge of available methodologies
Many project teams lack documented and understood testing methodologies and processes. Failing to follow existing methodologies, or lacking a documented approach altogether, leads to an inefficient testing process. The adopted methodologies and methods do not always have to be written down as official documents, but they must be understood and applied in practice.
• Measures and metrics
Often, organizations collect data on ongoing projects but do not analyze the testing processes to seek improvements. For example, if it is uncovered that poorly documented or understood requirements are creating testing errors, then the requirements elicitation and documentation process must be improved. This postmortem analysis provides an opportunity for iterative improvement of the testing phase.
We recommend testing retrospectives to see what the team has learned and what can be done to improve the overall development process.
Best Practices
The following are widely used market practices. They are recommendations, not mandates – not every practice applies to every test team or every organization. Based on our practical experience, we suggest:
- Have an independent test team, whenever possible.
- Plan the participation of the test team at an early stage of product development. It’s important to have the test team participate in the analysis stage and aid in assessing the functional and non-functional requirements in terms of their validation capabilities and the associated testing workload.
- Define a strategy for testing the software with the customer.
- Provide for collaboration among engineers, architects, designers, project managers, developers and testers in planning activities related to the testing process.
- Prepare test data together with the construction of test cases. Version the data and create baselines.
- When test environments use production data, pay special attention to masking or modifying it in order to ensure compliance with data protection legislation.
- Create separate environments for testing and development. The test team should keep the test environment as compatible as possible with production.
- Focus on the preparation and verification of the test environment before the start of each test phase.
- Use tools to support configuration management, error tracking and requirements management to facilitate the work and increase its efficiency.
- Build and maintain a test case repository that can be accessed by the project team.
- Maintain traceability between tested functionality (requirements) and test cases, in a matrix or by other means. This shows which functionality is being retested, limiting the time and scope of testing, and reduces the number of regression tests by tracking the relationship between requirements and test cases.
- Maintain a repository for unit tests; use component simulation tools to support team sharing.
- Use measures and metrics in the project to analyze the results. The data collected should help to improve the software development process and the efficiency of the team.
In our experience, a good testing process is one of the most important activities to ensure the delivery of value to both the development team and the client.
Regardless of the type of project, testing should be given special attention. Testing must be well planned and executed in a repeatable, documented manner by qualified, trained people. Without this rigor it will be difficult to call tests effective.
The IBM Solution
IBM test management solutions can help you avoid common software development traps.
Lack of planning, lack of metrics, poor collaboration with stakeholders, ineffective test management, and lack of test automation all lead to problems. When we don’t measure how we’re doing and don’t continually make improvements, risk escalates and the project can get out of control.
IBM software test management solutions incorporate many best practices that help you avoid these common traps and enjoy the benefits. IBM Test Management is a collaborative, web-based, quality management solution that offers comprehensive test planning and test asset management from requirements to defects. It enables teams to seamlessly share information and use automation to speed project schedules, and provides metrics for informed release decisions. It is available both on-premises and as a SaaS solution.
Key capabilities include:
- Communications support – Support communication among teams that are geographically dispersed using features such as event feeds, integrated chat, review, approval and automated traceability.
- Automation tools integration – IBM Test Manager integrates with many test automation tools including 3rd party tools, homegrown scripts and more. Execute tests with all kinds of tools and collect test results—all from a central location.
- Advanced reporting capabilities – Address the needs and concerns of quality managers, business analysts and release management using advanced reporting capabilities – making it easier to assess readiness for delivery.
- Comprehensive test plans – Provides test plans that clearly describe project quality goals and exit criteria, while tracking responsibilities and prioritized items for verification and validation.
- Risk-based testing – Provides risk-based testing for prioritizing the features and functions to be tested based on importance and likelihood or impact of failure, supporting risk management best practices.
- Requirement tools integration – Test Manager works with the IBM Requirements DOORS Next Generation tool. You can link test cases and mark them as suspect whenever requirements are modified.
In a prior LinkedIn post titled “Regulated Agile – Hmm, what’s that?” I discussed the “need for speed” in traditional financial services institutions (FSIs) as they strive to provide differentiated products. The competitive market tilts in favor of FinTechs, since they are not encumbered with legacy infrastructure and regulation. In this competitive fight the FSIs have adopted various agile practices – lean, XP, scrum, iterative, etc. – yet get stuck with a hybrid process often referred to as Wagile or WaterfallScrum. Development teams have embraced and benefited from agile practices the most. Accordingly, agility has frequently been defined as the ability to rapidly deliver acceptable code within incremental time frames. The frequently used acronym MVP (“Minimum Viable Product”) reflects the notion of getting something to market quickly, and in turn connotes agility and efficiency.
With annual expenditures to debug software failures at 620 million developer hours and $61 billion, per a Cambridge University study, one must wonder if we are racing past the facts. How effective are we, if our efforts require such massive rework? Where is the trade-off between efficiency and effectiveness? How do we balance the “need for speed” with the needs beyond the development team, i.e. customer centricity, compliance and audit?
Need implies requirements, including the needs of the many who are afterthoughts in the software development and delivery process. Software and system requirements are multi-dimensional in that “Requirements Management” spans elicitation, review, analysis, validation, enhancement, and versioning.
Furthermore, “Requirements Management” implies a hierarchical dimension, since a project requirement is most often a component of a composite application that is delivered as a product or program. Just as composite applications reflect a collection of code, they equally reflect a collection of requirements. Here enters the concept of systems thinking. Today’s financial services IT systems are so complex that they require multiple teams to develop and manage them.
As lean, agile developers we are trained to value “working software over comprehensive documentation” (one of the four values in the Agile Manifesto) and per lean principles we must identify and remove waste, i.e. manual practices. Automated testing and code promotion have effectively eliminated tremendous amounts of manual effort in the Software Development Life Cycle (SDLC) yet other manual activities such as documentation system remain.
It befuddles me that significant roles, e.g. audit and enterprise architecture, are 1) often overlooked and undervalued and 2) seen as antithetical to agile implementations; after all, their deliverables memorialize systems and provide the basis for the “learning organization,“ a concept of lean software development. Regardless of the expressed need for, and adherence to, an institutionalized practice of documenting systems, key stakeholders (compliance/audit) require documentation, especially the “functional lineage,” or traceability. Traceability implies the linkage and dynamic association of requirements to code bases, to tests, and to projects, classified and coordinated across programs for fulfillment. The vertical alignment, if you will, is the association of system components to the required and desired product according to functional and non-functional requirements. Can we pass the Sarbanes-Oxley 404 litmus test: was this piece of code used at this point in time to produce this deliverable (the report the CEO will sign for the attestation of earnings)? How can we do that in an agile stream?
A robust requirements management tool, such as DOORS Next, organizes requirements – regardless of whether they are elicited through traditional or agile practices – into collections, but more importantly fosters traceability and re-use. These two factors serve as a very powerful way to balance the need for speed with the information needs of all stakeholders.
The financial services sector relies heavily on documents and documentation. Documents such as 17 CFR Part 23, NY DFS Rule 500, or Proposed Rule Rel. No. 33-10515 (Small Entity Compliance Guide) dictate what we must do to fulfill industry mandates as we serve our customers. As evidence, we must produce documentation showing that we have implemented the mandate. How great would it be if we could decompose these monolithic documents into components and dynamically reconstruct the evidence back into documents within our agile process? What if we reconstituted requirements – or, better yet, adhered to the concept of the “requirement reconstituted“?
The attached graphic shows how DOORS Next, a robust requirements management tool, can ingest machine-readable text into a collection of requirements to be worked.
The outline structure of these documents facilitates the transformation into collections (paragraphs) and requirements (i.e. sentences). Each requirement then carries a series of attributes: state, traceability, link relationships and, most importantly, ownership. State is critical in determining the completed work and the open tasks remaining to reach a state of compliance. Work can be prioritized, scheduled and managed through Kanban boards. All information is displayed in a configurable dashboard, and tags support dynamic search. Most compelling is the ability to produce reports at any point in time and version them.
Moreover, a robust requirements management tool like DOORS Next can visually represent the association and fulfillment of a specific requirement or a collection of requirements. The graphic shows the functional traceability and specifically how regulation has been decomposed and fulfilled. The green checkmarks show fulfillment; if any component changes, the icon updates dynamically. Of course, there are also tabular views and reports.
Effective adoption requires a mindset shift: organizations must move in unison, adopting both automation technology and the discipline that comes with training. No one player gets to the Super Bowl; rather, disciplined teams do. Winning teams embody star players who know their role and responsibility to others, unified in a common goal. If you would like to explore this topic further, or learn how to manifest these concepts within your organization, let’s connect on LinkedIn.