The FOSSology team believes that quality must be “built-in” through good engineering practices, a good definition of done, and a stable/master philosophy. Quality should not be “bolted-on” later through a lot of testing.
To provide a product that not only meets its requirements but is also maintainable and supportable over time, the team pursues solutions that are engineered, not just programmed. The QA infrastructure provides a framework for engineered solutions to be built, for quality to be tested and validated at every step, and for the product to be released as frequently as needed.
For more information on test automation, see FOSSology Automation.
Requirements for new or improved FOSSology features come from various stakeholders, such as internal organisation groups (primarily the OSRB) and external users. The requirements are captured and tracked in the issue tracker of the GitHub project, which allows everyone to review and comment on the FOSSology issues as they are documented and implemented.
Testing occurs not only to make sure the mainstream use cases function appropriately but also to make sure the corner cases (like invalid data) are handled appropriately. It ensures expected errors are trapped and the system produces expected behavior. (See Testing Basics for more information.)
The FOSSology team uses several types of tests to achieve its software quality goals. The continuous integration (CI) environment exercises these tests, except manual system tests, on a regular basis.
Unit tests ensure that a code unit meets its design and behaves as intended. Unit tests:

* Focus on testing the code unit (the atomic piece of code) in isolation, not its interactions with other code, with as few dependencies as possible
* Should test all possible inputs (valid and invalid), functions, and error cases of the code unit
* Do not depend on external systems being installed or set up
* Are standalone and can be run within a sandbox
* Are fast and easy to run through “push-button” automation
* Should all pass (old and new) in a developer's sandbox before code is checked in
* Are the basis of measuring code coverage
Functional tests ensure that a set of code units work together correctly. Functional tests:

* Focus on testing the interaction of several units together, not just one unit; typically they test specific features or code paths
* Involve no user interaction; the tests are automated, repeatable, and/or scripted
* Are more involved and usually take longer than unit tests to run
* Run both in and out of the sandbox; external systems such as database connections and the repo may be required to be installed and set up
* From the command line, test all CLI options as well as invalid data (see the sketch below)
* Are wrapped by and interact with the test harness
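As an illustration of the command-line case, a functional check can exercise both a valid and an invalid invocation of an agent. This is only a sketch: the agent name and options are hypothetical, not actual FOSSology CLI syntax.

```
#!/bin/sh
# Hypothetical CLI functional check; agent name and flags are illustrative.
someagent --version > /dev/null || { echo "FAIL: valid option rejected"; exit 1; }
# An unknown option must be rejected with a non-zero exit status.
someagent --no-such-option 2> /dev/null && { echo "FAIL: invalid option accepted"; exit 1; }
echo "PASS: CLI option checks"
```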
Packaging and installation tests ensure the evolving code can always be packaged and installed, for both source and package installs, on all supported platforms. These tests:

* Test the packaging and installation processes for both package- and source-installs
* Include installation and testing across all supported platforms using VMs
* Depend on external systems being set up
* Test the installed version of FOSSology outside a sandbox using the functional tests
System tests ensure that the system behaves correctly from the “user level”; they are the final method of testing before release. System tests:

* Prepare for “release” by trying to catch defects not found in the areas above
* Focus on manually exercising the system, via documented instructions, in the ways that users typically do
* Are the basis of ensuring that user features behave as expected
* Focus at the user level on (1) installation, (2) the UI, and (3) analysis
* Are run within a “deployment” environment with complete system dependencies installed
Performance/load tests ensure adequate FOSSology performance. Areas of interest:

* Stress tests attempt to load the system up and ensure it can perform adequately
* DB performance testing identifies areas for improvement through code/query/schema re-factoring
* Apache HTTP server performance (see the sketch below for one way to probe this)
* UI usability and response time
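For example, a simple web-server load probe could use ApacheBench. This is a hedged sketch: the URL, request count, and concurrency level are illustrative placeholders, and it assumes a running FOSSology instance served by Apache.

```
# Illustrative smoke test of web UI throughput/response time with ApacheBench.
# URL and numbers are placeholders, not project settings.
ab -n 100 -c 5 http://localhost/repo/
```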
| Type | Description | Used in Developer Sandbox? | Incorporated into CI? | Frequency |
|---|---|---|---|---|
| Key Areas of Focus | | | | |
| Unit tests | Ensure that a code unit meets its design and behaves as intended. | Yes | Yes | Every commit |
| Functional tests | Ensure that a set of code units work together correctly. | Yes | Yes | Multiple times per day, not every commit |
| Packaging and Installation tests | Ensure the code can always be packaged and installed for both source and package installs on all supported platforms. | No | Yes | Daily |
| System tests | Ensure the system behaves correctly from the user level; final method of testing before release. | No | No | Every iteration or release |
| Important Areas of Future Focus | | | | |
| Environmental/Configuration tests | Ensure configuration/environment is set up correctly. | No | Yes | Daily |
| Performance/Load tests | Ensure adequate FOSSology performance. | No | TBD | Regularly |
| Cluster tests | Ensure FOSSology can be successfully installed and configured as a cluster; validate the scheduler/agent communications in a cluster. | No | TBD | TBD |
| Migration tests | Ensure FOSSology can be successfully migrated from one version to another. | No | TBD | TBD |
The FOSSology source code is organized so that each agent has all the needed components for its operation in its source tree. This includes the unit and functional tests for that agent and associated Makefiles to drive the necessary test creation and execution. See Module Structure for details.
Each module has a high-level Makefile and lower-level Makefiles to drive testing, whether unit or functional, with associated _test_ targets. If both unit and functional testing is desired, the high-level Makefile is invoked from the module's root directory, which in turn calls the lower-level Makefiles to run the tests. If only one type of testing is desired, the lower-level Makefile is invoked directly with its _test_ target, and only that test suite is executed. In this way, a developer has complete control over what gets run in their sandbox, and the continuous integration server has the necessary granularity to run the desired tests whenever needed (see the sketch below).
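To make the mechanics concrete, here is a hedged sketch of the two invocation styles. The module and subdirectory names are illustrative and not guaranteed to match the actual tree layout; see Module Structure for the real paths.

```
# Run both unit and functional tests via the module's high-level Makefile
# (module path is illustrative)
cd src/mymodule
make test

# Run only one kind of test by invoking a lower-level Makefile directly
cd src/mymodule/tests/unit    # directory name is illustrative
make test
```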
TBD:

* A basic database that the tests can query with known-good data
* A common test repo that the tests can use
The test database/repo should not have any dependencies on any system value or configuration outside of the sandbox. In other words, all information needed to make the tests work should either be checked in (so that it is present in the sandbox that gets created) or be created by the high-level test target as part of initialization. There should not be any dependency on any other system file to run within the sandbox. A hedged sketch of such a self-contained setup follows.
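This sketch shows one way a high-level test target could seed a throwaway PostgreSQL database from data checked into the sandbox. The database name and seed file path are hypothetical, and it assumes a local PostgreSQL instance is available to the sandbox.

```
# Hypothetical sandbox-local test database setup.
# All seed data lives in the sandbox; nothing outside it is consulted.
createdb fossology_test                          # database name is illustrative
psql -d fossology_test -f tests/known_good.sql   # seed file path is illustrative
```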
All tests produce output in JUnit format for test reporting; if one of the existing test frameworks is not used, then whatever is used must produce an XML report in JUnit format. The definition of the JUnit report format can be found on the fossology wiki at: http://fossology.org/test:junit_format.
It is strongly suggested that one of the existing test frameworks be used to create tests. If an existing test framework is not chosen, a suggested method to produce JUnit-format reports is to wrap all test cases in PHPUnit. By running PHPUnit with the --log-junit flag, the results will be in JUnit format:
```
--log-junit <file>
```
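For example, the following invocation runs a PHPUnit-wrapped suite and writes the report; the report and test-suite file names are illustrative.

```
# Run a PHPUnit-wrapped test suite and write a JUnit-format XML report
phpunit --log-junit results.xml MyAgentTests.php
```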
All tests should be documented using doxygen. Test documentation is needed for a number of reasons:

* Tests are code, and code should be documented. For example, what does the test test?
* Test documentation can serve as the test plan, so either a small test plan is needed or none at all, depending on how well the test documentation is done.
* Tests can have an API; the API should be documented so other tests can use it.
The documentation for the tests can be found at Test Documentation (http://fotest.fc.hp.com/~markd/testDocs/Docs/index.html). These documents are updated frequently as part of the continuous integration (see the sketch below for how the generation step can be run).
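Regenerating the documentation is a standard doxygen run; a minimal sketch, assuming a Doxyfile is maintained in the source tree (the actual CI configuration may differ):

```
# Regenerate the documentation from the doxygen comments in the tests
doxygen Doxyfile    # "Doxyfile" is the doxygen default config name
```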
Many of the tests are executed by the CI. In addition, a publicly accessible server should be set up for the community project; this server can be used for carrying out system tests.