Quality Assurance
What is Quality Assurance?
Quality assurance (QA) is the process of ensuring that the CollectionSpace application works properly. Some QA tests are automated, while others are done manually. The process of manually testing features and functionality is called user acceptance testing. The sections below describe how the CSpace program team and community of implementers carry out QA testing for each release.
How To
Prior to a release, we encourage the entire community to participate in QA testing. It is very easy to help out:
- Go to the task list for the upcoming release. (link in announcement email)
- Enter your name into the spreadsheet next to one or more tests.
- Follow the test plan for that task via the QA server. (link on task list)
- Following a test plan is like following a recipe: just complete each step as specified.
- If you do not already have a JIRA account, sign up for one via http://issues.collectionspace.org.
- If you encounter a bug, search JIRA to see if it has already been reported (a scripted search example follows this list); if not, file a new JIRA issue. Not sure? Email the talk list.
- Make sure you're testing in a supported environment - the most recent version of Chrome, Firefox, Safari, or Edge. IE will not work.
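If you prefer working from the command line, the same duplicate search can be run against JIRA's standard REST API. The sketch below is a minimal, unofficial example using Python's requests library; the credentials, keyword, and exact project name are placeholder assumptions to replace with your own values.

```python
# Minimal sketch: search the CollectionSpace JIRA for existing reports
# through the standard JIRA REST API. Assumes the `requests` library;
# credentials and keyword are placeholders.
import requests

JIRA_BASE = "http://issues.collectionspace.org"

# JQL mirroring the suggested filters: the CollectionSpace (Drydock)
# project and issues that are still Open, In Progress, or Reopened.
# The project name is an assumption - confirm it in JIRA.
jql = (
    'project = "CollectionSpace (Drydock)" '
    'AND status in (Open, "In Progress", Reopened) '
    'AND text ~ "503 error"'  # placeholder keyword
)

resp = requests.get(
    f"{JIRA_BASE}/rest/api/2/search",
    params={"jql": jql, "maxResults": 20},
    auth=("your-username", "your-password"),  # placeholder credentials
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], "-", issue["fields"]["summary"])
```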
Filing JIRA issues (a scripted sketch follows this list):
- Make sure you have a JIRA account.
- Search for the bug to make sure it has not already been reported - try a few different keywords. If you get too many search results, use the filters to limit them to bugs in the CollectionSpace (Drydock) project whose status is Open, In Progress, or Reopened.
- Select the correct project and issue type. The project is CollectionSpace (Drydock).
- Bug: something that does not work according to the test plan or which interferes with your ability to use CollectionSpace
- Improvement: something new or improved you'd like to see CSpace do
- Write a summary that is concise and descriptive.
- Helpful summary: Hitting save on Cataloging record returns 503 error status.
- Unhelpful summary: Hitting save on Cataloging record doesn't work.
- Affects version = the version currently being tested, styled like v4.5, v5.0, etc.
- Environment = the browser and OS you are using, e.g. Chrome on Mac OS X, Firefox on Windows 8
- Description - be sure to include:
- An overview of the issue (this can repeat your summary)
- A step-by-step description of how to reproduce the issue - this is the most important part!
- A description of what you expected to happen
- A description of what actually happened
- Screenshots are helpful but optional
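For reference, the fields above can also be filled in through JIRA's standard issue-creation endpoint. This is a hedged sketch, not an official workflow: the project key (DRYD), version name, and credentials are assumptions you would replace with real values, and the description string shows one way to lay out the overview / steps / expected / actual sections listed above.

```python
# Minimal sketch: file a bug through the standard JIRA REST API.
# The project key "DRYD" is a guess for CollectionSpace (Drydock);
# check the real key in JIRA. Credentials are placeholders.
import requests

JIRA_BASE = "http://issues.collectionspace.org"

# One way to structure the description sections recommended above.
description = (
    "Overview: Hitting save on Cataloging record returns 503 error status.\n\n"
    "Steps to reproduce:\n"
    "1. Log in to the QA server.\n"
    "2. Open any Cataloging record.\n"
    "3. Click Save.\n\n"
    "Expected: the record saves and a confirmation message appears.\n"
    "Actual: a 503 error status is returned."
)

payload = {
    "fields": {
        "project": {"key": "DRYD"},      # assumed key; verify in JIRA
        "issuetype": {"name": "Bug"},    # or "Improvement"
        "summary": "Hitting save on Cataloging record returns 503 error status",
        "versions": [{"name": "v5.0"}],  # Affects version
        "environment": "Chrome on Mac OS X",
        "description": description,
    }
}

resp = requests.post(
    f"{JIRA_BASE}/rest/api/2/issue",
    json=payload,
    auth=("your-username", "your-password"),  # placeholder credentials
)
resp.raise_for_status()
print("Created", resp.json()["key"])
```

Most testers will simply use the web form; the sketch is just a compact restatement of the same fields.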
While you are testing, please keep the following in mind:
- The test plan represents the minimum requirement for acceptance - any experiments that extend the parameters of the test are very welcome.
- Does everything look right visually? If not, file a bug if the problem interferes with your work, or an improvement request otherwise.
- Is there anything that would enhance the user experience? If yes, consider this an improvement request.
- Is there anything that should be tested, but which is not? If yes, file a bug for this and assign it to the program manager.
- Have you noticed an inconsistency / error in the test plan? Send a note to the talk list and it will be confirmed and corrected.