LIGO Document M1000211-v2

Subsystem-Level and System-Level Testing Requirements

Document #:
LIGO-M1000211-v2
Document type:
M - Management or Policy
Other Versions:
LIGO-M1000211-v3 (10 Sep 2014, 12:03)
Abstract:
Subsystem-Level and System-Level Testing Requirements
Keywords:
testing requirements
Notes and Changes:
Draft issued broadly for comment and input before finalizing.
Intended changes to the current draft:

1) Add text to make it clear that all subsystem test plans/procedures and test reports are reviewed & approved by Systems Engineering. The 'mechanics' and procedure for this review & approval need to be described.

2) Section 4a: "...the following exceptions are permissible: those functions which require integration into the system to demonstrate, e.g. offloading from one system to another;..." Add "only if it is too burdensome to simulate the other system."

3) Complete sections 5.3, 5.4 and 5.5 on System Level Testing, System Acceptance Review and Handoff to Operations.

4) Section 4: I will eliminate the distinction between "device/component" and "module/unit". These categories (generally) represent the lowest level of comprehensive testing that is performed. (The exception being photodiode and LED screening that is sometimes performed before manufacture.)

5) Section 3.2, SVN: Change to indicate that each subsystem has its own SVN with subsystem-provided code for generating control laws and filters (not infrastructure or core software); see the illustrative sketch after this list.

6) Section 3.2, SVN: Change Table 2, SVN Organization, to be consistent with the extant SEI SVN.
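As a purely hypothetical illustration of the kind of subsystem-provided code referred to in item 5, the minimal Python sketch below generates filter coefficients for a control law. The sample rate, filter order, and corner frequency are placeholders, and the use of scipy here is an assumption for illustration only, not a statement about actual aLIGO tooling:

    # Hypothetical illustration only: the kind of filter-generating script a
    # subsystem might keep under its SVN. Sample rate, filter order, and
    # corner frequency are placeholder values, not aLIGO parameters.
    from scipy import signal

    fs = 4096.0      # sample rate in Hz (placeholder)
    f_corner = 10.0  # low-pass corner frequency in Hz (placeholder)

    # Second-order-section coefficients for a 4th-order Butterworth low-pass.
    sos = signal.butter(4, f_corner, btype="low", fs=fs, output="sos")
    print(sos)

Keeping such scripts version controlled in the subsystem SVN means the filters actually installed can always be regenerated from a known revision.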

Questions asked, comments made:

1) What happens with parts after long storage times? Do they get re-tested? Yes, as indicated in Section 5.1 and Figure 3.

2) Are there expiration dates? No.

3) What are the long-term test requirements? There is no distinction between long-term and short-term test requirements. The requirement is to demonstrate by test all subsystem requirements that can be demonstrated by test. The subsystem groups have the responsibility and freedom to define what testing gets done at each stage/phase, subject to Systems approval.

4) Any guidance for the writers of test documents on acceptable practices for deriving acceptable ranges? Is it OK if the standards vary wildly from subsystem to subsystem? The acceptable ranges for tested parameters should relate to the values required for those parameters by the design. We are not, for example, looking to impose any additional margin at this point. In some cases it may be difficult to derive an acceptable range, in which case engineering judgment should be used. In other cases, outliers should be eliminated, e.g. values beyond 2 sigma from the mean of a distribution.
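As an illustration of the 2-sigma cut mentioned above, here is a minimal Python sketch (the measurement values are hypothetical, not taken from any subsystem) that flags values more than 2 sigma from the mean:

    # Minimal illustration of a 2-sigma outlier cut; the data are hypothetical.
    import statistics

    measurements = [10.1, 9.8, 10.0, 10.3, 9.9, 12.7, 10.2]  # hypothetical values

    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # sample standard deviation

    # Keep values within 2 sigma of the mean; flag the rest as outliers.
    kept = [x for x in measurements if abs(x - mean) <= 2 * sigma]
    outliers = [x for x in measurements if abs(x - mean) > 2 * sigma]

    print(f"mean = {mean:.2f}, sigma = {sigma:.2f}")
    print("kept:", kept)
    print("outliers:", outliers)

With these numbers the value 12.7 falls outside the 2-sigma band and would be flagged for scrutiny.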

5) Section 2.1.c calls for incrementing the version number on a test procedure -- this is done automatically by the DCC. Only if the writer chooses to keep the same number for the test procedure. The point of this section is that even if the test is substantially re-designed, as long as it has the same test goal (e.g. the acceptance of unit N), it must keep the originally issued DCC number, not receive a new one.

6) Section 2.1.d: If the test is not fully automated, it will be very hard to review it later or to prove to outside committees that it was done correctly. Care must be taken that mission-critical systems have fully traceable tests that are understandable to outsiders. Although we agree with the sentiment expressed here, we are not imposing full automation of all testing, just automation "in whatever measure is productive". Whether automated or not, it is important that each test be documented well enough that we (LIGO Lab and the LSC internally) can repeat our own tests and fully understand them in case questions or discrepancies arise.
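Where a test step is scripted, a small amount of bookkeeping makes a run traceable afterwards. The following minimal Python sketch is purely illustrative (the procedure number, parameter name, and limits are hypothetical placeholders, not from M1000211); it records which version of the test procedure produced a given result, together with the limits applied:

    # Hypothetical sketch of a self-documenting scripted test step.
    # The DCC number, parameter, and limits below are placeholders.
    import datetime
    import json

    PROCEDURE = "LIGO-T0000000-v2"      # placeholder procedure document/version
    LIMITS = {"gain_dB": (38.0, 42.0)}  # placeholder acceptable range

    def run_test(measured_gain_dB):
        """Record the measurement, the limits applied, and the procedure used."""
        lo, hi = LIMITS["gain_dB"]
        return {
            "procedure": PROCEDURE,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "measured": {"gain_dB": measured_gain_dB},
            "limits": {"gain_dB": [lo, hi]},
            "result": "PASS" if lo <= measured_gain_dB <= hi else "FAIL",
        }

    record = run_test(measured_gain_dB=40.3)  # hypothetical measurement
    print(json.dumps(record, indent=2))       # archive alongside the test report

Archiving such a record with the test report lets a later reviewer see exactly which procedure version and limits were in force, even if the procedure has since evolved.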

7) Section 2.2.b calls for keeping test results separate from the procedure. Why? It is kind to humans to have only one document to look up; don't assume that people will have to read hundreds of the same kinds of documents. Is there a good reason for this paragraph in the modern era? One can always extract the information from a decent, well-written, long electronic document into a database. Clarity. Otherwise, each test could be tweaked, modified and evolved with each iteration; it would all be documented, but the only way a reader would know is to re-read the test procedure for every test they review. If the test procedure is stand-alone and version controlled, then for any particular test the reader can see which version(s) of the test procedure was used.

8) Section 2.2.e (which defines review and approval of test reports): Please consider setting up a professional testing/documentation assistance team. This requires more discussion: what is it that the assistance team would do?

9) Section 2.4a: "...the following exceptions are permissible: those functions which require integration into the system to demonstrate, e.g. offloading from one system to another;..." ...unless it is not too burdensome to simulate the other system? It is a wide loophole as written. OK.

10) Section 2.4.c: How do you verify that something was damaged before installation if the damage is invisible to the eye (e.g., contamination)? A credible mechanism for the damage or degradation must be posited first. For example, one might reasonably suspect contamination if the unit was improperly stored or the storage container was compromised. In such a case additional testing would be warranted (such as FTIR testing for contamination).

11) Section 2.4.c: Should we have pre-installation tests for items that are hard or impossible to fix once installed? Yes, that is the point of both Phase 1 and Phase 2 testing.

12) Section 3.1: Do we need an official and fully comprehensive aLIGO glossary published now? Already published -- see M080375: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?docid=2298

13) Section 3.2: Is SVN supported for the next 10 years? Do we have evidence of that? SVN is the most prevalent version control system in use for software today, having recently replaced CVS. It will be around for a very long time.

14) Section 4: For SEI, I am not sure what goes in "module". Is it mostly electronic boards? The distinction between device/component, module/unit and sub-assembly may be semantics only. For SEI, I would classify as follows:

For the ISIs:
  device/component: capacitive position sensor (ADE), GS-13, L-4C, PSI actuator, etc.
  module/unit: circuit board (if installed into a crate backplane); electronics chassis (with potentially multiple circuit boards)
  sub-assembly: pod assembly
  major assembly: HAM-ISI, BSC-ISI

For HEPI:
  device/component: L-4C, inductive position sensor (Kaman), servo-valve, etc.
  module/unit: circuit board (if installed into a crate backplane); electronics chassis (with potentially multiple circuit boards)
  sub-assembly: hydraulic actuator, pump station
  major assembly: HEPI
In fact, I think that in the next revision of M1000211, I will eliminate the distinction between "device/component" and "module/unit". These categories (generally) represent the lowest level of comprehensive testing that is performed. (The exception being photodiode and LED screening that is sometimes performed before manufacture.)

