TSTool Documentation - Quality Control

3. Generate or read time series data. The NewPatternTimeSeries() command is used in the example to create a time series of repeating values. This is a useful technique because it allows full control over the initial data and minimizes the number of files associated with the test. Synthetic data are often appropriate for simple tests. If the test requires more complicated data, then time series can be read from a DateValue or other time series file. For example, if functionality of another software program is being implemented in TSTool, the data file from the original software may be used.

4. Process the time series using the command being tested. In the example, the FillInterpolate() command is being tested. In many cases, a single command can be used in this step. However, in some cases it is necessary to use multiple commands. This is OK as long as each command, or the sequence as a whole, is sufficiently tested with appropriate test cases.

5. Write the results. The resulting time series are written to a standard format. The DateValue format is useful for general testing because it closely represents all time series properties. Note that two write commands are used in the example: one writes the expected results and the other writes the results from the current test. The expected results should only be written when the creator of the test has confirmed that they contain verified values. In the example, the command to write expected results is commented out because the results were previously generated. Some commands do not process time series; in those cases the WriteProperty() and WriteTimeSeriesProperty() commands can be used to write processor properties (e.g., the global output period) and time series properties (e.g., data limits). Additional properties will be enabled as the software is enhanced.

6. Compare the expected results and the current results. The example uses the CompareFiles() command to compare the DateValue files generated for the expected and current results. This command omits comment lines from the comparison because file headers often change due to dynamic comments containing the date/time. If the software is functioning as expected, the data lines in the files will match exactly. The example illustrates that if the files are different, a warning will be generated because of the WarnIfDifferent=True parameter (a sketch of a complete test command file appears after this list). Other options for comparing results include:

a. Use the CompareTimeSeries() command. This command expects to find matching time series and compares data values to a specified precision. For example, read one time series from a DateValue file and then compare it with the current time series in memory. Using this command avoids potential issues with the DateValue or other file formats changing over time (and requiring the expected results to be reverified); however, a file comparison is often easier to troubleshoot because a graphical difference program can visually illustrate the differences that need to be evaluated. A sketch of this approach also follows the list below.

b. If testing a read/write command, compare the results with the original data file. For example, if the test case is to verify that a certain file format is properly read, then there will generally also be a corresponding write command. The test case can then consist of a command to read the file, a command to write the results, and a comparison command to compare the two files. This may not work if the header of the file uses comment lines that are not recognized by the CompareFiles() command.
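The individual steps above can be assembled into a single test command file. The following is a minimal sketch only, not the example file discussed in this manual; file names are illustrative, parameter lists are abbreviated, and exact parameter names should be verified against the command reference for the TSTool version in use:

   # Test the FillInterpolate() command using synthetic daily data (illustrative file names)
   StartLog(LogFile="Results/Test_FillInterpolate_Day.TSTool.log")
   # Step 3: create a repeating-pattern time series; blank pattern values are treated as missing
   NewPatternTimeSeries(Alias="ts1",NewTSID="ts1..Flow.Day",SetStart="2000-01-01",SetEnd="2000-03-31",Units="CFS",PatternValues="5,10,,,75")
   # Step 4: run the command being tested to fill the gaps
   FillInterpolate(TSList=AllMatchingTSID,TSID="ts1",MaxIntervals=0)
   # Step 5: write the current results; the first write is commented out and is only
   # re-enabled to regenerate verified expected results
   # WriteDateValue(OutputFile="ExpectedResults/Test_FillInterpolate_Day_out.dv")
   WriteDateValue(OutputFile="Results/Test_FillInterpolate_Day_out.dv")
   # Step 6: compare expected and current results, warning if the data lines differ
   CompareFiles(InputFile1="ExpectedResults/Test_FillInterpolate_Day_out.dv",InputFile2="Results/Test_FillInterpolate_Day_out.dv",WarnIfDifferent=True)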
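The in-memory comparison described in option a might look like the following sketch. The Precision parameter reflects the "compare data values to a precision" behavior described above, but the parameter names shown here are assumptions and should be checked against the command documentation:

   # Read the verified expected time series so it can be matched against the current result
   ReadDateValue(InputFile="ExpectedResults/Test_FillInterpolate_Day_out.dv",Alias="Expected")
   # Compare matching time series in memory, warning if values differ beyond the stated precision
   CompareTimeSeries(Precision=4,WarnIfDifferent=True)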
If the example command file is opened and run in TSTool, it will produce time series results, the log file, and the output file. If the expected and current results are the same, no errors will be indicated. However, if the files are different, a warning indicator will be shown in the command list area of the main window next to the CompareFiles() command.

General guidelines for defining test cases are as follows. Following these conventions will allow the test cases to be incorporated into the full test suite.

• Define the test case in a folder matching the command name (see the illustrative layout below).
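For example, a test for the FillInterpolate() command might be organized as follows. This layout is hypothetical; only the convention of naming the folder after the command is prescribed here, and the file and subfolder names are illustrative:

   FillInterpolate/                      folder named after the command being tested
      Test_FillInterpolate_Day.TSTool    test command file
      ExpectedResults/                   verified results, written once by the test's creator
      Results/                           current results, regenerated on each run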