or read from other model input files.

# Read time series needed to perform the AnalyzeNetworkPointFlow() tests.
# Use data from HydroBase to provide realistic input.
# First read the network table
ReadTableFromExcel(TableID="Network1",InputFile="Network1.xlsx",ExcelColumnNames=FirstRowInRange)
# Get the list of streamflow gages and associated time series
Free()
CopyTable(TableID="Network1",NewTableID="StreamflowStationList",IncludeColumns="NodeID",
  ColumnMap="NodeID:StreamGageID",ColumnFilters="NodeType:StreamGage")
ReadTimeSeriesList(TableID="StreamflowStationList",LocationColumn="StreamGageID",DataSource="DWR,USGS",
  DataType="Streamflow",Interval="Day",DataStore="HydroBase",IfNotFound=Warn)
WriteDateValue(OutputFile="Network1-StreamGage-Streamflow.dv",MissingValue=NaN,TSList=AllMatchingTSID,
  TSID="*.*.Streamflow.Day.*")
# Get the list of diversion stations and associated time series
Free()
CopyTable(TableID="Network1",NewTableID="DiversionStationList",IncludeColumns="NodeID",
  ColumnMap="NodeID:DiversionID",ColumnFilters="NodeType:Diversion")
ReadTimeSeriesList(TableID="DiversionStationList",LocationColumn="DiversionID",DataSource="DWR",
  DataType="DivTotal",Interval="Day",DataStore="HydroBase",IfNotFound=Warn)
WriteDateValue(OutputFile="Network1-Diversion-DivTotal.dv",MissingValue=NaN,TSList=AllMatchingTSID,
  TSID="*.*.DivTotal.Day.*")
# Get the list of diversion return stations and associated time series
Free()
CopyTable(TableID="Network1",NewTableID="DiversionReturnStationList",IncludeColumns="NodeID",
  ColumnMap="NodeID:DiversionID",ColumnFilters="NodeType:Return")
ReadTimeSeriesList(TableID="DiversionReturnStationList",LocationColumn="DiversionID",DataSource="DWR",
  DataType="DivTotal",Interval="Day",DataStore="HydroBase",IfNotFound=Warn)
WriteDateValue(OutputFile="Network1-Return-DivTotal.dv",MissingValue=NaN,TSList=AllMatchingTSID,TSID="*.*.DivTotal.Day.*")

The second command file performs the point flow analysis. This example is from a TSTool test and fills missing data with a simple approach in order to ensure that no missing values are included in the analysis. A single command file that combines the two command file examples also could be used.

# Test analyzing a simple network for point flows
StartLog(LogFile="Results/Test_AnalyzeNetworkPointFlow.TSTool.log")
# Read the network
ReadTableFromExcel(TableID="Network1",InputFile="Data\Network1.xlsx",Worksheet="Network1",
  ExcelColumnNames=FirstRowInRange)
# Read the time series associated with network nodes (pregenerated)
# Fill diversion time series with zeros so there is something to analyze
# Fill stream gage time series with repeat forward and backward
SetInputPeriod(InputStart="1950-01-01",InputEnd="2013-12-31")
ReadDateValue(InputFile="Data\Network1-Diversion-DivTotal.dv")
ReadDateValue(InputFile="Data\Network1-Return-DivTotal.dv")
FillConstant(TSList=AllMatchingTSID,TSID="*.*.DivTotal.*.*",ConstantValue=0)
ReadDateValue(InputFile="Data\Network1-StreamGage-Streamflow.dv")
FillRepeat(TSList=AllMatchingTSID,TSID="*.*.Streamflow.*.*",FillDirection=Backward)
FillRepeat(TSList=AllMatchingTSID,TSID="*.*.Streamflow.*.*",FillDirection=Forward)
CheckTimeSeries(CheckCriteria="Missing")
# Analyze the network point flow.
AnalyzeNetworkPointFlow(TableID="Network1",NodeIDColumn="NodeID",NodeNameColumn="NodeName",
  NodeTypeColumn="NodeType",NodeDistanceColumn="NodeDist",NodeWeightColumn="NodeWeight",
  DownstreamNodeIDColumn="DownstreamNodeID",NodeAddTypes="Return",NodeAddDataTypes="DivTotal",
  NodeSubtractTypes="Diversion",NodeSubtractDataTypes="DivTotal",NodeOutflowTypes="StreamGage",
  NodeOutflowDataTypes="Streamflow",NodeFlowThroughTypes="InstreamFlow",Interval=Day,
  AnalysisStart="1950-01-01",AnalysisEnd="2012-12-31",Units="CFS",GainMethod="Distance",
  OutputTableID="Results")
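The node accounting performed by the command can be pictured with a small sketch. This is not TSTool's implementation; it is a simplified single-reach illustration of how the node type parameters drive the balance at one time step: "Return" nodes add, "Diversion" nodes subtract, and "StreamGage" nodes set the flow to the observed value. All names and numbers below are hypothetical.

```python
def point_flow(nodes, values):
    """nodes: list of (node_id, node_type) ordered upstream to downstream.
    values: dict of node_id -> data value for one time step.
    Returns dict of node_id -> computed outflow at that node."""
    outflow = {}
    flow = 0.0
    for node_id, node_type in nodes:
        if node_type == "StreamGage":
            flow = values[node_id]      # observed outflow resets the balance
        elif node_type == "Return":
            flow += values[node_id]     # returns add water to the reach
        elif node_type == "Diversion":
            flow -= values[node_id]     # diversions remove water
        # other node types (e.g., InstreamFlow) pass the flow through
        outflow[node_id] = flow
    return outflow

network = [("G1", "StreamGage"), ("D1", "Diversion"),
           ("R1", "Return"), ("G2", "StreamGage")]
step = {"G1": 100.0, "D1": 30.0, "R1": 10.0, "G2": 85.0}
result = point_flow(network, step)
```

The difference between the computed flow arriving at G2 (80) and the observed value there (85) is the reach gain that GainMethod distributes among nodes.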
Command Reference: AppendFile()
Append 1+ files to another file
Version 10.12.00, 2012-10-12

The AppendFile() command appends one or more files to another file. All lines, or only matching lines, from the input files can be transferred. This command is useful for appending multiple data files into a single file that can be read by TSTool. The following dialog is used to edit the command and illustrates the syntax for the command.

AppendFile() Command Editor

The command syntax is as follows:

AppendFile(Parameter=Value,…)

Command Parameters

InputFile
    The name of one or more files to append, using the following conventions:
    • No * in name – match one file.
    • Filename of * – match all files in the input directory (working directory by default).
    • Filename of *.ext – match all files with the extension.
    More options may be supported in the future when TSTool is updated to use Java 1.7+.
    Default: None – must be specified.

OutputFile
    The output file that will be appended to. The file is created if it does not exist. Use the RemoveFile() command to remove the old file.
    Default: None – must be specified.

IncludeText
    A regular expression pattern used to include text. This uses the Java regular expression syntax (see http://en.wikipedia.org/wiki/Regular_expression).
    Default: Transfer all lines.

IfNotFound
    Indicate the action if the file is not found, one of:
    • Ignore – ignore the missing file (do not warn).
    • Warn – generate a warning (use this if the file truly is expected and a missing file is a cause for concern).
    • Fail – generate a failure (use this if the file truly is expected and a missing file is a cause for concern).
    Default: Warn

The following table lists regular expression examples:

Regular Expression    Description
.*\Q-\E.*             Match lines that start with any character, end with any character, and contain a dash. The \Q and \E sequences start and end a quoted (literal) section, used because the dash can have special meaning in a regular expression.
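A minimal sketch of the behavior described above, in Python rather than TSTool internals: append lines from one or more input files to an output file, optionally keeping only lines that match a pattern. Note that Java's `matches()` requires the whole line to match (which is why the example pattern is bracketed with `.*`), so `fullmatch()` is used here to mirror that; also, `\Q…\E` quoting is Java-specific, so a plain dash is used in the Python pattern.

```python
import re

def append_file(input_files, output_file, include_text=None):
    """Append all lines (or only lines fully matching include_text)
    from each input file to output_file."""
    pattern = re.compile(include_text) if include_text else None
    with open(output_file, "a") as out:
        for name in input_files:
            with open(name) as f:
                for line in f:
                    # transfer the line if there is no filter, or if the
                    # whole line (without the newline) matches the pattern
                    if pattern is None or pattern.fullmatch(line.rstrip("\n")):
                        out.write(line)
```

Because the output is opened in append mode, running the sketch repeatedly keeps accumulating lines, matching the command's use for building one combined data file.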
Command Reference: AppendTable()
Append one table to another table
Version 10.21.00, 2013-06-28

The AppendTable() command appends rows from one table to another table. For appended rows:

• values in columns that are not matched are set to null in the receiving table
• values in columns where the data types do not match are set to null in the receiving table

The following dialog is used to edit the command and illustrates the syntax of the command.

AppendTable() Command Editor

The command syntax is as follows:

AppendTable(Parameter=Value,…)

Command Parameters

TableID
    The identifier for the original table, to which records will be appended.
    Default: None – must be specified.

AppendTableID
    The identifier for the table from which to append.
    Default: None – must be specified.

IncludeColumns
    The names of columns to append from AppendTableID, separated by commas. See also ColumnMap to indicate how to map column names in the append table to the first table (necessary if the column names don't match).
    Default: Append all of the columns from AppendTableID that match columns in TableID.

ColumnMap
    The map of the append table columns to the first table's columns, necessary when the column names are not the same: AppendColumn1:OriginalColumn1,AppendColumn2:OriginalColumn2
    Default: If no map is provided, the append table column names in IncludeColumns must have the same name in the first table.

ColumnFilters
    Filters that limit the number of rows being appended from the append table, using the syntax: FilterColumn1:FilterPattern1,FilterColumn2:FilterPattern2. Patterns can use * to indicate wildcards for matches. Only string values can be checked (other data types are converted to strings for comparison). Comparisons are case-independent. All patterns must be matched in order to append the row. In the future a command may be added to perform queries on tables, similar to SQL for databases.
    Default: No filtering.

The following figures show the input tables and results (modified first table) corresponding to the parameters shown in the editor dialog figure above. Note that the column names for "Table2" have a "2".

Table Corresponding to TableID in Command Editor

Table Corresponding to AppendTableID in Command Editor

Table Corresponding to Results from Parameters in Command Editor
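The documented rules (unmatched columns become null, a column map renames append-table columns, and all filters must match) can be sketched with plain Python lists of dicts standing in for TSTool's table objects. This is an illustration of the rules, not TSTool's code.

```python
from fnmatch import fnmatchcase

def append_table(table, table_cols, append_rows, column_map=None,
                 column_filters=None):
    """Append rows (dicts) to 'table', which has columns 'table_cols'."""
    column_map = column_map or {}
    column_filters = column_filters or {}
    for row in append_rows:
        # All filter patterns must match; comparisons are case-independent,
        # values are compared as strings, and * acts as a wildcard.
        if not all(fnmatchcase(str(row.get(col, "")).lower(), pat.lower())
                   for col, pat in column_filters.items()):
            continue
        # Rename append-table columns per the column map, then keep only
        # columns present in the receiving table; unmatched columns -> None.
        renamed = {column_map.get(k, k): v for k, v in row.items()}
        table.append({col: renamed.get(col) for col in table_cols})
    return table

t1 = [{"id": "A", "value": 1}]
t2 = [{"id2": "B", "value2": 2}, {"id2": "C", "value2": 3}]
append_table(t1, ["id", "value"], t2,
             column_map={"id2": "id", "value2": "value"},
             column_filters={"id2": "b*"})
```

After the call, t1 contains its original row plus the one row of t2 that passed the filter, with columns renamed to match the receiving table.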
Command Reference: ARMA()
Lag and attenuate a time series using AutoRegressive Moving Average
Version 10.13.00, 2012-10-25

The ARMA() command lags and attenuates a time series (e.g., to route a streamflow time series downstream). This approach preserves the "mass" of the data. The general equation for ARMA is:

    Ot = a1*O(t-1) + a2*O(t-2) + ... + ap*O(t-p) + b0*It + b1*I(t-1) + ... + bq*I(t-q)

Where:

    t = time step
    Ot = output value at time t
    It = input value at time t
    a, b = ARMA coefficients

and the p and q values indicate the degree of the equation: ARMA(p,q). The ARMA coefficients are determined by analyzing historical data and may be developed using a data interval that is different from the data interval of the time series being manipulated. The coefficients are typically computed by an external analysis program (TSTool does not perform this function). The time series to process can have any interval.

The a and b coefficients are listed in the dialog from left-most to right-most in the equation. Note that there are p a-coefficients and (q + 1) b-coefficients (because there is a b-coefficient at time t0). The interval used to compute the ARMA coefficients can be different from the data interval, but the data and ARMA intervals must be divisible by a common interval. The ARMA algorithm is executed as follows:

1. The data and ARMA intervals are checked and, if they are not the same, the data are expanded by duplicating each value into a temporary array. For example, if the data interval is 6Hour and the ARMA interval is 2Hour, each data value is expanded to three data values (2Hour values). If the data interval is 6Hour and the ARMA interval is 10Hour, each data value is also expanded to three data values (2Hour values, the largest common interval).
2. The ARMA equation is applied at each point in the expanded data array. However, because the ARMA coefficients were developed using a specific interval, only the data values at the ARMA interval are used in the equation. For example, if the expanded data array has 2Hour data and the ARMA interval is 10Hour, then every fifth value will be used (e.g., t corresponds to the "current" value and t – 1 corresponds to the fifth value before the current value). Because the ARMA algorithm depends on a number of previous terms in both the input and output, there will be missing terms at the beginning of the data array and in cases where missing data periods are encountered. Ideally ARMA will be applied to filled data and only the initial conditions will be an issue. In this case the output period should ideally be less than the total period so that the initial part of the routed time series can be ignored. In cases where O values are missing, the algorithm first tries to use the I values. If any values needed for the result are missing, the result is set to missing.
3. The final results are converted to a data interval that matches the original input, if necessary. If the original data interval and the ARMA interval are the same, no conversion is necessary. For example, if the original data interval is 6Hour and the ARMA interval is 10Hour, then the expanded data interval will be 2Hour. Consequently, three sequential expanded values are averaged to obtain the final 6Hour time series.
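Step 2 above can be sketched as follows, under the simplifying assumption that the data interval equals the ARMA interval (so the expansion in step 1 and the averaging in step 3 are not needed). Where a previous output is unavailable, the sketch falls back to the input value at the same time step, matching the documented behavior; anything still missing makes the result missing.

```python
import math

def arma_route(inflow, a, b):
    """Route 'inflow' with ARMA(p,q) coefficients: a has p terms, b has
    q+1 terms (b[0] applies at time t). Missing values are NaN."""
    out = [math.nan] * len(inflow)
    for t in range(len(inflow)):
        total = 0.0
        ok = True
        for i, ai in enumerate(a):          # previous outputs O[t-1]..O[t-p]
            k = t - 1 - i
            if k >= 0 and not math.isnan(out[k]):
                prev = out[k]
            elif k >= 0:
                prev = inflow[k]            # fall back to the input value
            else:
                prev = math.nan             # before the start of the period
            if math.isnan(prev):
                ok = False
                break
            total += ai * prev
        if ok:
            for j, bj in enumerate(b):      # inputs I[t], I[t-1], .. I[t-q]
                k = t - j
                if k < 0 or math.isnan(inflow[k]):
                    ok = False
                    break
                total += bj * inflow[k]
        out[t] = total if ok else math.nan
    return out
```

With a=[] and b=[1.0] the routing is an identity, and with a=[0.0] and b=[0.0, 1.0] it is a pure one-step lag (with the first value missing, as the algorithm description notes for the start of the period).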
The following dialog is used to edit the command and illustrates the command syntax.

ARMA() Command Editor

The command syntax is as follows:

ARMA(Parameter=Value,…)

Command Parameters

TSList
    Indicates the list of time series to be processed, one of:
    • AllMatchingTSID – all time series that match the TSID (single TSID or TSID with wildcards) will be modified.
    • AllTS – all time series before the command.
    • EnsembleID – all time series in the ensemble will be modified.
    • FirstMatchingTSID – the first time series that matches the TSID (single TSID or TSID with wildcards) will be modified.
    • LastMatchingTSID – the last time series that matches the TSID (single TSID or TSID with wildcards) will be modified.
    • SelectedTS – the time series are those selected with the SelectTimeSeries() command.
    Default: AllTS

TSID
    The time series identifier or alias for the time series to be modified, using the * wildcard character to match multiple time series.
    Default: Required if TSList=*TSID.

EnsembleID
    The ensemble to be modified, if processing an ensemble.
    Default: Required if TSList=EnsembleID.

ARMAInterval
    The ARMA interval to use in the analysis.
    Default: None – must be specified.

a
    The a coefficients.
    Default: Optional.

b
    The b coefficients.
    Default: None – must be specified.

A sample command file to process streamflow data from the USGS is as follows:

SetOutputPeriod(OutputStart="1936-01-01",OutputEnd="1936-03-31")
ReadUsgsNwisRdb(InputFile="Data/G03596000.rdb",Alias=Original)
Copy(TSID="Original",NewTSID="03596000.USGS.Streamflow.Day.Routed",Alias=Routed)
ARMA(TSList=AllMatchingTSID,TSID="Routed",ARMAInterval=2Hour,
  a="0.7325,-0.3613,0.1345,0.5221,-0.2500,0.1381,-0.2643,0.0558",
  b="0.0263,0.0116,-0.0146,-0.0081,0.0127,0.0798,0.0727,0.0523,0.0599")

The following figure shows the original and routed time series.

Example Graph Showing Original and ARMA-Routed Time Series

The Cumulate() command can be used to verify mass balance of the original and routed time series (see the Cumulate() command discussion below). For example, insert a Cumulate() command near the end of a command file. The following figure shows the time series from the previous graph, this time as cumulative time series.

Example Graph Showing Original and ARMA-Routed Time Series as Cumulative Values
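The mass-balance check that the cumulative graph performs visually amounts to comparing running totals. A tiny sketch with hypothetical numbers (a one-step lag standing in for routing):

```python
from itertools import accumulate

original = [10.0, 20.0, 30.0, 40.0]
routed = [0.0, 10.0, 20.0, 30.0]   # hypothetical routed series (one-step lag)
cum_original = list(accumulate(original))
cum_routed = list(accumulate(routed))
# If mass is preserved, the cumulative curves converge; the final
# difference is the water still "in transit" at the end of the period.
in_transit = cum_original[-1] - cum_routed[-1]
```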
Command Reference: Blend()
Append a Time Series to the End of Another Time Series
Version 08.15.00, 2008-05-01

The Blend() command blends one time series into another, extending the first time series period if necessary. This is typically used for combining time series for a station that has been renamed or to blend historic and real-time data. The second (independent) time series will ALWAYS override the first time series. See also the SetFromTS() and Add() commands. The Blend() command ensures that single data values are used, whereas Add() will add values if more than one value is available at the same date/time. The SetFromTS() command does not extend the period.

The following dialog is used to edit the command and illustrates the syntax of the command.

Blend() Command Editor

The command syntax is as follows:

Blend(Parameter=Value,…)

Command Parameters

TSID
    The time series identifier or alias for the time series to be modified.
    Default: None – must be specified.

IndependentTSID
    The time series identifier or alias for the time series to be blended into the first time series.
    Default: None – must be specified.

BlendMethod
    The method used to blend the data, one of:
    • BlendAtEnd – the main time series has the other time series attached to the end of its period.
    Default: None – must be specified. Currently only BlendAtEnd is recognized.

A sample command file to blend two time series from the State of Colorado's HydroBase database is as follows:

# 08236000 - ALAMOSA RIVER ABOVE TERRACE RESERVOIR
08236000.DWR.Streamflow.Month~HydroBase
# 08236500 - ALAMOSA RIVER BELOW TERRACE RESERVOIR
08236500.DWR.Streamflow.Month~HydroBase
Blend(TSID="08236000.DWR.Streamflow.Month",
  IndependentTSID="08236500.DWR.Streamflow.Month",BlendMethod="BlendAtEnd")
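The override-and-extend semantics described above can be sketched with dicts keyed by time step (integers here for brevity; TSTool works with date/times and regular intervals, and this is an illustration, not its implementation):

```python
def blend_at_end(ts, independent):
    """Blend 'independent' into 'ts': independent values always override
    where both series have data, and the period is extended to cover
    the independent series."""
    merged = dict(ts)
    merged.update(independent)   # independent values win on overlap
    return dict(sorted(merged.items()))

first = {1: 10.0, 2: 12.0, 3: 14.0}
independent = {3: 99.0, 4: 16.0}
result = blend_at_end(first, independent)
```

At the overlapping time step 3 the independent value wins, and the result extends through time step 4, which only the independent series covered.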
Command Reference: CalculateTimeSeriesStatistic()
Calculate time series statistic
Version 10.18.00, 2013-02-21

The CalculateTimeSeriesStatistic() command calculates a statistic for a time series (typically a single value, but possibly multiple output values) and optionally adds the result to a table. Multiple time series can be processed. The sample from each time series consists of data values for the full period, or a shorter analysis period if specified for the command. Missing values typically are ignored unless significant for the statistic (e.g., Statistic=MissingCount).

The following dialog is used to edit the command and illustrates the command syntax. Most statistics do not require additional input; however, those that do utilize the Value* parameters to specify additional information.

CalculateTimeSeriesStatistic() Command Editor

The command syntax is as follows:

CalculateTimeSeriesStatistic(Parameter=Value,…)

Command Parameters

TSList
    Indicates the list of time series to be processed, one of:
    • AllMatchingTSID – all time series that match the TSID (single TSID or TSID with wildcards).
    • AllTS – all time series before the command.
    • EnsembleID – all time series in the ensemble.
    • FirstMatchingTSID – the first time series that matches the TSID (single TSID or TSID with wildcards).
    • LastMatchingTSID – the last time series that matches the TSID (single TSID or TSID with wildcards).
    • SelectedTS – the time series selected with the SelectTimeSeries() command.
    Default: AllTS

TSID
    The time series identifier or alias for the time series to be processed, using the * wildcard character to match multiple time series.
    Default: Required if TSList=*TSID.

EnsembleID
    The ensemble to be processed, if processing an ensemble.
    Default: Required if TSList=EnsembleID.

Statistic
    Statistic to compute, as shown in the Statistic Details table below.
    Default: None – must be specified.

Value1
    Input data required by the statistic. Currently the dialog does not check the value for correctness – it is checked when the statistic is computed. See the Statistic Details table below.

Value2
    Input data required by the statistic. Currently the dialog does not check the value for correctness – it is checked when the statistic is computed. See the Statistic Details table below.

Value3
    Input data required by the statistic. Currently the dialog does not check the value for correctness – it is checked when the statistic is computed. See the Statistic Details table below.

AnalysisStart
    The date/time to start analyzing data.
    Default: Full period is analyzed.

AnalysisEnd
    The date/time to end analyzing data.
    Default: Full period is analyzed.

AnalysisWindowStart
    The calendar date/time for the analysis start within each year. Specify using the format MM, MM-DD, MM-DD hh, or MM-DD hh:mm, consistent with the time series interval precision. A year of 2000 will be used internally to parse the date/time. Use this parameter to limit data processing within the year, for example to analyze only a season. The analysis window has only been enabled for the Count, GECount, GTCount, LECount, LTCount, Max, Min, MissingCount, MissingPercent, NonmissingCount, and NonmissingPercent statistics.
    Default: Analyze the full year.

AnalysisWindowEnd
    Specify the date/time for the analysis end within each year. See AnalysisWindowStart for details.
    Default: Analyze the full year.

TableID
    Identifier for the table that receives the statistic. An existing table can be specified; if not found, a new table will be created.
    Default: Optional – table output is not required.

TableTSIDColumn
    Table column name that is used to look up the time series. If a matching TSID is not found, a row will be added to the table. If a TSID is found, the statistic cell value for the time series is modified.
    Default: Optional – table output is not required.

TableTSIDFormat
    The specification to format the time series identifier to insert into the TSID column. Use the format choices and other characters to define a unique identifier.
    Default: Time series alias if available, or the time series identifier.

TableStatisticColumn
    Table column name to receive the statistic value. If not found in the table, a new column is added automatically.
    Default: Optional – table output is not required.

The following table provides additional information about specific statistics, in particular to describe how each statistic is computed and whether additional input needs to be provided with the Value command parameters.

Statistic Details

Count – Number of data values total, including missing and non-missing.
DeficitMax – Maximum deficit value (where deficit is mean minus value).
DeficitMean – Mean deficit value (where deficit is mean minus value).
DeficitMin – Minimum deficit value (where deficit is mean minus value).
DeficitSeqLengthMax – Maximum number of sequential intervals where each value is less than the mean (for example, maximum drought length).
DeficitSeqLengthMean – Mean number of sequential intervals where each value is less than the mean (for example, mean drought length).
DeficitSeqLengthMin – Minimum number of sequential intervals where each value is less than the mean (for example, minimum drought length).
DeficitSeqMax – Maximum sum of sequential values where each value is less than the mean (for example, maximum drought water volume).
DeficitSeqMean – Mean of the sums of sequential values where each value is less than the mean (for example, mean drought water volume).
DeficitSeqMin – Minimum sum of sequential values where each value is less than the mean (for example, minimum drought water volume).
GECount – Count of values greater than or equal to Value1. Requires: Value1 – criteria to check.
GTCount – Count of values greater than Value1. Requires: Value1 – criteria to check.
Lag-1AutoCorrelation – Autocorrelation between values and those that follow in the next time step, given by:

    rk = [ Σ(i=1 to N-k) (Yi - Ymean)(Yi+k - Ymean) ] / [ Σ(i=1 to N) (Yi - Ymean)^2 ]

Last – Last non-missing value.
LECount – Count of values less than or equal to Value1. Requires: Value1 – criteria to check.
LTCount – Count of values less than Value1. Requires: Value1 – criteria to check.
Max – Maximum value.
Mean – Mean value.
Min – Minimum value.
MissingCount – Number of missing values.
MissingPercent – Percent of values that are missing.
MissingSeqLengthMax – Maximum number of sequential values that are missing.
NonmissingCount – Number of non-missing values.
NonmissingPercent – Percent of values that are not missing.
NqYY – This statistic is typically used to evaluate the return period of low flows and is implemented only for daily data. The N indicates the number of daily values to be averaged and YY indicates the return interval. For example, 7q10 indicates the flow corresponding to the 10-year recurrence interval for minimum average daily flow (for 7 days) in a year. This statistic is computed as follows, using 7q10 as an example:
    1. Determine the number of years to be analyzed (from the analysis period command parameters or time series data).
    2. For each year, loop through each day from January 1 to December 31. Compute an average flow by averaging 7 days, in this case with 3 values on each side of the current day and including the current day. If at the end of the year, use 3 values from adjoining years. The number of missing data allowed is controlled by the Value3 command parameter.
    3. For the year, save the minimum 7-day average.
    4. Utilize the minimum values for all years, with the log-Pearson Type III distribution, to determine the value for the 10-year recurrence interval.
    See http://pubs.usgs.gov/sir/2008/5126/section3.html for a description of NqYY and "Hydrology for Engineers, 3rd Edition," Linsley, Kohler, Paulhus for a description of the log-Pearson Type III distribution.
    Requires: Value1 – the number of daily values to be averaged (currently this must be an odd number to allow bracketing the current day); Value2 – the return interval (e.g., 10); Value3 – the number of missing values allowed in the average (e.g., 0 for the most rigorous analysis). It may be useful to set this value if, for example, a single daily value is available in the time series, for example entered on the first day of the month.
Skew – Skew coefficient, as follows:

    Cs = [ N * Σ(i=1 to N) (Yi - Ymean)^3 ] / [ (N - 1)(N - 2)s^3 ]

    where s = standard deviation.
StdDev – Standard deviation.
SurplusMax – Maximum surplus value (where surplus is value minus mean).
SurplusMean – Mean surplus value (where surplus is value minus mean).
SurplusMin – Minimum surplus value (where surplus is value minus mean).
SurplusSeqLengthMax – Maximum number of sequential intervals where each value is greater than the mean (for example, maximum water surplus length).
SurplusSeqLengthMean – Mean number of sequential intervals where each value is greater than the mean (for example, mean water surplus length).
SurplusSeqLengthMin – Minimum number of sequential intervals where each value is greater than the mean (for example, minimum water surplus length).
SurplusSeqMax – Maximum sum of sequential values where each value is greater than the mean (for example, maximum water surplus volume).
SurplusSeqMean – Mean of the sums of sequential values where each value is greater than the mean (for example, mean water surplus volume).
SurplusSeqMin – Minimum sum of sequential values where each value is greater than the mean (for example, minimum water surplus volume).
Total – Total of values.
TrendOLS – Ordinary least squares analysis is used to compute results that are named TableStatisticColumn with appended _Intercept, _Slope, and _R2.
Variance – Variance.

The following example illustrates how to use the command to compute the 7q10 statistic for daily flow:

ReadDateValue(Alias="linsley",InputFile="Data\linsley.dv")
NewTable(TableID="Table1",Columns="TSID,string;7q10,double")
CalculateTimeSeriesStatistic(Statistic="NqYY",Value1=7,Value2=10,Value3=6,
  TableID="Table1",TableTSIDColumn="TSID",TableStatisticColumn="7q10")
WriteTableToDelimitedFile(TableID="Table1",
  OutputFile="Results/Test_CalculateTimeSeriesStatistic_7q10_linsley_out.csv")
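Two of the statistics in the table above are simple enough to sketch directly (standard library only; TSTool's internals may differ, and missing-value handling is omitted): the lag-k autocorrelation formula, and DeficitSeqLengthMax as the longest run of values below the mean.

```python
def lag_autocorrelation(y, k=1):
    """r_k per the formula shown in the Statistic Details table."""
    n = len(y)
    mean = sum(y) / n
    num = sum((y[i] - mean) * (y[i + k] - mean) for i in range(n - k))
    den = sum((v - mean) ** 2 for v in y)
    return num / den

def deficit_seq_length_max(y):
    """Longest run of sequential values below the mean (max drought length)."""
    mean = sum(y) / len(y)
    longest = run = 0
    for v in y:
        run = run + 1 if v < mean else 0
        longest = max(longest, run)
    return longest

data = [5.0, 3.0, 2.0, 4.0, 8.0, 9.0, 7.0, 2.0]
```

For this sample the mean is 5.0, so the longest below-mean run is the three values 3.0, 2.0, 4.0.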
Command Reference: ChangeInterval()
Create new time series by changing the input time series data interval
Version 10.10.01, 2011-04-18

The ChangeInterval() command creates new time series by changing the data interval of each input time series. A list of one or more time series, or an ensemble of time series, can be processed. The majority of the original header data (e.g., description, units) are copied to the new time series; however, the new interval will be used for data management and in the new time series identifier.

Time series data values have a time scale of instantaneous, accumulated (e.g., volume), or mean. Changing the interval also can result in a change in the time scale (e.g., converting instantaneous values to a mean value). Currently, the time scale for the input and output time series is NOT automatically determined from the data type and interval and must be specified as ACCM, MEAN, or INST. Instantaneous values are recorded at the date/time of the value and typically apply to small intervals (e.g., minute and hour). For mean and accumulated time series, the date/time for each value is at the end of the interval for which the value applies.

Irregular time series have a date/time precision and a scale appropriate for the data. For example, irregular minute time series may be used for instantaneous temperature or accumulated precipitation. Irregular day time series may be used for "instantaneous" reservoir level. For regular time series, the data intervals must align so that each larger interval aligns with the end-points of the corresponding smaller intervals (e.g., the ends of 6-hour intervals align with the daily