SAGE Facility MUSTANG metrics Web Service Documentation

total_latency – Total latency metric

Summary

Total latency is the elapsed time, in seconds, between the field acquisition of a channel’s most recent data sample in the data center disk buffer and the time the metric is measured. It is the sum of the data and feed latencies. MUSTANG measures total latency every four hours. Normal operating values are largely station-dependent and can vary widely. Channels with lower sample rates generally have longer total latencies because it takes longer to fill a telemetry packet or SEED record.
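As a sketch of the summed relationship described above, the component latencies below are hypothetical example values, not MUSTANG output:

```python
# Hypothetical component latencies for one channel, in seconds.
data_latency = 42.0   # example data latency measurement
feed_latency = 18.5   # example feed latency measurement

# Total latency is the sum of the data and feed latencies.
total_latency = data_latency + feed_latency
print(total_latency)  # 60.5
```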

Uses

Large or steadily increasing total latency may indicate a problem with the station, its communication hardware, the data feed, or the data center. Interpreting total latency alongside the individual data and feed latency measurements can help pinpoint the source of the delay.

Data Analyzed

Traces – one N.S.L.C (Network.Station.Location.Channel) per measurement
Window – zero; each value is an instantaneous measurement
Data Source – IRIS BUD near-real-time data cache

SEED Channel Types – All Time Series Channels

Algorithm

  • For a specified N.S.L.C, obtain the current
    • Td: acquisition time of the latest sample,
    • Tm: time at which the latency metric is calculated.
  • Calculate and report the total latency:
    total_latency = Tm – Td
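The algorithm above reduces to a single subtraction. A minimal Python sketch, using hypothetical timestamps for Td and Tm:

```python
from datetime import datetime, timezone

def total_latency(td: datetime, tm: datetime) -> float:
    """Return total_latency = Tm - Td, in seconds."""
    return (tm - td).total_seconds()

# Hypothetical values: latest sample acquired 90 s before the measurement time.
td = datetime(2024, 6, 1, 12, 0, 0, tzinfo=timezone.utc)
tm = datetime(2024, 6, 1, 12, 1, 30, tzinfo=timezone.utc)
print(total_latency(td, tm))  # 90.0
```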
    

Metric Values Returned

value – total latency in seconds
target – the trace analyzed, labeled as N.S.L.C.Q (Network.Station.Location.Channel.Quality)
start – Tm: time that the latency measurement was taken (UTC)
end – same as start
lddate – date/time the measurement was made and loaded into the MUSTANG database (UTC)
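For illustration, the fields listed above can be read from a comma-separated record such as the hypothetical row below; the station, values, and exact layout are assumptions for this sketch, not actual MUSTANG service output:

```python
import csv
from io import StringIO

# Hypothetical measurement record with the fields listed above;
# the actual MUSTANG output format may differ.
sample = StringIO(
    "value,target,start,end,lddate\n"
    "60.5,IU.ANMO.00.BHZ.D,2024/06/01 12:00:00,2024/06/01 12:00:00,2024/06/01 12:05:00\n"
)
row = next(csv.DictReader(sample))
print(row["target"])               # IU.ANMO.00.BHZ.D
print(float(row["value"]))         # 60.5
print(row["start"] == row["end"])  # True (end is the same as start)
```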

Notes

Data that were not archived as real-time data will not have latency metrics. All data in the real-time feeds carry SEED quality indicators of ‘R’ or ‘D’.

Contact

See Also

Updated