Currently, Glassnode metrics are available in the following resolutions:

1 month (API param: 1month)
1 week (API param: 1w)
1 day (API param: 24h)
1 hour (API param: 1h)
10 minutes (API param: 10m)
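As a minimal sketch of how the resolution is selected, the snippet below builds a request URL with the `i` (resolution) parameter. The endpoint path, the asset parameter `a`, and the `api_key` parameter reflect common Glassnode API usage but are assumptions here, not part of this page:

```python
# Sketch: selecting a resolution via the `i` query parameter.
# Endpoint path and the `a`/`api_key` parameters are assumptions for illustration.
from urllib.parse import urlencode

BASE = "https://api.glassnode.com/v1/metrics"

def build_url(path: str, resolution: str, api_key: str) -> str:
    """Build a request URL for `path` (e.g. 'addresses/active_count')
    at one of the supported resolutions: 1month, 1w, 24h, 1h, 10m."""
    allowed = {"1month", "1w", "24h", "1h", "10m"}
    if resolution not in allowed:
        raise ValueError(f"unsupported resolution: {resolution}")
    query = urlencode({"a": "BTC", "i": resolution, "api_key": api_key})
    return f"{BASE}/{path}?{query}"

url = build_url("addresses/active_count", "24h", "YOUR_API_KEY")
```

Passing a resolution outside the supported set raises a `ValueError` before any request is made.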
Resolution refers to both the frequency at which the data is updated and the time window over which a metric is aggregated. For instance, the number of active addresses at a 24h resolution shows the total number of active addresses over 24 hours (the time window), and a new data point is provided once a day (the frequency). The same mechanism applies analogously to all other resolutions.
For metrics that show the current state of the blockchain rather than an aggregate over a window (e.g. the number of addresses with a non-zero balance), the value refers to the time at the end of the interval.
Note: All metrics are always available at a 24h resolution. Whether other resolutions are available for a particular metric depends on the metric and your plan. For a full overview of which metric is currently available at what resolution, please refer to our Metrics Catalogue.
All metric timestamps are in UTC and always refer to the start of an interval.
1 month resolution:
2019-05-01 --> Includes data from 2019-05-01 00:00 UTC to 2019-05-31 23:59 UTC (i.e. May 2019)

1 week resolution:
2019-05-13 --> Includes data from 2019-05-13 00:00 UTC to 2019-05-19 23:59 UTC (i.e. Week 20)

1 day resolution:
2019-05-13 --> Includes data from 2019-05-13 00:00 UTC to 2019-05-13 23:59 UTC

1 hour resolution:
2019-05-13 10:00 UTC --> Includes data from 2019-05-13 10:00 UTC to 2019-05-13 10:59 UTC

10 Min resolution:
2019-05-13 10:20 UTC --> Includes data from 2019-05-13 10:20 UTC to 2019-05-13 10:29 UTC
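The examples above can be reproduced programmatically. The sketch below derives the (inclusive) end of the interval a timestamp covers, given its resolution; the function name is illustrative, not part of any Glassnode library:

```python
# Sketch: given a datapoint timestamp (= interval start, in UTC) and a
# resolution, compute the last minute of the interval it covers.
from datetime import datetime, timedelta, timezone

def interval_end(start: datetime, resolution: str) -> datetime:
    """Return the (inclusive) last minute of the interval starting at `start`."""
    if resolution == "1month":
        # jump to the first day of the next month, then step back one minute
        year, month = (start.year + 1, 1) if start.month == 12 else (start.year, start.month + 1)
        nxt = start.replace(year=year, month=month, day=1)
    elif resolution == "1w":
        nxt = start + timedelta(weeks=1)
    elif resolution == "24h":
        nxt = start + timedelta(days=1)
    elif resolution == "1h":
        nxt = start + timedelta(hours=1)
    elif resolution == "10m":
        nxt = start + timedelta(minutes=10)
    else:
        raise ValueError(f"unsupported resolution: {resolution}")
    return nxt - timedelta(minutes=1)

start = datetime(2019, 5, 13, tzinfo=timezone.utc)
print(interval_end(start, "1w"))  # → 2019-05-19 23:59:00+00:00
```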
Glassnode metric updaters are triggered at the end of the resolution interval:

1 month resolution: runs once on the first day of the calendar month at 00:00 UTC
1 week resolution: runs once every Monday at 00:00 UTC
1 day resolution: runs every 10 minutes from 00:00 UTC to 3:00 UTC (e.g. 00:00, 00:10, ..., 03:00)
1 hour resolution: runs once every hour at minute 0 (e.g. 10:00, 11:00, ...)
10 minute resolution: runs at every 10th minute (e.g. 10:00, 10:10, 10:20, ...)
Currently, the computation of a metric usually takes between 1 and 6 minutes. Requesting the data 10 minutes after the updaters run should therefore be safe to ensure the latest datapoint is included.
There are exceptions to the above:
Daily computation of ERC20 distribution metrics takes up to 30 minutes.
Daily computation of entity-related metrics (exchange metrics, miner metrics, entity-adjusted metrics) uses advanced clustering algorithms that can take up to 2 hours.
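The delays above can be encoded as a small lookup when scheduling requests. This is a sketch under the assumption that you classify metrics yourself; the category names (`erc20_distribution`, `entity_related`) are hypothetical labels for the exceptions listed above, not official identifiers:

```python
# Sketch: earliest safe time to request a datapoint after its updater triggers.
# Buffers follow the notes above: 10 min default, 30 min for ERC20 distribution
# metrics, 2 h for entity-related metrics. Category names are hypothetical.
from datetime import datetime, timedelta, timezone

SAFE_DELAY = {
    "default": timedelta(minutes=10),
    "erc20_distribution": timedelta(minutes=30),
    "entity_related": timedelta(hours=2),
}

def safe_fetch_time(updater_run: datetime, category: str = "default") -> datetime:
    """Earliest time at which the datapoint produced by the updater run
    at `updater_run` should reliably be available."""
    return updater_run + SAFE_DELAY.get(category, SAFE_DELAY["default"])

run = datetime(2019, 5, 14, 0, 0, tzinfo=timezone.utc)  # daily updater trigger
print(safe_fetch_time(run))  # → 2019-05-14 00:10:00+00:00
```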
The latest released metric datapoint can be subject to change.
Before a block is included in the computation of a metric, we wait until it has been confirmed:
BTC: 1 block confirmation
ETH: 12 block confirmations
BCH: 1 block confirmation
LTC: 6 block confirmations
We wait for block confirmations to reduce the number of data changes, e.g. by lowering the probability of initially including orphaned blocks.
Nonetheless, waiting for block confirmations means that not all blocks relevant to the last interval are always included, so the computation of the last datapoint of a metric can initially be incomplete.
Example for a Bitcoin metric at a 24h resolution: At 00:00 UTC the metric updater is triggered in order to create the previous day's datapoint. Because we wait for one block confirmation before a block's data is included in the computation of the metric, at this moment a block might still be missing for that last interval (the previous day in this case). In the next metric updater execution (for daily metrics that is 10 minutes later, see above), that last block (if confirmed by a subsequent block) is finally included in the last datapoint, so the value of that datapoint might change slightly. Depending on the block interval, the latest datapoint might only be updated and complete in later metric updater executions.
BTC, BCH, and LTC do not enforce monotonically increasing timestamps (e.g. block #10 can have an earlier timestamp than block #9). This happens only very rarely, but it means that a new block can be added to the blockchain with a timestamp earlier than that of the last block. If this happens, the datapoint of the interval that the timestamp falls into will change.
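A practical consequence of the two points above is that a local copy of a metric should treat recent datapoints as mutable. The sketch below, with a hypothetical datapoint shape (`t` = timestamp, `v` = value), overwrites existing entries with freshly fetched ones instead of only appending:

```python
# Sketch: because the most recent datapoints can still change (late blocks,
# non-monotonic timestamps), let fresh values overwrite local ones with the
# same timestamp. The datapoint shape {"t": ..., "v": ...} is hypothetical.
def merge_datapoints(local: list[dict], fresh: list[dict]) -> list[dict]:
    """Merge freshly fetched datapoints into `local`, keyed by timestamp 't';
    fresh values win on conflict. Returns the merged series sorted by time."""
    by_ts = {p["t"]: p for p in local}
    by_ts.update({p["t"]: p for p in fresh})
    return [by_ts[t] for t in sorted(by_ts)]

local = [{"t": 100, "v": 1.0}, {"t": 200, "v": 2.0}]
fresh = [{"t": 200, "v": 2.5}, {"t": 300, "v": 3.0}]  # t=200 was revised
print(merge_datapoints(local, fresh))
# → [{'t': 100, 'v': 1.0}, {'t': 200, 'v': 2.5}, {'t': 300, 'v': 3.0}]
```

Re-fetching the last few intervals on each sync and merging this way keeps a local store consistent with later revisions.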