# Introduction

Glean is a modern approach to a telemetry library and is part of the Glean project.

There are two implementations of Glean, with support for 5 different programming languages in total. Both implementations strive to contain the same features with similar, but idiomatic APIs.

Unless clearly stated otherwise, regard the text in this book as valid for both clients and all the supported programming languages and environments.

### The Glean SDK

The Glean SDK is an implementation of Glean in Rust, with language bindings for Kotlin, Python, Rust and Swift.

For development documentation on the Glean SDK, refer to the Glean SDK development book.

To report issues or request changes on the Glean SDK, file a bug in Bugzilla in Data Platform & Tools :: Glean: SDK.

### Glean.js

Glean.js is an implementation of Glean in Javascript. Currently, it only has support for usage in web extensions.

For development documentation on Glean.js, refer to the Glean.js development documentation.

To report issues or request changes on Glean.js, file a bug in Bugzilla in Data Platform & Tools :: Glean.js.

Note: Glean.js is still in development and does not provide all the features the Glean SDK does. Feature parity will be worked on after initial validation. Do not hesitate to file a bug if you want to use Glean.js and it is missing a key Glean feature.

## Sections

### Using Glean

In this section we describe how to use Glean in your own libraries and applications. It covers the first steps of integrating Glean into your project, choosing the right metric type, debugging products that use Glean, and Glean's built-in error reporting mechanism. If you want to start using Glean to report data, this is the section you should read.

### Metric Types

This section lists all the metric types provided by Glean, with examples on how to define them and record data using them. Before diving into the details of Glean's metric types, don't forget to read the Choosing a metric type page.

### Pings

This section explains what a ping is and how to define custom pings. A Glean client may provide off-the-shelf pings, such as the metrics or baseline pings. In this section, you will also find the descriptions and the schedules of each of these pings.

### Appendix

#### Glossary

In this book we use a lot of Glean-specific terminology. In the glossary, we go through many of the terms used throughout this book and describe exactly what we mean when we use them.

#### Changelog

This section contains detailed notes about changes in Glean, per release.

#### This Week in Glean

“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.

## Contact

To contact the Glean team you can:

- Find us in the #glean channel on chat.mozilla.org.
- Send an email to glean-team@mozilla.com.
- The Glean SDK team is: :janerik, :dexter, :travis, :mdroettboom, :gfritzsche, :chutten, :brizental.

The Glean.js and Glean SDK source code is subject to the terms of the Mozilla Public License v2.0. You can obtain a copy of the MPL at https://mozilla.org/MPL/2.0/.

# Using the Glean SDK

In this chapter we describe how to use the Glean SDK in your own libraries and applications.

## Glean integration checklist

The Glean integration checklist can help to ensure your Glean SDK-using product is meeting all of the recommended guidelines.

Products (applications or libraries) using the Glean SDK to collect telemetry must:

1. Integrate the Glean SDK into the build system. Since the Glean SDK does some code generation for your metrics at build time, this requires a few more steps than just adding a library.

2. Include the markdown-formatted documentation generated from the metrics.yaml and pings.yaml files in the project's documentation.

3. Go through the data review process for all newly collected data.

4. Ensure that telemetry coming from automated testing or continuous integration is either not sent to the telemetry server or tagged with the automation tag using the sourceTag feature.

5. File a data engineering bug to enable your product's application id.

Additionally, applications (but not libraries) must:

1. Provide a way for users to turn data collection off (e.g. providing settings to control Glean.setUploadEnabled()). The exact method used is application-specific.

## Usage

#### Setting up the dependency (Android)

The Glean SDK is published on maven.mozilla.org. To use it, you need to add the following to your project's top-level build file, in the allprojects block (see e.g. Glean SDK's own build.gradle):

```gradle
repositories {
    maven {
        url "https://maven.mozilla.org/maven2"
    }
}
```

Each module that uses Glean SDK needs to specify it in its build file, in the dependencies block. Add this to your Gradle configuration:

```gradle
implementation "org.mozilla.components:service-glean:{latest-version}"
```

Important: the {latest-version} placeholder in the above snippet should be replaced with the version of Android Components used by the project.

The Glean SDK is released as part of android-components. Therefore, it follows android-components' versions. The android-components release page can be used to determine the latest version.

For example, if version 33.0.0 is used, then the include directive becomes:

```gradle
implementation "org.mozilla.components:service-glean:33.0.0"
```

Size impact on the application APK: the Glean SDK APK ships binary libraries for all the supported platforms. Each library file measures about 600KB. If the final APK size of the consuming project is a concern, please enable ABI splits.
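If APK size is a concern, ABI splits can be enabled in the application module's build.gradle. A minimal sketch using the standard Android Gradle DSL (the exact ABI list depends on which platforms your product supports):

```gradle
android {
    splits {
        abi {
            // Produce one APK per ABI instead of one universal APK that
            // bundles the Glean native library for every platform.
            enable true
            reset()
            include "x86", "x86_64", "armeabi-v7a", "arm64-v8a"
            universalApk false
        }
    }
}
```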

#### Requirements

- Python >= 3.6.

#### Setting up the dependency (iOS)

The Glean SDK can be consumed through Carthage, a dependency manager for macOS and iOS. For consuming the latest version of the Glean SDK, add the following line to your Cartfile:

```
github "mozilla/glean" "{latest-version}"
```

Important: the {latest-version} placeholder should be replaced with the version number of the latest Glean SDK release. You can find the version number on the release page.

Then check out and build the new dependency:

```shell
carthage update --platform iOS
```

#### Integrating with the build system

For integration with the build system you can follow the Carthage Quick Start steps.

1. After building the dependency, drag the built .framework binaries from Carthage/Build/iOS into your application's Xcode project.

2. On your application targets' Build Phases settings tab, click the + icon and choose New Run Script Phase. If you already use Carthage for other dependencies, extend the existing step. Create a Run Script in which you specify your shell (ex: /bin/sh), add the following contents to the script area below the shell:

```shell
/usr/local/bin/carthage copy-frameworks
```

3. Add the path to the Glean framework under "Input Files":

```
$(SRCROOT)/Carthage/Build/iOS/Glean.framework
```

4. Add the paths to the copied framework to the "Output Files":

```
$(BUILT_PRODUCTS_DIR)/$(FRAMEWORKS_FOLDER_PATH)/Glean.framework
```

#### Combined usage with application-services

If your application uses both the Glean SDK and application-services you can use a combined release to reduce the memory usage and startup time impact.

In your Cartfile require only application-services, e.g.:

```
github "mozilla/application-services" ~> "{latest-version}"
```

Important: the {latest-version} placeholder should be replaced with the version number of the latest application-services release. You can find the version number on the release page.

Then check out and build the new dependency:

```shell
carthage update --platform iOS
```

#### Setting up the dependency (Python)

We recommend using a virtual environment for your work to isolate the dependencies for your project. There are many popular abstractions on top of virtual environments in the Python ecosystem which can help manage your project dependencies.

The Glean SDK Python bindings currently have prebuilt wheels on PyPI for Windows (i686 and x86_64), Linux/glibc (x86_64) and macOS (x86_64).

For other platforms, including *BSD or Linux distributions that don't use glibc, such as Alpine Linux, the glean_sdk package will be built from source on your machine. This requires that Cargo and Rust are already installed. The easiest way to do this is through rustup.

Once you have your virtual environment set up and activated, you can install the Glean SDK into it using:

```shell
python -m pip install glean_sdk
```

The Glean SDK Python bindings make extensive use of type annotations to catch type related errors at build time. We highly recommend adding mypy to your continuous integration workflow to catch errors related to type mismatches early.
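As an illustration of the kind of error mypy catches, consider a metric wrapper whose set method is annotated to accept only strings. The StringMetric class below is a hypothetical stand-in for a generated metric object, not the real Glean API:

```python
from typing import Optional

class StringMetric:
    """Hypothetical stand-in for a generated Glean string metric wrapper."""

    def __init__(self) -> None:
        self._value: Optional[str] = None

    def set(self, value: str) -> None:
        # Annotated to accept only str; mypy flags any other argument type.
        self._value = value

search_engine = StringMetric()
search_engine.set("example-engine")  # OK
# search_engine.set(42)  # mypy would report: expected "str", got "int"
```

Running `mypy` over code like this in CI surfaces the commented-out mistake at check time, long before the mismatched value would be dropped (and an error recorded) at runtime.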

TODO. To be implemented in bug 1643568.

All metrics that your project collects must be defined in a metrics.yaml file.

Important: as stated before, any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.
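For orientation, a metrics.yaml entry follows a fixed schema. The sketch below is hypothetical: the category, metric name, URLs and dates are placeholders, not a real collection:

```yaml
# Hypothetical example entry; all names and URLs are placeholders.
---
$schema: moz://mozilla.org/schemas/glean/metrics/1-0-0

search:
  default_engine:
    type: string
    description: >
      The name of the user's default search engine.
    lifetime: application
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0000000
    data_reviews:
      - https://example.org/data-review-link
    notification_emails:
      - your-team@example.com
    expires: "2021-01-01"
```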

In order for the Glean SDK to generate an API for your metrics, two Gradle plugins must be included in your build:

The Glean Gradle plugin is distributed through Mozilla's Maven, so we need to tell your build where to look for it by adding the following to the top of your build.gradle:

```gradle
buildscript {
    repositories {
        // Include the next clause if you are tracking snapshots of android components
        maven {
            url "https://snapshots.maven.mozilla.org/maven2"
        }
        maven {
            url "https://maven.mozilla.org/maven2"
        }
    }

    dependencies {
        classpath "org.mozilla.components:tooling-glean-gradle:{android-components-version}"
    }
}
```

Important: as above, the {android-components-version} placeholder in the above snippet should be replaced with the version number of Android Components used in your project.

The JetBrains Python plugin is distributed in the Gradle plugin repository, so it can be included with:

```gradle
plugins {
    id "com.jetbrains.python.envs" version "0.0.26"
}
```

Right before the end of the same file, we need to apply the Glean Gradle plugin. Set any additional parameters to control the behavior of the Glean Gradle plugin before calling apply plugin.

```gradle
// Optionally, set any parameters to send to the plugin.
ext.gleanGenerateMarkdownDocs = true

apply plugin: "org.mozilla.telemetry.glean-gradle-plugin"
```

Note: Earlier versions of Glean used a Gradle script (sdk_generator.gradle) rather than a Gradle plugin. Its use is deprecated and projects should be updated to use the Gradle plugin as described above.

Note: The Glean Gradle plugin has limited support for offline builds of applications that use the Glean SDK.

The metrics.yaml file is parsed at build time and Swift code is generated. Add a new metrics.yaml file to your Xcode project.

Follow these steps to automatically run the parser at build time:

1. Download the sdk_generator.sh script from the Glean repository:

```
https://raw.githubusercontent.com/mozilla/glean/{latest-release}/glean-core/ios/sdk_generator.sh
```

2. Add the sdk_generator.sh script to your project's root directory.

Important: as above, the {latest-release} placeholder should be replaced with the version number of the Glean SDK release used in this project.

3. On your application targets' Build Phases settings tab, click the + icon and choose New Run Script Phase. Create a Run Script in which you specify your shell (ex: /bin/sh), add the following contents to the script area below the shell:

```shell
bash $PWD/sdk_generator.sh
```

Note: If you are using the combined release of application-services and the Glean SDK you need to set the namespace to MozillaAppServices, e.g.:

```shell
bash $PWD/sdk_generator.sh --glean-namespace MozillaAppServices
```

4. Add the path to your metrics.yaml and (optionally) pings.yaml under "Input files":

```
$(SRCROOT)/{project-name}/metrics.yaml
$(SRCROOT)/{project-name}/pings.yaml
```

5. Add the path to the generated code file to the "Output Files":

```
$(SRCROOT)/{project-name}/Generated/Metrics.swift
```

Important: The parser now generates a single file called Metrics.swift (since Glean v31.0.0).

6. If you are using Git, add the following lines to your .gitignore file:

```
.venv/
{project-name}/Generated
```

This will ignore files that are generated at build time by the sdk_generator.sh script. They don't need to be kept in version control, as they can be re-generated from your metrics.yaml and pings.yaml files.

Important information about Glean and embedded extensions: Metric collection is a no-op in application extensions and Glean will not run. Since extensions run in a separate sandbox and process from the application, Glean would run in an extension as if it were a completely separate application with different client ids and storage. This complicates things because Glean doesn't know or care about other processes. Because of this, Glean is purposefully prevented from running in an application extension and if metrics need to be collected from extensions, it's up to the integrating application to pass the information to the base application to record in Glean.

For Python, the metrics.yaml file must be available and loaded at runtime.

If your project is a script (i.e. just Python files in a directory), you can load the metrics.yaml using:

```python
from glean import load_metrics

metrics = load_metrics("metrics.yaml")

# Use a metric on the returned object
metrics.your_category.your_metric.set("value")
```

If your project is a distributable Python package, you need to include the metrics.yaml file using one of the myriad ways to include data in a Python package and then use pkg_resources.resource_filename() to get the filename at runtime.
```python
from glean import load_metrics
from pkg_resources import resource_filename

metrics = load_metrics(resource_filename(__name__, "metrics.yaml"))

# Use a metric on the returned object
metrics.your_category.your_metric.set("value")
```

### Automation steps

#### Documentation

The documentation for your application or library's metrics and pings are written in metrics.yaml and pings.yaml. However, you should also provide human-readable markdown files based on this information, and this is a requirement for Mozilla projects using the Glean SDK.

For other languages and platforms, this transformation is done automatically as part of the build. However, for Python the integration to automatically generate docs is an additional step.

The Glean SDK provides a commandline tool for automatically generating markdown documentation from your metrics.yaml and pings.yaml files. To perform that translation, run glean_parser's translate command:

```shell
python3 -m glean_parser translate -f markdown -o docs metrics.yaml pings.yaml
```

To get more help about the commandline options:

```shell
python3 -m glean_parser translate --help
```

We recommend integrating this step into your project's documentation build. The details of that integration are left to you, since they depend on the documentation tool being used and how your project is set up.

#### Metrics linting

Glean includes a "linter" for metrics.yaml and pings.yaml files called the glinter that catches a number of common mistakes in these files.

As part of your continuous integration, you should run the following on your metrics.yaml and pings.yaml files:

```shell
python3 -m glean_parser glinter metrics.yaml pings.yaml
```

A new build target needs to be added to the project csproj file in order to generate the metrics and pings APIs from the registry files (e.g. metrics.yaml, pings.yaml).

```xml
<Project>
  <!-- ... other directives ... -->

  <Target Name="GleanIntegration" BeforeTargets="CoreCompile">
    <ItemGroup>
      <!--
        Note that the two files are not required: Glean will work just fine
        with just the 'metrics.yaml'. A 'pings.yaml' is only required if
        custom pings are defined. Please also note that more than one
        metrics file can be added.
      -->
      <GleanRegistryFiles Include="metrics.yaml" />
      <GleanRegistryFiles Include="pings.yaml" />
    </ItemGroup>

    <!-- This is what actually runs the parser. -->
    <GleanParser RegistryFiles="@(GleanRegistryFiles)" OutputPath="$(IntermediateOutputPath)Glean" Namespace="csharp.GleanMetrics" />

    <!--
      And this adds the generated files to the project, so that they can be
      found by the compiler and Intellisense.
    -->
    <ItemGroup>
      <Compile Include="$(IntermediateOutputPath)Glean\**\*.cs" />
    </ItemGroup>
  </Target>
</Project>
```

This is using the Python 3 interpreter found in PATH under the hood. The GLEAN_PYTHON environment variable can be used to provide the location of the Python 3 interpreter.

### Adding custom pings

Please refer to the custom pings documentation.

Important: as stated before, any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.

### Parallelism

All of the Glean SDK's target languages use a separate worker thread to do most of its work, including any I/O. This thread is fully managed by the Glean SDK as an implementation detail. Therefore, users should feel free to use the Glean SDK wherever it is most convenient, without worrying about the performance impact of updating metrics and sending pings.

Since the Glean SDK performs disk and networking I/O, it tries to do as much of its work as possible on separate threads and processes. Since there are complex trade-offs and corner cases to support Python parallelism, it is hard to design a one-size-fits-all approach.

#### Default behavior

When using the Python bindings, most of the Glean SDK's work is done on a separate thread, managed by the Glean SDK itself. The Glean SDK releases the Global Interpreter Lock (GIL) for most of its operations, therefore your application's threads should not be in contention with the Glean SDK's worker thread.

The Glean SDK installs an atexit handler so that its worker thread can cleanly finish when your application exits. This handler will wait up to 30 seconds for any pending work to complete.

By default, ping uploading is performed in a separate child process. This process will continue to upload any pending pings even after the main process shuts down. This is important for commandline tools where you want to return control to the shell as soon as possible and not be delayed by network connectivity.
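The worker-thread-plus-atexit pattern described above can be sketched in plain Python. This is an illustrative model only, not the real Glean implementation (which lives in the Rust core):

```python
import atexit
import queue
import threading

class DispatchThread:
    """Illustrative model of a Glean-style worker thread: tasks run in
    order on a single background thread, and shutdown waits a bounded
    time for pending work to complete."""

    def __init__(self, shutdown_timeout: float = 30.0) -> None:
        self._tasks: "queue.Queue" = queue.Queue()
        self._timeout = shutdown_timeout
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()
        # Like the Glean Python bindings, finish pending work at exit.
        atexit.register(self.shutdown)

    def _run(self) -> None:
        while True:
            task = self._tasks.get()
            if task is None:  # sentinel: stop after all pending work
                return
            task()

    def launch(self, task) -> None:
        self._tasks.put(task)

    def shutdown(self) -> None:
        self._tasks.put(None)
        # Wait up to the timeout (30 s, matching the documented behavior).
        self._thread.join(self._timeout)

# Usage sketch: the recording happens on the worker thread, and
# shutdown() blocks until it has been processed.
results = []
dispatcher = DispatchThread()
dispatcher.launch(lambda: results.append("metric recorded"))
dispatcher.shutdown()
```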
#### Cases where subprocesses aren't possible

The default approach may not work with applications built using PyInstaller or similar tools which bundle an application together with a Python interpreter, making it impossible to spawn new subprocesses of that interpreter. For these cases, there is an option to ensure that ping uploading occurs in the main process. To do this, set the allow_multiprocessing parameter on the glean.Configuration object to False.

#### Using the multiprocessing module

Additionally, the default approach does not work if your application uses the multiprocessing module for parallelism. The Glean SDK can not wait to finish its work in a multiprocessing subprocess, since atexit handlers are not supported in that context. Therefore, if the Glean SDK detects that it is running in a multiprocessing subprocess, all of its work that would normally run on a worker thread will run on the main thread. In practice, this should not be a performance issue: since the work is already in a subprocess, it will not block the main process of your application.

### Testing metrics

In order to make testing metrics easier 'out of the box', all metrics include a set of test API functions in order to facilitate unit testing. These include functions to test whether a value has been stored, and functions to retrieve the stored value for validation.

For more information, please refer to Unit testing Glean metrics.

# The General API

The Glean SDK has a minimal API available on its top-level Glean object. This API allows one to enable and disable upload, register custom pings and set experiment data.

Important: The Glean SDK should only be initialized from the main application, not individual libraries. If you are adding Glean SDK support to a library, you can safely skip this section.

## The API

The Glean SDK provides a general API that supports the following operations. See below for language-specific details.
| Operation | Description | Notes |
| --------- | ----------- | ----- |
| initialize | Configure and initialize the Glean SDK. | Initializing the Glean SDK |
| setUploadEnabled | Enable or disable Glean collection and upload. | Enabling and disabling Metrics |
| registerPings | Register custom pings generated from pings.yaml. | Custom pings |
| setExperimentActive | Indicate that an experiment is running. | Using the Experiments API |
| setExperimentInactive | Indicate that an experiment is no longer running. | Using the Experiments API |

## Initializing the Glean SDK

The following steps are required for applications using the Glean SDK, but not libraries.

Note: The initialize function must be called, even if telemetry upload is disabled. Glean needs to perform maintenance tasks even when telemetry is disabled, and because Glean does this as part of its initialization, it is required to always call the initialize function. Otherwise, Glean won't be able to clean up collected data, disable queuing of pre-init tasks, or perform other required operations. This does not apply to special builds where telemetry is disabled at build time. In that case, it is acceptable to not call initialize at all.

Note: The Glean SDK does not support use across multiple processes, and must only be initialized on the application's main process. Initializing in other processes is a no-op. Additionally, Glean must be initialized on the main (UI) thread of the application's main process. Failure to do so will throw an IllegalThreadStateException.

An excellent place to initialize Glean is within the onCreate method of the class that extends Android's Application class.

```kotlin
import org.mozilla.yourApplication.GleanMetrics.GleanBuildInfo
import org.mozilla.yourApplication.GleanMetrics.Pings

class SampleApplication : Application() {
    override fun onCreate() {
        super.onCreate()

        // If you have custom pings in your application, you must register them
        // using the following command. This command should be omitted for
        // applications not using custom pings.
        Glean.registerPings(Pings)

        // Initialize the Glean library.
        Glean.initialize(
            applicationContext,
            // Here, settings() is a method to get user preferences, specific to
            // your application and not part of the Glean SDK API.
            uploadEnabled = settings().isTelemetryEnabled,
            buildInfo = GleanBuildInfo.buildInfo
        )
    }
}
```

Once initialized, if uploadEnabled is true, the Glean SDK will automatically start collecting baseline metrics and sending its pings, according to their respective schedules. If uploadEnabled is false, any persisted metrics, events and pings (other than first_run_date and first_run_hour) are cleared, and subsequent calls to record metrics will be no-ops.

The Glean SDK should be initialized as soon as possible, and importantly, before any other libraries in the application start using Glean. Library code should never call Glean.initialize, since it should be called exactly once per application.

Note: if the application has the concept of release channels and knows which channel it is on at run-time, then it can provide the Glean SDK with this information by setting it as part of the Configuration object parameter of the Glean.initialize method. For example:

```kotlin
Glean.initialize(
    applicationContext,
    uploadEnabled = settings().isTelemetryEnabled,
    configuration = Configuration(channel = "beta"),
    buildInfo = GleanBuildInfo.buildInfo
)
```

Note: When the Glean SDK is consumed through Android Components, it is required to configure an HTTP client to be used for upload. For example:

```kotlin
// Requires org.mozilla.components:concept-fetch
import mozilla.components.concept.fetch.Client

// Requires org.mozilla.components:lib-fetch-httpurlconnection.
// This can be replaced by other implementations, e.g. lib-fetch-okhttp
// or an implementation from browser-engine-gecko.
import mozilla.components.lib.fetch.httpurlconnection.HttpURLConnectionClient

import mozilla.components.service.glean.config.Configuration
import mozilla.components.service.glean.net.ConceptFetchHttpUploader

val httpClient = ConceptFetchHttpUploader(lazy { HttpURLConnectionClient() as Client })
val config = Configuration(httpClient = httpClient)
Glean.initialize(
    context,
    uploadEnabled = true,
    configuration = config,
    buildInfo = GleanBuildInfo.buildInfo
)
```

Note: The Glean SDK does not support use across multiple processes, and must only be initialized on the application's main process.

An excellent place to initialize Glean is within the application(_:) method of the class that extends the UIApplicationDelegate class.

```swift
import Glean
import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_: UIApplication, didFinishLaunchingWithOptions _: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // If you have custom pings in your application, you must register them
        // using the following command. This command should be omitted for
        // applications not using custom pings.
        Glean.shared.registerPings(GleanMetrics.Pings)

        // Initialize the Glean library.
        Glean.shared.initialize(
            // Here, Settings is a method to get user preferences specific to
            // your application, and not part of the Glean SDK API.
            uploadEnabled: Settings.isTelemetryEnabled
        )

        return true
    }
}
```

Once initialized, if uploadEnabled is true, the Glean SDK will automatically start collecting baseline metrics and sending its pings, according to their respective schedules. If uploadEnabled is false, any persisted metrics, events and pings (other than first_run_date and first_run_hour) are cleared, and subsequent calls to record metrics will be no-ops.

The Glean SDK should be initialized as soon as possible, and importantly, before any other libraries in the application start using Glean.
Library code should never call Glean.shared.initialize, since it should be called exactly once per application.

Note: if the application has the concept of release channels and knows which channel it is on at run-time, then it can provide the Glean SDK with this information by setting it as part of the Configuration object parameter of the Glean.shared.initialize method. For example:

```swift
Glean.shared.initialize(Configuration(channel: "beta"))
```

The main control for the Glean SDK is on the glean.Glean singleton.

The Glean SDK should be initialized as soon as possible, and importantly, before any other libraries in the application start using Glean. Library code should never call Glean.initialize, since it should be called exactly once per application.

```python
from glean import Glean

Glean.initialize(
    application_id="my-app-id",
    application_version="0.1.0",
    upload_enabled=True,
)
```

Once initialized, if upload_enabled is true, the Glean SDK will automatically start collecting baseline metrics. If upload_enabled is false, any persisted metrics, events and pings (other than first_run_date and first_run_hour) are cleared, and subsequent calls to record metrics will be no-ops.

Additional configuration is available on the glean.Configuration object, which can be passed into Glean.initialize().

Unlike Android and Swift, the Python bindings do not automatically send any pings. See the custom pings documentation about adding custom pings and sending them.

The main control for the Glean SDK is on the GleanInstance singleton.

The Glean SDK should be initialized as soon as possible, and importantly, before any other libraries in the application start using Glean. Library code should never call Glean.initialize, since it should be called exactly once per application.
```csharp
using static Mozilla.Glean.Glean;

GleanInstance.Initialize(
    applicationId: "my.app.id",
    applicationVersion: "0.1.1",
    uploadEnabled: true,
    configuration: new Configuration(),
    dataDir: gleanDataDir
);
```

## Behavior when uninitialized

Metric recording that happens before the Glean SDK is initialized is queued and applied at initialization. To avoid unbounded memory growth the queue is bounded (currently to a maximum of 100 tasks), and further recordings are dropped. The number of recordings dropped, if any, is recorded in the glean.error.preinit_tasks_overflow metric.

Custom ping submission will not fail before initialization. Collection and upload of the custom ping is delayed until the Glean SDK is initialized. Built-in pings are only available after initialization.

## Enabling and disabling metrics

Glean.setUploadEnabled() should be called in response to the user enabling or disabling telemetry.

Note: If called before Glean.initialize() the call to Glean.setUploadEnabled() will be ignored. Set the initial state using uploadEnabled on Glean.initialize().

Glean.shared.setUploadEnabled() should be called in response to the user enabling or disabling telemetry.

Note: If called before Glean.shared.initialize() the call to Glean.shared.setUploadEnabled() will be ignored. Set the initial state using uploadEnabled on Glean.shared.initialize().

Glean.set_upload_enabled() should be called in response to the user enabling or disabling telemetry.

Note: If called before Glean.initialize() the call to Glean.set_upload_enabled() will be ignored. Set the initial state using upload_enabled on Glean.initialize().

GleanInstance.SetUploadEnabled() should be called in response to the user enabling or disabling telemetry.

Note: If called before GleanInstance.Initialize() the call to GleanInstance.SetUploadEnabled() will be ignored. Set the initial state using uploadEnabled on GleanInstance.Initialize().

The application should provide some form of user interface to call this method.
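The bounded pre-initialization queue described under "Behavior when uninitialized" can be modeled in a few lines of Python. This is an illustrative sketch, not the real implementation (which lives in the Rust core):

```python
class PreInitQueue:
    """Illustrative model of Glean's bounded pre-initialization task queue."""

    def __init__(self, max_tasks: int = 100) -> None:
        self._tasks: list = []
        self._max_tasks = max_tasks
        # In Glean, this count would feed glean.error.preinit_tasks_overflow.
        self.overflow_count = 0

    def record(self, task) -> None:
        """Queue a recording made before initialization; drop on overflow."""
        if len(self._tasks) < self._max_tasks:
            self._tasks.append(task)
        else:
            self.overflow_count += 1

    def flush(self):
        """At initialization, apply the queued recordings in order."""
        results = [task() for task in self._tasks]
        self._tasks.clear()
        return results

# Usage sketch: 103 recordings before init means 3 are dropped and counted.
pending = PreInitQueue()
for i in range(103):
    pending.record(lambda i=i: i)
assert pending.overflow_count == 3
```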
When going from enabled to disabled, all pending events, metrics and pings are cleared, except for first_run_date and first_run_hour. When re-enabling, core Glean metrics will be recomputed at that time.

# Adding new metrics

## Process overview

When adding a new metric, the process is:

- Consider the question you are trying to answer with this data, and choose the metric type and parameters to use.
- Add a new entry to metrics.yaml.
- Add code to your project to record into the metric by calling the Glean SDK.

Important: Any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.

## Choosing a metric type

The following is a set of questions to ask about the data being collected to help better determine which metric type to use.

### Is it a single measurement?

If the value is true or false, use a boolean metric.

If the value is a string, use a string metric. For example, to record the name of the default search engine.

Beware: string metrics are exceedingly general, and you are probably best served by selecting the most specific metric for the job, since you'll get better error checking and richer analysis tools for free. For example, avoid storing a number in a string metric --- you probably want a counter metric instead.

If you need to store multiple string values in a metric, use a string list metric. For example, you may want to record the list of other Mozilla products installed on the device.

For all of the metric types in this section that measure single values, it is especially important to consider how the lifetime of the value relates to the ping it is being sent in. Since these metrics don't perform any aggregation on the client side, when a ping containing the metric is submitted, it will contain only the "last known" value for the metric, potentially resulting in data loss. There is further discussion of metric lifetimes below.
### Are you counting things?

If you want to know how many times something happened, use a counter metric.

If you are counting a group of related things, or you don't know what all of the things to count are at build time, use a labeled counter metric.

If you need to know how many times something happened relative to the number of times something else happened, use a rate metric.

If you need to know when the things being counted happened relative to other things, consider using an event.

### Are you measuring time?

If you need to record an absolute time, use a datetime metric. Datetimes are recorded in the user's local time, according to their device's real time clock, along with a timezone offset from UTC. Datetime metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question.

If you need to record how long something takes you have a few options.

If you need to measure the total time spent doing a particular task, look to the timespan metric. Timespan metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question. Note that this metric should only be used to measure time on a single thread. If multiple overlapping timespans are measured for the same metric, an invalid state error is recorded.

If you need to measure the relative occurrences of many timings, use a timing distribution. It builds a histogram of timing measurements, and is safe to record multiple concurrent timespans on different threads.

If you need to know the time between multiple distinct actions that aren't a simple "begin" and "end" pair, consider using an event.

### Do you need to know the order of events relative to other events?
If you need to know the order of actions relative to other actions, such as the user performed tasks A, B, and then C, and this is meaningfully different from the user performing tasks A, C and then B (in other words, the order is meaningful beyond just the fact that a set of tasks was performed), use an event metric.

Important: events are the most expensive metric type to record, transmit, store and analyze, so they should be used sparingly, and only when none of the other metric types are sufficient for answering your question.

## For how long do you need to collect this data?

Think carefully about how long the metric will be needed, and set the expires parameter to disable the metric at the earliest possible time. This is an important component of Mozilla's lean data practices.

When the metric passes its expiration date (determined at build time), it will automatically stop collecting data.

When a metric's expiration is within 14 days, emails will be sent from telemetry-alerts@mozilla.com to the notification_emails addresses associated with the metric. At that time, the metric should be removed, which involves removing it from the metrics.yaml file and removing uses of it in the source code. Removing a metric does not affect the availability of data already collected by the pipeline.

If the metric is still needed after its expiration date, it should go back for another round of data review to have its expiration date extended.

Important: Ensure that telemetry alerts are received and reviewed in a timely manner. Expired metrics don't record any data, so extending or removing a metric should be done in time. Consider adding both a group email address and an individual who is responsible for this metric to the notification_emails list.

## When should the Glean SDK automatically clear the measurement?

The lifetime parameter of a metric defines when its value will be cleared.
There are three lifetime options available:

• ping (default): The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.
• application: The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake---using the ping lifetime for these types of metrics---means that they will only be included in the first ping sent during a particular run of the application.
• user: Reach out to the Glean team before using this. The metric is part of the user's profile and will live as long as the profile lives. This is often not the best choice unless the metric records a value that really needs to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id, or the day the product was first executed. It is rare to use this lifetime outside of some metrics that are built in to the Glean SDK.

While lifetimes are important to understand for all metric types, they are particularly important for the metric types that record single values and don't aggregate on the client (boolean, string, labeled_string, string_list, datetime and uuid), since these metrics will send the "last known" value and missing the earlier values could be a form of unintended data loss.

### A lifetime example

Let's work through an example to see how these lifetimes play out in practice. Suppose we have a user preference, "turbo mode", which defaults to false, but the user can turn it to true at any time. We want to know when this flag is true so we can measure its effect on other metrics in the same ping.
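Before walking through the example, the clearing rules for the three lifetimes can be sketched as a toy model. This is not the Glean SDK's actual storage; the class and method names here are invented for illustration.

```python
# Toy model of lifetime clearing -- NOT the Glean SDK's actual storage.
class ToyMetricStore:
    def __init__(self):
        self.values = {}  # metric name -> (lifetime, value)

    def set(self, name, lifetime, value):
        self.values[name] = (lifetime, value)

    def submit_ping(self):
        """Snapshot current values, then clear ping-lifetime metrics."""
        snapshot = {name: value for name, (_, value) in self.values.items()}
        self.values = {n: lv for n, lv in self.values.items() if lv[0] != "ping"}
        return snapshot

    def restart_application(self):
        """Simplified: clear application-lifetime metrics on restart."""
        self.values = {n: lv for n, lv in self.values.items() if lv[0] == "user"}

store = ToyMetricStore()
store.set("turbo_mode", "ping", True)
store.submit_ping()  # this ping includes turbo_mode...
store.submit_ping()  # ...but this one does not: the value was cleared
```

In the real SDK, application-lifetime metrics are cleared only after any Glean-owned startup ping is submitted; the toy model above skips that subtlety.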
In the following diagram, we look at a time period that sends 4 pings across two separate runs of the application. We assume here that, like the Glean SDK's built-in metrics ping, the developer writing the metric isn't in control of when the ping is submitted. In this diagram, the ping measurement windows are represented as rectangles, and the moment the ping is "submitted" is represented by its right edge. The user changes the "turbo mode" setting from false to true in the first run, and then toggles it again twice in the second run.

• A. Ping lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. It is included in the first ping after being recorded (Ping 2), which causes it to be cleared.
• B. Ping lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Ping 2, which causes it to be cleared. It therefore misses Ping 3, but when the application is started, it is recorded again and it is included in Ping 4. However, this causes it to be cleared again and it is not in Ping 5.
• C. Application lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. After the value is changed, it is included in Pings 2 and 3, but then due to application restart it is cleared, so it is not included until the value is manually toggled again.
• D. Application lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Pings 2 and 3. Even though the application startup causes it to be cleared, it is set again, and all subsequent pings also have the value.
• E. User lifetime, set on change: The default value is missing from Ping 1, but since user lifetime metrics aren't cleared unless the user profile is reset (e.g. on Android, when the product is uninstalled), it is included in all subsequent pings.
• F. User lifetime, set on init and change: Since user lifetime metrics aren't cleared unless the user profile is reset, it is included in all subsequent pings. This would be true even if the "turbo mode" preference were never changed again.

Note that for all of the metric configurations, the toggle of the preference off and on during Ping 4 is completely missed. If you need to create a ping containing one, and only one, value for this metric, consider using a custom ping to create a ping whose lifetime matches the lifetime of the value.

### What if none of these lifetimes are appropriate?

If the timing at which the metric is sent in the ping needs to closely match the timing of the metric's value, the best option is to use a custom ping to manually control when pings are sent. This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times when a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, it is much harder to do that kind of reasoning with much certainty.

## What should this new metric be called?

Metric names have a maximum length of 30 characters.

### Reuse names from other applications

There's a lot of value in using the same name for analogous metrics collected across different products. For example, BigQuery makes it simple to join columns with the same name across multiple tables. Therefore, we encourage you to investigate whether a similar metric is already being collected by another product. If it is, there may be an opportunity for code reuse across these products, and if all the projects are using the Glean SDK, it's easy for libraries to send their own metrics. If sharing the code doesn't make sense, at a minimum we recommend using the same metric name for similar actions and concepts whenever possible.
### Make names unique within an application

Metric identifiers (the combination of a metric's category and name) must be unique across all metrics that are sent by a single application. This includes not only the metrics defined in the app's metrics.yaml, but the metrics.yaml of any Glean SDK-using library that the application uses, including the Glean SDK itself. Therefore, care should be taken to name things specifically enough to avoid namespace collisions. In practice, this generally involves thinking carefully about the category of the metric, more than the name.

Note: Duplicate metric identifiers are not currently detected at build time. See bug 1578383 for progress on that. However, the probe_scraper process, which runs nightly, will detect duplicate metrics and e-mail the notification_emails associated with the given metrics.

### Be as specific as possible

More broadly, you should choose names of metrics that are as specific as possible. It is not necessary to put the type of the metric in the category or name, since this information is retained in other ways through the entire end-to-end system. For example, if defining a set of events related to search, put them in a category called search, rather than just events or search_events. The word events here would be redundant.

## What if none of these metric types is the right fit?

The current set of metrics the Glean SDK supports is based on known common use cases, but new use cases are discovered all the time. Please reach out to us on #glean:mozilla.org. If you think you need a new metric type, we have a process for that.

## How do I make sure my metric is working?

The Glean SDK has rich support for writing unit tests involving metrics. Writing a good unit test is a large topic, but in general, you should write unit tests for all new telemetry that does the following:

• Performs the operation being measured.
• Asserts that metrics contain the expected data, using the testGetValue API on the metric.
• Where applicable, asserts that no errors are recorded, such as when values are out of range, using the testGetNumRecordedErrors API.

In addition to unit tests, it is good practice to validate the incoming data for the new metric on a pre-release channel to make sure things are working as expected.

## Adding the metric to the metrics.yaml file

The metrics.yaml file defines the metrics your application or library will send. They are organized into categories. The overall organization is:

```yaml
# Required to indicate this is a metrics.yaml file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

toolbar:
  click:
    type: event
    description: |
      Event to record toolbar clicks.
    notification_emails:
      - CHANGE-ME@example.com
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    expires: 2019-06-01  # <-- Update to a date in the future

  double_click:
    ...
```

Categories can have . characters to provide extra structure, for example category.subcategory, as long as the total length doesn't exceed 40 characters.

Metric names have a maximum length of 30 characters.
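As a rough illustration of these limits, here is a hypothetical validator; the real check is performed by glean_parser when it processes the metrics.yaml file, and the function below is invented for this sketch.

```python
# Hypothetical helper illustrating the identifier length limits;
# glean_parser performs the real validation of metrics.yaml.
MAX_CATEGORY_LENGTH = 40  # total length, dots included
MAX_NAME_LENGTH = 30

def check_identifier(category: str, name: str) -> list:
    """Return a list of human-readable problems (empty if none)."""
    problems = []
    if len(category) > MAX_CATEGORY_LENGTH:
        problems.append(
            f"category '{category}' exceeds {MAX_CATEGORY_LENGTH} characters"
        )
    if len(name) > MAX_NAME_LENGTH:
        problems.append(f"name '{name}' exceeds {MAX_NAME_LENGTH} characters")
    return problems

# A category may contain dots for extra structure.
assert check_identifier("category.subcategory", "click") == []
```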

The details of the metric parameters are described in metric parameters.

The metrics.yaml file is used to generate code in the target language (e.g. Kotlin, Swift, ...) that becomes the public API to access your application's metrics.

## Using the metric from your code

The reference documentation for each metric type goes into detail about using each metric type from your code.

Note that all Glean metrics are write-only. Outside of unit tests, it is impossible to retrieve a value from the Glean SDK's database. While this may seem limiting, this is required to:

• enforce the semantics of certain metric types (e.g. that Counters can only be incremented).
• ensure the lifetime of the metric (when it is cleared or reset) is correctly handled.
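A toy sketch of the write-only idea (this is not Glean's implementation): the public API exposes only the operations that are valid for the metric type, and reads are confined to a clearly-marked test-only method.

```python
# Toy sketch of a write-only metric -- NOT the Glean SDK's code.
class WriteOnlyCounter:
    """A counter that can only be incremented; no general read access."""

    def __init__(self):
        self._value = 0

    def add(self, amount: int = 1) -> None:
        # Glean records an error for invalid amounts; this sketch
        # simply ignores non-positive values to stay short.
        if amount > 0:
            self._value += amount

    def test_get_value(self) -> int:
        """Intended for unit tests only."""
        return self._value

clicks = WriteOnlyCounter()
clicks.add()
clicks.add(2)
```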

### Capitalization

One thing to note is that we try to adhere to the coding conventions of each language wherever possible, so the metric name in the metrics.yaml (which is in snake_case) may be changed to some other case convention, such as camelCase, when used from code.

Category and metric names in the metrics.yaml are in snake_case, but given the Kotlin coding standards defined by ktlint, these identifiers must be camelCase in Kotlin. For example, the metric defined in the metrics.yaml as:

views:
...

is accessible in Kotlin as:

import org.mozilla.yourApplication.GleanMetrics.Views

Category and metric names in the metrics.yaml are in snake_case, but given the Swift coding standards defined by swiftlint, these identifiers must be camelCase in Swift. For example, the metric defined in the metrics.yaml as:

views:
...

is accessible in Swift as:

Category and metric names in the metrics.yaml are in snake_case, which matches the PEP8 standard, so no translation is needed for Python.
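The translation performed by the code generators can be approximated by a small helper. This is a simplification for illustration; glean_parser's real implementation handles more cases.

```python
# Simplified sketch of the snake_case -> camelCase translation that
# Glean's code generators apply for Kotlin and Swift identifiers.
def snake_to_camel(name: str) -> str:
    first, *rest = name.split("_")
    return first + "".join(part.capitalize() for part in rest)

# A metric named default_search_engine_url in metrics.yaml becomes
# defaultSearchEngineUrl in Kotlin/Swift code; single words pass through.
assert snake_to_camel("default_search_engine_url") == "defaultSearchEngineUrl"
assert snake_to_camel("views") == "views"
```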

TODO. To be implemented in this bug.

# Metric parameters

## Required metric parameters

• type: Required. Specifies the type of a metric, like "counter" or "event". This defines which operations are valid for the metric, how it is stored and how data analysis tooling displays it. See the list of supported metric types.

Important: Once a metric is released in a product, its type should not be changed. If any data was collected locally with the older type, and hasn't yet been sent in a ping, recording data with the new type may cause any old persisted data to be lost for that metric. See this comment for an extended explanation of the different scenarios.

• description: Required. A textual description of the metric for humans. It should describe what the metric does, what it means for analysts, and its edge cases or any other helpful information.

The description field may contain markdown syntax.

• notification_emails: Required. A list of email addresses to notify for important events with the metric or when people with context or ownership for the metric need to be contacted. For example, when a metric's expiration is within 14 days, emails will be sent from telemetry-alerts@mozilla.com to the notification_emails addresses associated with the metric. Consider adding both a group email address and an individual who is responsible for this metric.

• bugs: Required. A list of bugs (e.g. Bugzilla or GitHub) that are relevant to this metric. For example, bugs that track its original implementation or later changes to it.

Each entry should be the full URL to the bug in an issue tracker. The use of numbers alone is deprecated and will be an error in the future.

• data_reviews: Required. A list of URIs to any data collection reviews responses relevant to the metric.

• expires: Required. When the metric is set to expire. After a metric expires, an application will no longer collect or send data related to it. May be one of the following values:

• <build date>: An ISO date yyyy-mm-dd in UTC on which the metric expires. For example, 2019-03-13. This date is checked at build time. Except in special cases, this form should be used so that the metric automatically "sunsets" after a period of time. Emails will be sent to the notification_emails addresses when the metric is about to expire. Generally, when a metric is no longer needed, it should simply be removed. This does not affect the availability of data already collected by the pipeline.
• never: This metric never expires.
• expired: This metric is manually expired.
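A sketch of how these three forms could be evaluated at build time. The real logic lives in glean_parser, and the exact comparison semantics (inclusive vs. exclusive of the expiry date) are an assumption here.

```python
from datetime import date

# Hypothetical sketch of the build-time expiry check; glean_parser
# implements the real one. We assume the metric counts as expired on
# or after the given date.
def is_expired(expires: str, build_date: date) -> bool:
    if expires == "never":
        return False
    if expires == "expired":
        return True
    # Otherwise an ISO yyyy-mm-dd date, compared against the build date.
    return date.fromisoformat(expires) <= build_date

assert is_expired("never", date(2030, 1, 1)) is False
assert is_expired("expired", date(2019, 1, 1)) is True
```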

## Optional metric parameters

• lifetime: Defines the lifetime of the metric. Different lifetimes affect when the metrics value is reset.

• ping (default): The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.
• application: The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake---using the ping lifetime for these type of metrics---means that they will only be included in the first ping sent during a particular run of the application.
• user: Reach out to the Glean team before using this.. The metric is part of the user's profile and will live as long as the profile lives. This is often not the best choice unless the metric records a value that really needs to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id, the day the product was first executed. It is rare to use this lifetime outside of some metrics that are built in to the Glean SDK.
• send_in_pings: Defines which pings the metric should be sent on. If not specified, the metric is sent on the "default ping", which is the events ping for events and the metrics ping for everything else. Most metrics don't need to specify this unless they are sent on custom pings.

• disabled: (default: false) Data collection for this metric is disabled. This is useful when you want to temporarily disable the collection for a specific metric without removing references to it in your source code. Generally, when a metric is no longer needed, it should simply be removed. This does not affect the availability of data already collected by the pipeline.

• version: (default: 0) The version of the metric. A monotonically increasing integer value. This should be bumped if the metric changes in a backward-incompatible way.

• data_sensitivity: (default: []) A list of data sensitivity categories that the metric falls under. There are four data collection categories related to data sensitivity defined in Mozilla's data collection review process:

• Category 1: Technical Data: (technical) Information about the machine or Firefox itself. Examples include OS, available memory, crashes and errors, outcome of automated processes like updates, safe browsing, activation, version #s, and build id. This also includes compatibility information about features and APIs used by websites, add-ons, and other 3rd-party software that interact with Firefox during usage.

• Category 2: Interaction Data: (interaction) Information about the user’s direct engagement with Firefox. Examples include how many tabs, add-ons, or windows a user has open; uses of specific Firefox features; session length, scrolls and clicks; and the status of discrete user preferences.

• Category 3: Web activity data: (web_activity) Information about user web browsing that could be considered sensitive. Examples include users’ specific web browsing history; general information about their web browsing history (such as TLDs or categories of webpages visited over time); and potentially certain types of interaction data about specific webpages visited.

• Category 4: Highly sensitive data: (highly_sensitive) Information that directly identifies a person, or if combined with other data could identify a person. Examples include e-mail, usernames, identifiers such as google ad id, apple id, Firefox account, city or country (unless small ones are explicitly filtered out), or certain cookies. It may be embedded within specific website content, such as memory contents, dumps, captures of screen data, or DOM data.

# Unit testing Glean metrics

In order to support unit testing inside of client applications using the Glean SDK, a set of testing API functions has been included. The intent is to make the Glean SDK easier to test 'out of the box' in any client application it may be used in. These functions expose a way to inspect and validate recorded metric values within the client application, but are restricted to test code only through visibility annotations (@VisibleForTesting(otherwise = VisibleForTesting.NONE) for Kotlin, internal methods for Swift). Outside of a testing context, Glean APIs are otherwise write-only, so that the SDK can enforce semantics and constraints about data.

To encourage using the testing API, it is also possible to generate testing coverage reports to show which metrics in your project are tested.

## General test API method semantics

Using the Glean SDK's unit testing API requires adding Robolectric 4.0 or later as a testing dependency. In Gradle, this can be done by declaring a testImplementation dependency:

dependencies {
testImplementation "org.robolectric:robolectric:4.3.1"
}

In order to prevent issues with async calls when unit testing the Glean SDK, it is important to put the Glean SDK into testing mode by applying the JUnit GleanTestRule to your test class. When the Glean SDK is in testing mode, it enables uploading and clears the recorded metrics at the beginning of each test run. The rule can be used as shown below:

@RunWith(AndroidJUnit4::class)
class ActivityCollectingDataTest {
// Apply the GleanTestRule to set up a disposable Glean instance.
// Please note that this clears the Glean data across tests.
@get:Rule
val gleanRule = GleanTestRule(ApplicationProvider.getApplicationContext())

@Test
fun checkCollectedData() {
// The Glean SDK testing API can be called here.
}
}

This will ensure that metrics are done recording when the other test functions are used.

To check if a value exists (i.e. it has been recorded), there is a testHasValue() function on each of the metric instances:

assertTrue(GleanMetrics.Search.defaultSearchEngineUrl.testHasValue())

To check the actual values, there is a testGetValue() function on each of the metric instances. It is important to check that the values are recorded as expected, since many of the metric types may truncate or error-correct the value. This function will return a datatype appropriate to the specific type of the metric it is being used with:

assertEquals("https://example.com/search?", GleanMetrics.Search.defaultSearchEngineUrl.testGetValue())

Note that each of these functions has its visibility limited to the scope of unit tests by making use of the @VisibleForTesting annotation, so the IDE should complain if you attempt to use them inside of client code.

NOTE: No automatic test rule for Glean tests is implemented here; testing mode must be enabled manually.

In order to prevent issues with async calls when unit testing the Glean SDK, it is important to put the Glean SDK into testing mode. When the Glean SDK is in testing mode, it enables uploading and clears the recorded metrics at the beginning of each test run.

Activate it by resetting Glean in your test's setup:

@testable import Glean
import XCTest

class GleanUsageTests: XCTestCase {
override func setUp() {
Glean.shared.resetGlean(clearStores: true)
}

// ...
}

This will ensure that metrics are done recording when the other test functions are used.

To check if a value exists (i.e. it has been recorded), there is a testHasValue() function on each of the metric instances:

XCTAssertTrue(GleanMetrics.Search.defaultSearchEngineUrl.testHasValue())

To check the actual values, there is a testGetValue() function on each of the metric instances. It is important to check that the values are recorded as expected, since many of the metric types may truncate or error-correct the value. This function will return a datatype appropriate to the specific type of the metric it is being used with:

XCTAssertEqual("https://example.com/search?", try GleanMetrics.Search.defaultSearchEngineUrl.testGetValue())

Note that each of these functions is marked as internal, so you need to import Glean explicitly in test mode:

@testable import Glean

It is generally a good practice to "reset" the Glean SDK prior to every unit test that uses the Glean SDK, to prevent side effects of one unit test impacting others. The Glean SDK contains a helper function glean.testing.reset_glean() for this purpose. It has two required arguments: the application ID, and the application version. Each reset of the Glean SDK will create a new temporary directory for Glean to store its data in. This temporary directory is automatically cleaned up the next time the Glean SDK is reset or when the testing framework finishes.

The instructions below assume you are using pytest as the test runner. Other test-running libraries have similar features, but are different in the details.

Create a file conftest.py at the root of your test directory, and add the following to reset Glean at the start of every test in your suite:

import pytest
from glean import testing

@pytest.fixture(name="reset_glean", scope="function", autouse=True)
def fixture_reset_glean():
testing.reset_glean(application_id="my-app-id", application_version="0.1.0")

To check if a value exists (i.e. it has been recorded), there is a test_has_value() function on each of the metric instances:

# ...

assert metrics.search.search_engine_url.test_has_value()

To check the actual values, there is a test_get_value() function on each of the metric instances. It is important to check that the values are recorded as expected, since many of the metric types may truncate or error-correct the value. This function will return a datatype appropriate to the specific type of the metric it is being used with:

assert (
"https://example.com/search?" ==
metrics.search.default_search_engine_url.test_get_value()
)

TODO. To be implemented in bug 1648448.

## Testing metrics for custom pings

In order to test metrics where the metric is included in more than one ping, the test functions take an optional pingName argument (ping_name in Python). This is the name of the ping that the metric is being sent in, such as "events" for the events ping, or "metrics" for the metrics ping. It could also be the name of a custom ping. In most cases you should not have to supply the ping name and can just use the default, which is the "default" ping that this metric is sent in. You only need to provide a pingName if the metric is sent in more than one ping, in order to identify the correct metric store.

You can call the testHasValue() and testGetValue() functions with pingName like this:

GleanMetrics.Foo.uriCount.testHasValue("customPing")
GleanMetrics.Foo.uriCount.testGetValue("customPing")

## Example of using the test API

Here is a longer example to better illustrate the intended use of the test API:

// Record a metric value with extra to validate against
GleanMetrics.BrowserEngagement.click.record(
mapOf(
BrowserEngagement.clickKeys.font to "Courier"
)
)

// Record more events without extras attached
BrowserEngagement.click.record()
BrowserEngagement.click.record()

// Check if we collected any events into the 'click' metric
assertTrue(BrowserEngagement.click.testHasValue())

// Retrieve a snapshot of the recorded events
val events = BrowserEngagement.click.testGetValue()

// Check if we collected all 3 events in the snapshot
assertEquals(3, events.size)

// Check extra key/value for first event in the list
assertEquals("Courier", events.elementAt(0).extra["font"])

Here is a longer example to better illustrate the intended use of the test API:

// Record a metric value with extra to validate against
GleanMetrics.BrowserEngagement.click.record([.font: "Courier"])

// Record more events without extras attached
BrowserEngagement.click.record()
BrowserEngagement.click.record()

// Check if we collected any events into the 'click' metric
XCTAssertTrue(BrowserEngagement.click.testHasValue())

// Retrieve a snapshot of the recorded events
let events = try! BrowserEngagement.click.testGetValue()

// Check if we collected all 3 events in the snapshot
XCTAssertEqual(3, events.count)

// Check extra key/value for first event in the list
XCTAssertEqual("Courier", events[0].extra?["font"])

Here is a longer example to better illustrate the intended use of the test API:

# Record a metric value with extra to validate against

# Check if anything was collected into the 'visit' metric
assert metrics.url.visit.test_has_value()

# Check the recorded value
assert 1 == metrics.url.visit.test_get_value()

TODO. To be implemented in bug 1648448.

## Generating testing coverage reports

Glean can generate coverage reports to track which metrics are tested in your unit test suite.

There are three steps to integrate it into your continuous integration workflow: recording coverage, post-processing the results, and uploading the results.

### Recording coverage

Glean testing coverage is enabled by setting the GLEAN_TEST_COVERAGE environment variable to the name of a file to store results. It is good practice to set it to the absolute path to a file, since some testing harnesses (such as cargo test) may change the current working directory.

## Adding or changing metric types

Glean has a well-defined process for requesting changes to existing metric types or suggesting the implementation of new metric types:

1. Glean consumers need to file a bug in the Data platforms & tools::Glean Metric Types component, filling in the provided form;
2. The triage owner of the Bugzilla component prioritizes this within 6 business days and kicks off the decision making process.
3. Once the decision process is completed, the bug is closed with a comment outlining the decision that was made.

# Boolean

Booleans are used for simple flags, for example "is a11y enabled?".

## Configuration

Say you're adding a boolean to record whether a11y is enabled on the device. First you need to add an entry for the boolean to the metrics.yaml file:

```yaml
flags:
  a11y_enabled:
    type: boolean
    description: >
      Records whether a11y is enabled on the device.
    ...
```

## API

import org.mozilla.yourApplication.GleanMetrics.Flags

Flags.a11yEnabled.set(System.isAccessibilityEnabled())

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Flags

// Was anything recorded?
assertTrue(Flags.a11yEnabled.testHasValue())
// Does it have the expected value?
assertTrue(Flags.a11yEnabled.testGetValue())

import org.mozilla.yourApplication.GleanMetrics.Flags;

Flags.INSTANCE.a11yEnabled.set(System.isAccessibilityEnabled());

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Flags;

// Was anything recorded?
assertTrue(Flags.INSTANCE.a11yEnabled.testHasValue());
// Does it have the expected value?
assertTrue(Flags.INSTANCE.a11yEnabled.testGetValue());

Flags.a11yEnabled.set(self.isAccessibilityEnabled)

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssertTrue(Flags.a11yEnabled.testHasValue())
// Does the counter have the expected value?
XCTAssertTrue(try Flags.a11yEnabled.testGetValue())

metrics.flags.a11y_enabled.set(is_accessibility_enabled())

There are test APIs available too:

# Was anything recorded?
assert metrics.flags.a11y_enabled.test_has_value()
# Does it have the expected value?
assert True is metrics.flags.a11y_enabled.test_get_value()

using static Mozilla.YourApplication.GleanMetrics.FlagsOuter;

Flags.a11yEnabled.Set(System.IsAccessibilityEnabled());

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.FlagsOuter;

// Was anything recorded?
Assert.True(Flags.a11yEnabled.TestHasValue());
// Does it have the expected value?
Assert.True(Flags.a11yEnabled.TestGetValue());

#![allow(unused)]
fn main() {
use glean_metrics;

flags::a11y_enabled.set(system.is_accessibility_enabled());
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean_metrics;

// Was anything recorded?
assert!(flags::a11y_enabled.test_get_value(None).is_some());
// Does it have the expected value?
assert!(flags::a11y_enabled.test_get_value(None).unwrap());
}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::flags::a11y_enabled.Set(false);

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

ASSERT_EQ(false, mozilla::glean::flags::a11y_enabled.TestGetValue().value());

Note: JS APIs are currently only available in Firefox Desktop. General JavaScript support is coming soon via the Glean.js project.

Glean.flags.a11yEnabled.set(false);

There are test APIs available too:

Assert.equal(false, Glean.flags.a11yEnabled.testGetValue());

## Limits

• None.

## Examples

• Is a11y enabled?

## Recorded errors

• None.

# Labeled Booleans

Labeled booleans are used to record different related boolean flags.

## Configuration

For example, you may want to record a set of flags related to accessibility (a11y).

accessibility:
  features:
    type: labeled_boolean
    description: >
      a11y features enabled on the device. ...
    labels:
      - high_contrast
      ...

Note: removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.

## API

Now you can use the labeled boolean from the application's code:

import org.mozilla.yourApplication.GleanMetrics.Accessibility
Accessibility.features["high_contrast"].set(isHighContrastEnabled())

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Accessibility
// Was anything recorded?
assertTrue(Accessibility.features["high_contrast"].testHasValue())
// Do the booleans have the expected values?
assertEquals(false, Accessibility.features["high_contrast"].testGetValue())
// Did we record any invalid labels?
assertEquals(0, Accessibility.features.testGetNumRecordedErrors(ErrorType.InvalidLabel))
Accessibility.features["high_contrast"].set(isHighContrastEnabled())

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Accessibility.features["high_contrast"].testHasValue())
// Do the booleans have the expected values?
XCTAssertEqual(false, try Accessibility.features["high_contrast"].testGetValue())
// Were there any invalid labels?
XCTAssertEqual(0, Accessibility.features.testGetNumRecordedErrors(.invalidLabel))

metrics.accessibility.features["high_contrast"].set(
    is_high_contrast_enabled()
)

There are test APIs available too:

# Was anything recorded?
assert metrics.accessibility.features["high_contrast"].test_has_value()
# Do the booleans have the expected values?
assert not metrics.accessibility.features["high_contrast"].test_get_value()
# Did we record any invalid labels?
assert 0 == metrics.accessibility.features.test_get_num_recorded_errors(
ErrorType.INVALID_LABEL
)
using static Mozilla.YourApplication.GleanMetrics.Accessibility;

Accessibility.features["high_contrast"].Set(isHighContrastEnabled());

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Accessibility;
// Was anything recorded?
Assert.True(Accessibility.features["high_contrast"].TestHasValue());
// Do the booleans have the expected values?
Assert.Equal(false, Accessibility.features["high_contrast"].TestGetValue());
// Did we record any invalid labels?
Assert.Equal(0, Accessibility.features.TestGetNumRecordedErrors(ErrorType.InvalidLabel));

#![allow(unused)]
fn main() {
use glean_metrics;

accessibility::features.get("high_contrast").set(is_high_contrast_enabled());
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean::ErrorType;

use glean_metrics;

// Was anything recorded?
assert!(accessibility::features.get("high_contrast").test_get_value(None).is_some());
// Do the booleans have the expected values?
assert!(!accessibility::features.get("high_contrast").test_get_value(None).unwrap());
// Were there any invalid labels?
assert_eq!(
    0,
    accessibility::features.test_get_num_recorded_errors(
        ErrorType::InvalidLabel
    )
);
}

## Limits

• Labels must conform to the label formatting regular expression.

• Labels support lowercase alphanumeric characters; they additionally allow for dots (.), underscores (_) and/or hyphens (-).

• Each label must have a maximum of 60 bytes, when encoded as UTF-8.

• If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

• The number of labels specified in the metrics.yaml is limited to 100.

• If the labels aren't specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.
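Taken together, these rules can be modeled in a few lines of Python. This is an illustrative sketch only; the regular expression is an assumption based on the bullets above, not the SDK's actual implementation.

```python
import re

MAX_DYNAMIC_LABELS = 16   # when labels aren't predefined in metrics.yaml
MAX_LABEL_BYTES = 60
OTHER_LABEL = "__other__"
# Assumed shape of the label check, per the bullets above: lowercase
# alphanumerics plus dots, underscores and hyphens.
LABEL_RE = re.compile(r"^[a-z0-9._-]+$")

def resolve_label(label: str, seen: set) -> str:
    """Return the label that data will actually be recorded under."""
    if not LABEL_RE.match(label) or len(label.encode("utf-8")) > MAX_LABEL_BYTES:
        return OTHER_LABEL  # recorded alongside an invalid_label error
    if label not in seen and len(seen) >= MAX_DYNAMIC_LABELS:
        return OTHER_LABEL  # dynamic label budget exhausted
    seen.add(label)
    return label
```

For example, `resolve_label("high_contrast", set())` returns the label itself, while a label containing spaces or more than 60 bytes of UTF-8 resolves to `__other__`.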

## Examples

• Record a related set of boolean flags.

## Recorded Errors

• invalid_label: If the label contains invalid characters. Data is still recorded to the special label __other__.

• invalid_label: If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

# Counter

Used to count how often something happens, say how often a certain button was pressed. A counter always starts from 0. Each time you record to a counter, its value is incremented. Unless incremented by a positive value, a counter will not be reported in pings.

IMPORTANT: When using a counter metric, it is important to let the Glean metric do the counting. Using your own variable for counting and setting the counter yourself could be problematic because it will be difficult to reset the value at the exact moment that the value is sent in a ping. Instead, just use counter.add to increment the value and let Glean handle resetting the counter.

If you find that you need to control the actual value sent in the ping, you may be measuring something, not just counting something, and a Quantity metric may be a better choice.
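A toy model of these semantics in Python (illustrative only, not the SDK's implementation) shows why handing the count to Glean is the simpler contract:

```python
class CounterModel:
    """Toy model of a Glean counter: add-only, saturating, reset when sent."""

    I32_MAX = 2**31 - 1

    def __init__(self):
        self.value = 0
        self.invalid_value_errors = 0

    def add(self, amount=1):
        # Incrementing by 0 or a negative value records an invalid_value error.
        if amount <= 0:
            self.invalid_value_errors += 1
            return
        # Saturates at the largest 32-bit signed integer.
        self.value = min(self.value + amount, self.I32_MAX)

    def snapshot_for_ping(self):
        # Glean resets the counter at the moment the value is sent in a ping,
        # which is exactly the step that is hard to replicate by hand.
        value, self.value = self.value, 0
        return value
```

Calling `add()` twice and `add(4)` once yields a snapshot of 6, after which the stored value is back to 0.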

## Configuration

Say you're adding a new counter for how often the refresh button is pressed. First you need to add an entry for the counter to the metrics.yaml file:

controls:
  refresh_pressed:
    type: counter
    description: >
      Counts how often the refresh button is pressed.
    ...

## API

import org.mozilla.yourApplication.GleanMetrics.Controls

Controls.refreshPressed.add() // Adds 1 to the counter's value.

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Controls

// Was anything recorded?
assertTrue(Controls.refreshPressed.testHasValue())
// Does the counter have the expected value?
assertEquals(6, Controls.refreshPressed.testGetValue())
// Did the counter record a negative value?
assertEquals(
1, Controls.refreshPressed.testGetNumRecordedErrors(ErrorType.InvalidValue)
)
import org.mozilla.yourApplication.GleanMetrics.Controls;

Controls.INSTANCE.refreshPressed.add(); // Adds 1 to the counter's value.

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Controls;

// Was anything recorded?
assertTrue(Controls.INSTANCE.refreshPressed.testHasValue());
// Does the counter have the expected value?
assertEquals(6, Controls.INSTANCE.refreshPressed.testGetValue());
// Did the counter record a negative value?
assertEquals(
1, Controls.INSTANCE.refreshPressed.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Controls.refreshPressed.add() // Adds 1 to the counter's value.

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Controls.refreshPressed.testHasValue())
// Does the counter have the expected value?
XCTAssertEqual(6, try Controls.refreshPressed.testGetValue())
// Did the counter record a negative value?
XCTAssertEqual(1, Controls.refreshPressed.testGetNumRecordedErrors(.invalidValue))

metrics.controls.refresh_pressed.add()  # Adds 1 to the counter's value.

There are test APIs available too:

# Was anything recorded?
assert metrics.controls.refresh_pressed.test_has_value()
# Does the counter have the expected value?
assert 6 == metrics.controls.refresh_pressed.test_get_value()
# Did the counter record a negative value?
from glean.testing import ErrorType
assert 1 == metrics.controls.refresh_pressed.test_get_num_recorded_errors(
ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Controls;

Controls.refreshPressed.Add(); // Adds 1 to the counter's value.

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Controls;

// Was anything recorded?
Assert.True(Controls.refreshPressed.TestHasValue());
// Does the counter have the expected value?
Assert.Equal(6, Controls.refreshPressed.TestGetValue());
// Did the counter record a negative value?
Assert.Equal(
1, Controls.refreshPressed.TestGetNumRecordedErrors(ErrorType.InvalidValue)
);

#![allow(unused)]
fn main() {
use glean_metrics;

controls::refresh_pressed.add(1); // Adds 1 to the counter's value.
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean::ErrorType;

use glean_metrics;

// Was anything recorded?
assert!(controls::refresh_pressed.test_get_value(None).is_some());
// Does the counter have the expected value?
assert_eq!(6, controls::refresh_pressed.test_get_value(None).unwrap());
// Did the counter record a negative value?
assert_eq!(
    1,
    controls::refresh_pressed.test_get_num_recorded_errors(
        ErrorType::InvalidValue
    )
);
}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::controls::refresh_pressed.Add(1);

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does the counter have the expected value?
ASSERT_EQ(6, mozilla::glean::controls::refresh_pressed.TestGetValue().value());
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are currently only available in Firefox Desktop. General JavaScript support is coming soon via the Glean.js project.

Glean.controls.refreshPressed.add(1);

There are test APIs available too:

Assert.equal(6, Glean.controls.refreshPressed.testGetValue());

## Limits

• Only increments, saturates at the largest value that can be represented as a 32-bit signed integer (2147483647).

## Examples

• How often was a certain button pressed?

## Recorded errors

• invalid_value: If the counter is incremented by 0 or a negative value.

# Labeled Counters

Labeled counters are used to record different related counts that should sum up to a total.

## Configuration

For example, you may want to record a count of different types of crashes for your Android application, such as native code crashes and uncaught exceptions:

stability:
  crash_count:
    type: labeled_counter
    description: >
      Counts the number of crashes that occur in the application. ...
    labels:
      - uncaught_exception
      - native_code_crash
    ...

Note: removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.

## API

Now you can use the labeled counter from the application's code:

import org.mozilla.yourApplication.GleanMetrics.Stability

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Stability
// Was anything recorded?
assertTrue(Stability.crashCount["uncaught_exception"].testHasValue())
assertTrue(Stability.crashCount["native_code_crash"].testHasValue())
// Do the counters have the expected values?
assertEquals(1, Stability.crashCount["uncaught_exception"].testGetValue())
assertEquals(3, Stability.crashCount["native_code_crash"].testGetValue())
// Were there any invalid labels?
assertEquals(0, Stability.crashCount.testGetNumRecordedErrors(ErrorType.InvalidLabel))

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Stability.crashCount["uncaught_exception"].testHasValue())
XCTAssert(Stability.crashCount["native_code_crash"].testHasValue())
// Do the counters have the expected values?
XCTAssertEqual(1, try Stability.crashCount["uncaught_exception"].testGetValue())
XCTAssertEqual(3, try Stability.crashCount["native_code_crash"].testGetValue())
// Were there any invalid labels?
XCTAssertEqual(0, Stability.crashCount.testGetNumRecordedErrors(.invalidLabel))

metrics.stability.crash_count["uncaught_exception"].add()  # Adds 1 to the "uncaught_exception" counter.
metrics.stability.crash_count["native_code_crash"].add(3)  # Adds 3 to the "native_code_crash" counter.

There are test APIs available too:

# Was anything recorded?
assert metrics.stability.crash_count["uncaught_exception"].test_has_value()
assert metrics.stability.crash_count["native_code_crash"].test_has_value()
# Do the counters have the expected values?
assert 1 == metrics.stability.crash_count["uncaught_exception"].test_get_value()
assert 3 == metrics.stability.crash_count["native_code_crash"].test_get_value()
# Were there any invalid labels?
assert 0 == metrics.stability.crash_count.test_get_num_recorded_errors(
ErrorType.INVALID_LABEL
)
using static Mozilla.YourApplication.GleanMetrics.Stability;

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Stability;
// Was anything recorded?
Assert.True(Stability.crashCount["uncaught_exception"].TestHasValue());
Assert.True(Stability.crashCount["native_code_crash"].TestHasValue());
// Do the counters have the expected values?
Assert.Equal(1, Stability.crashCount["uncaught_exception"].TestGetValue());
Assert.Equal(3, Stability.crashCount["native_code_crash"].TestGetValue());
// Were there any invalid labels?
Assert.Equal(0, Stability.crashCount.TestGetNumRecordedErrors(ErrorType.InvalidLabel));

#![allow(unused)]
fn main() {
use glean_metrics;

stability::crash_count.get("uncaught_exception").add(1);
stability::crash_count.get("native_code_crash").add(3);
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean::ErrorType;

use glean_metrics;
// Was anything recorded?
assert!(stability::crash_count.get("uncaught_exception").test_get_value().is_some());
assert!(stability::crash_count.get("native_code_crash").test_get_value().is_some());
// Do the counters have the expected values?
assert_eq!(1, stability::crash_count.get("uncaught_exception").test_get_value().unwrap());
assert_eq!(3, stability::crash_count.get("native_code_crash").test_get_value().unwrap());
// Were there any invalid labels?
assert_eq!(
0,
stability::crash_count.test_get_num_recorded_errors(
ErrorType::InvalidLabel
)
);
}

## Limits

• Labels must conform to the label formatting regular expression.

• Labels support lowercase alphanumeric characters; they additionally allow for dots (.), underscores (_) and/or hyphens (-).

• Each label must have a maximum of 60 bytes, when encoded as UTF-8.

• If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

• The number of labels specified in the metrics.yaml is limited to 100.

• If the labels aren't specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

## Examples

• Record the number of times different kinds of crashes occurred.

## Recorded Errors

• invalid_label: If the label contains invalid characters. Data is still recorded to the special label __other__.

• invalid_label: If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

# Strings

This allows recording a Unicode string value with arbitrary content.

Note: Be careful using arbitrary strings and make sure they can't accidentally contain identifying data (like directory paths or user input).

Note: This does not support recording JSON blobs - please get in contact with the Telemetry team if you're missing a type.

## Configuration

Say you're adding a metric to find out what the default search in a browser is. First you need to add an entry for the metric to the metrics.yaml file:

search.default:
  name:
    type: string
    description: >
      The name of the default search engine.
    ...

## API

import org.mozilla.yourApplication.GleanMetrics.SearchDefault

// Record a value into the metric.
SearchDefault.name.set("duck duck go")
// If it changed later, you can record the new value:
SearchDefault.name.set("wikipedia")

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.SearchDefault

// Was anything recorded?
assertTrue(SearchDefault.name.testHasValue())
// Does the string metric have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
assertEquals("wikipedia", SearchDefault.name.testGetValue())
// Was the string truncated, and an error reported?
assertEquals(1, SearchDefault.name.testGetNumRecordedErrors(ErrorType.InvalidValue))
import org.mozilla.yourApplication.GleanMetrics.SearchDefault;

// Record a value into the metric.
SearchDefault.INSTANCE.name.set("duck duck go");
// If it changed later, you can record the new value:
SearchDefault.INSTANCE.name.set("wikipedia");

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.SearchDefault;

// Was anything recorded?
assertTrue(SearchDefault.INSTANCE.name.testHasValue());
// Does the string metric have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
assertEquals("wikipedia", SearchDefault.INSTANCE.name.testGetValue());
// Was the string truncated, and an error reported?
assertEquals(
1,
SearchDefault.INSTANCE.name.testGetNumRecordedErrors(
ErrorType.InvalidValue
)
);
// Record a value into the metric.
SearchDefault.name.set("duck duck go")
// If it changed later, you can record the new value:
SearchDefault.name.set("wikipedia")

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(SearchDefault.name.testHasValue())
// Does the string metric have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
XCTAssertEqual("wikipedia", try SearchDefault.name.testGetValue())
// Was the string truncated, and an error reported?
XCTAssertEqual(1, SearchDefault.name.testGetNumRecordedErrors(.invalidValue))

# Record a value into the metric.
metrics.search_default.name.set("duck duck go")
# If it changed later, you can record the new value:
metrics.search_default.name.set("wikipedia")

There are test APIs available too:

# Was anything recorded?
assert metrics.search_default.name.test_has_value()
# Does the string metric have the expected value?
# IMPORTANT: It may have been truncated -- see "Limits" below
assert "wikipedia" == metrics.search_default.name.test_get_value()
# Was the string truncated, and an error reported?
assert 1 == metrics.search_default.name.test_get_num_recorded_errors(
ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.SearchDefault;

// Record a value into the metric.
SearchDefault.name.Set("duck duck go");
// If it changed later, you can record the new value:
SearchDefault.name.Set("wikipedia");

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.SearchDefault;

// Was anything recorded?
Assert.True(SearchDefault.name.TestHasValue());
// Does the string metric have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
Assert.Equal("wikipedia", SearchDefault.name.TestGetValue());
// Was the string truncated, and an error reported?
Assert.Equal(
1,
SearchDefault.name.TestGetNumRecordedErrors(
ErrorType.InvalidValue
)
);

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::search_default::name.Set("wikipedia"_ns);

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
ASSERT_STREQ(
"wikipedia",
mozilla::glean::search_default::name.TestGetValue().value().get()
);
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are currently only available in Firefox Desktop. General JavaScript support is coming soon via the Glean.js project.

Glean.searchDefault.name.set("wikipedia");

There are test APIs available too:

Assert.equal("wikipedia", Glean.searchDefault.name.testGetValue());
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• Fixed maximum string length: 100. Longer strings are truncated. This is measured in the number of bytes when the string is encoded in UTF-8.
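The 100-byte limit is measured after UTF-8 encoding, so multi-byte characters count more than once. A sketch of the truncation rule (illustrative, not the SDK's code):

```python
MAX_STRING_BYTES = 100

def truncate_utf8(value: str, max_bytes: int = MAX_STRING_BYTES) -> str:
    """Truncate to at most max_bytes of UTF-8 without splitting a character."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # errors="ignore" drops a trailing partial character, if any.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")
```

A 150-character ASCII string keeps its first 100 characters, while a string of 2-byte characters keeps only 50 of them.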

## Examples

• Record the operating system name with a value of "android".

• Record the device model with a value of "SAMSUNG-SGH-I997".

## Recorded errors

• invalid_overflow: if the string is too long. (Prior to Glean 31.5.0, this recorded an invalid_value).

# Labeled Strings

Labeled strings record multiple Unicode string values, each under a different label.

## Configuration

For example, to record which kind of error occurred in different stages of a login process - "RuntimeException" in the "server_auth" stage or "invalid_string" in the "enter_email" stage:

login:
  errors_by_stage:
    type: labeled_string
    description: Records the error type, if any, that occurs in different stages of the login process.
    labels:
      - server_auth
      - enter_email
    ...

## API

Now you can use the labeled string from the application's code:

import org.mozilla.yourApplication.GleanMetrics.Login

Login.errorsByStage["server_auth"].set("Invalid password")

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Login

// Was anything recorded?
assertTrue(Login.errorsByStage["server_auth"].testHasValue())
// Were there any invalid labels?
assertEquals(0, Login.errorsByStage.testGetNumRecordedErrors(ErrorType.InvalidLabel))

Login.errorsByStage["server_auth"].set("Invalid password")

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Login.errorsByStage["server_auth"].testHasValue())
// Were there any invalid labels?
XCTAssertEqual(0, Login.errorsByStage.testGetNumRecordedErrors(.invalidLabel))

metrics.login.errors_by_stage["server_auth"].set("Invalid password")

There are test APIs available too:

# Was anything recorded?
assert metrics.login.errors_by_stage["server_auth"].test_has_value()
# Were there any invalid labels?
assert 0 == metrics.login.errors_by_stage.test_get_num_recorded_errors(
    ErrorType.INVALID_LABEL
)

using static Mozilla.YourApplication.GleanMetrics.Login;

Login.errorsByStage["server_auth"].Set("Invalid password");

There are test APIs available too:

// Was anything recorded?
Assert.True(Login.errorsByStage["server_auth"].TestHasValue());
// Were there any invalid labels?
Assert.Equal(0, Login.errorsByStage.TestGetNumRecordedErrors(ErrorType.InvalidLabel));

#![allow(unused)]
fn main() {
use glean_metrics;

login::errors_by_stage.get("server_auth").set("Invalid password");
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean::ErrorType;

use glean_metrics;

// Was anything recorded?
assert!(login::errors_by_stage.get("server_auth").test_get_value(None).is_some());
// Were there any invalid labels?
assert_eq!(
    0,
    login::errors_by_stage.test_get_num_recorded_errors(
        ErrorType::InvalidLabel
    )
);
}

## Limits

• Labels must conform to the label formatting regular expression.

• Labels support lowercase alphanumeric characters; they additionally allow for dots (.), underscores (_) and/or hyphens (-).

• Each label must have a maximum of 60 bytes, when encoded as UTF-8.

• If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

• The number of labels specified in the metrics.yaml is limited to 100.

• If the labels aren't specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

## Examples

• What kind of errors occurred at each step in the login process?

## Recorded Errors

• invalid_label: If the label contains invalid characters. Data is still recorded to the special label __other__.

• invalid_label: If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

# String List

String lists are used for recording a list of Unicode string values, such as the names of the enabled search engines.

Note: Be careful using arbitrary strings and make sure they can't accidentally contain identifying data (like directory paths or user input).

## Configuration

First you need to add an entry for the string list to the metrics.yaml file:

search:
  engines:
    type: string_list
    description: >
      Records the name of the enabled search engines.
    ...

## API

import org.mozilla.yourApplication.GleanMetrics.Search

// Add them one at a time
engines.forEach {
    Search.engines.add(it)
}

// Set them in one go
Search.engines.set(engines)

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Search

// Was anything recorded?
assertTrue(Search.engines.testHasValue())
// Does it have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
// Were any of the values too long, and thus an error was recorded?
assertEquals(1, Search.engines.testGetNumRecordedErrors(ErrorType.InvalidValue))
// Add them one at a time
for engine in engines {
    Search.engines.add(engine)
}

// Set them in one go
Search.engines.set(engines)

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Search.engines.testHasValue())
// Does it have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
// Were any of the values too long, and thus an error was recorded?
XCTAssertEqual(1, Search.engines.testGetNumRecordedErrors(.invalidValue))

# Add them one at a time
for engine in engines:
    metrics.search.engines.add(engine)

# Set them in one go
metrics.search.engines.set(engines)

There are test APIs available too:

# Was anything recorded?
assert metrics.search.engines.test_has_value()
# Does it have the expected value?
# IMPORTANT: It may have been truncated -- see "Limits" below
# Were any of the values too long, and thus an error was recorded?
assert 1 == metrics.search.engines.test_get_num_recorded_errors(
ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Search;

// Set a string array into the metric.
Search.engines.Set(new string[] { "Google", "DuckDuckGo" });
// Add another string into the metric.
Search.engines.Add("Baidu");

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Search;

// Was anything recorded?
Assert.True(Search.engines.TestHasValue());
// Does the string list metric have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
var snapshot = Search.engines.TestGetValue();
Assert.Equal(3, snapshot.Length);
Assert.Equal("DuckDuckGo", snapshot[1]);
Assert.Equal("Baidu", snapshot[2]);
// Was the string truncated, and an error reported?
Assert.Equal(
1,
Search.engines.TestGetNumRecordedErrors(
ErrorType.InvalidValue
)
);
use glean_metrics;

// Add them one at a time
engines.iter().for_each(|x|
    search::engines.add(x.to_string())
);

// Set them in one go
search::engines.set(engines);

There are test APIs available too:

use glean::ErrorType;
use glean_metrics;

// Was anything recorded?
assert!(search::engines.test_get_value(None).is_some());
// Does it have the expected value?
// IMPORTANT: It may have been truncated -- see "Limits" below
assert_eq!(engines, search::engines.test_get_value(None).unwrap());
// Were any of the values too long, and thus an error was recorded?
assert_eq!(
0,
search::engines.test_get_num_recorded_errors(ErrorType::InvalidValue)
);

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected values?
nsTArray<nsCString> list = mozilla::glean::search::engines.TestGetValue();
ASSERT_TRUE(list.Contains("wikipedia"_ns));
ASSERT_TRUE(list.Contains("duck duck go"_ns));
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are only available in Firefox Desktop.

There are test APIs available too:

const engines = Glean.search.engines.testGetValue();
Assert.ok(engines.includes("wikipedia"));
Assert.ok(engines.includes("duck duck go"));
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• Fixed maximum string length: 50. Longer strings are truncated. This is measured in the number of bytes when the string is encoded in UTF-8.

• Fixed maximum list length: 20 items. Additional strings are dropped.
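Both limits combined, as a sketch (illustrative, not the SDK's code): extra items beyond 20 are dropped, and each surviving string is truncated to 50 bytes of UTF-8.

```python
MAX_LIST_ITEMS = 20
MAX_ITEM_BYTES = 50

def clamp_string_list(values):
    """Apply the string-list limits described above."""
    clamped = []
    for value in values[:MAX_LIST_ITEMS]:  # additional strings are dropped
        encoded = value.encode("utf-8")
        if len(encoded) > MAX_ITEM_BYTES:  # longer strings are truncated
            value = encoded[:MAX_ITEM_BYTES].decode("utf-8", errors="ignore")
        clamped.append(value)
    return clamped
```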

## Examples

• The names of the enabled search engines.

## Recorded errors

• invalid_overflow: if the string is too long. (Prior to Glean 31.5.0, this recorded an invalid_value).

• invalid_value: if the list is too long.

# Timespan

Timespans are used to make a measurement of how much time is spent in a particular task.

To measure the distribution of multiple timespans, see Timing Distributions. To record absolute times, see Datetimes.

It is not recommended to use timespans in multiple threads, since calling start or stop out of order will be recorded as an invalid_state error.

## Configuration

Timespans have a required time_unit parameter to specify the smallest unit of resolution that the timespan will record. The allowed values for time_unit are:

• nanosecond
• microsecond
• millisecond
• second
• minute
• hour
• day

Consider the resolution that is required by your metric, and use the largest possible value that will provide useful information so as to not leak too much fine-grained information from the client. It is important to note that the value sent in the ping is truncated down to the nearest unit. Therefore, a measurement of 500 nanoseconds will be truncated to 0 microseconds.
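The truncation described above is plain integer division of the nanosecond measurement by the size of the chosen unit. A model of the behavior, not the SDK's code:

```python
NANOS_PER_UNIT = {
    "nanosecond": 1,
    "microsecond": 1_000,
    "millisecond": 1_000_000,
    "second": 1_000_000_000,
    "minute": 60 * 1_000_000_000,
    "hour": 3_600 * 1_000_000_000,
    "day": 86_400 * 1_000_000_000,
}

def truncate_to_unit(duration_ns: int, time_unit: str) -> int:
    """Truncate a nanosecond measurement down to the configured time_unit."""
    return duration_ns // NANOS_PER_UNIT[time_unit]
```

As in the example above, 500 nanoseconds truncate down to 0 microseconds; a 90-second login with `time_unit: minute` truncates to 1 minute.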

Say you're adding a new timespan for the time spent logging into the app. First you need to add an entry for the timespan to the metrics.yaml file:

auth:
  login_time:
    type: timespan
    description: >
      Measures the time spent logging in.
    time_unit: millisecond
    ...

## API

import org.mozilla.yourApplication.GleanMetrics.Auth

fun onShowLogin() {
    Auth.loginTime.start()
    // ...
}

fun onLogin() {
    Auth.loginTime.stop()
    // ...
}

fun onLoginCancel() {
    Auth.loginTime.cancel()
    // ...
}

For convenience one can measure the time of a function or block of code:

Auth.loginTime.measure {
    // Process the login flow
}

The time reported in the telemetry ping will be the timespan recorded during the lifetime of the ping.

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Auth

// Was anything recorded?
assertTrue(Auth.loginTime.testHasValue())
// Does the timer have the expected value?
assertTrue(Auth.loginTime.testGetValue() > 0)
// Was the timing recorded incorrectly?
assertEquals(1, Auth.loginTime.testGetNumRecordedErrors(ErrorType.InvalidValue))
import org.mozilla.yourApplication.GleanMetrics.Auth;

void onShowLogin() {
    Auth.INSTANCE.loginTime.start();
    // ...
}

void onLogin() {
    Auth.INSTANCE.loginTime.stop();
    // ...
}

void onLoginCancel() {
    Auth.INSTANCE.loginTime.cancel();
    // ...
}

The time reported in the telemetry ping will be the timespan recorded during the lifetime of the ping.

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Auth;

// Was anything recorded?
assertTrue(Auth.INSTANCE.loginTime.testHasValue());
// Does the timer have the expected value?
assertTrue(Auth.INSTANCE.loginTime.testGetValue() > 0);
// Was the timing recorded incorrectly?
assertEquals(
    1,
    Auth.INSTANCE.loginTime.testGetNumRecordedErrors(
        ErrorType.InvalidValue
    )
);
func onShowLogin() {
    Auth.loginTime.start()
    // ...
}

func onLogin() {
    Auth.loginTime.stop()
    // ...
}

func onLoginCancel() {
    Auth.loginTime.cancel()
    // ...
}

For convenience one can measure the time of a function or block of code:

Auth.loginTime.measure {
    // Process the login flow
}

The time reported in the telemetry ping will be the timespan recorded during the lifetime of the ping.

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Auth.loginTime.testHasValue())
// Does the timer have the expected value?
XCTAssert(try Auth.loginTime.testGetValue() > 0)
// Was the timing recorded incorrectly?
XCTAssertEqual(1, Auth.loginTime.testGetNumRecordedErrors(.invalidValue))

def on_show_login():
    metrics.auth.login_time.start()
    # ...

def on_login():
    metrics.auth.login_time.stop()
    # ...

def on_login_cancel():
    metrics.auth.login_time.cancel()
    # ...

The Python bindings also have a context manager for measuring time:

with metrics.auth.login_time.measure():
    # ... Do the login ...

The time reported in the telemetry ping will be the timespan recorded during the lifetime of the ping.

There are test APIs available too:

# Was anything recorded?
assert metrics.auth.login_time.test_has_value()
# Does the timer have the expected value?
assert metrics.auth.login_time.test_get_value() > 0
# Was the timing recorded incorrectly?
assert 1 == metrics.auth.login_time.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Auth;

void OnShowLogin()
{
    Auth.loginTime.Start();
    // ...
}

void OnLogin()
{
    Auth.loginTime.Stop();
    // ...
}

void OnLoginCancel()
{
    Auth.loginTime.Cancel();
    // ...
}

The time reported in the telemetry ping will be the timespan recorded during the lifetime of the ping.

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Auth;

// Was anything recorded?
Assert.True(Auth.loginTime.TestHasValue());
// Does the timer have the expected value?
Assert.True(Auth.loginTime.TestGetValue() > 0);
// Was the timing recorded incorrectly?
Assert.Equal(1, Auth.loginTime.TestGetNumRecordedErrors(ErrorType.InvalidValue));

#![allow(unused)]
fn main() {
use glean_metrics;

fn show_login() {
    auth::login_time.start();
    // ...
}

fn login() {
    auth::login_time.stop();
    // ...
}

fn login_cancel() {
    auth::login_time.cancel();
    // ...
}
}

The time reported in the telemetry ping will be the timespan recorded during the lifetime of the ping.

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean::ErrorType;
use glean_metrics;

// Was anything recorded?
assert!(auth::login_time.test_get_value(None).is_some());
// Was the timing recorded incorrectly?
assert_eq!(1, auth::login_time.test_get_num_recorded_errors(ErrorType::InvalidValue));
}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::auth::login_time.Start();
PR_Sleep(PR_MillisecondsToInterval(10));
mozilla::glean::auth::login_time.Stop();

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are only available in Firefox Desktop.

Glean.auth.loginTime.start();
await sleep(10);
Glean.auth.loginTime.stop();

There are test APIs available too:

// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Raw API

Note: The raw API was designed to support a specific set of use-cases. Please consider using the higher level APIs listed above.

It's possible to explicitly set the timespan value, in nanoseconds. This API should only be used if your library or application requires recording times in a way that can not make use of start/stop/cancel.

The raw API will not overwrite a running timer or existing timespan value.

import org.mozilla.yourApplication.GleanMetrics.HistorySync

val duration = SyncResult.status.syncs.took.toLong()
HistorySync.setRawNanos(duration)
let duration = SyncResult.status.syncs.took.toLong()
HistorySync.setRawNanos(duration)
import org.mozilla.yourApplication.GleanMetrics.HistorySync

val duration = SyncResult.status.syncs.took.toLong()
HistorySync.setRawNanos(duration)

TODO. To be implemented in bug 1648442.

The raw API is not supported in Rust. See bug 1680225.
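Since the raw API always takes nanoseconds, a duration measured in another unit has to be converted first; plain arithmetic is enough (illustrative helper, not part of the SDK):

```python
NANOS_PER_MILLI = 1_000_000

def millis_to_raw_nanos(duration_ms: int) -> int:
    """Convert a millisecond duration to the nanoseconds setRawNanos expects."""
    return duration_ms * NANOS_PER_MILLI
```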

## Limits

• Timings are recorded in nanoseconds.

• On Android, the SystemClock.elapsedRealtimeNanos() function is used, so it is limited by the accuracy and performance of that timer. The time measurement includes time spent in sleep.

• On iOS, the mach_absolute_time function is used, so it is limited by the accuracy and performance of that timer. The time measurement does not include time spent in sleep.

• On Python 3.7 and later, time.monotonic_ns() is used. On earlier versions of Python, time.monotonic() is used, which is not guaranteed to have nanosecond resolution.

• On other platforms it uses time::precise_time_ns, which uses a high-resolution performance counter in nanoseconds provided by the underlying platform.

## Examples

• How much time is spent rendering the UI?

## Recorded errors

• invalid_value: If recording a negative timespan.
• invalid_state:
  • If starting a timer while a previous timer is running.
  • If stopping a timer while it is not running.
  • If trying to set a raw timespan while a timer is running.
  • If trying to record a timespan again while a previous value is still stored.

# Timing Distribution

Timing distributions are used to accumulate and store time measurements, for analyzing the distribution of the timing data.

To measure the distribution of single timespans, see Timespans. To record absolute times, see Datetimes.

Timing distributions are recorded in a histogram where the buckets have an exponential distribution, specifically with 8 buckets for every power of 2. That is, the function from a value $x$ to a bucket index is:

$\lfloor 8 \log_2(x) \rfloor$

This makes them suitable for measuring timings on a number of time scales without any configuration.
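As an illustration of this bucketing function, here is a plain-Python sketch of the formula (not the SDK's internal implementation):

```python
import math

def timing_bucket_index(sample_ns: int) -> int:
    """Bucket index for a sample, per floor(8 * log2(x))."""
    return math.floor(8 * math.log2(sample_ns))

# Doubling a sample always moves it up 8 buckets,
# so each power of 2 is covered by 8 buckets.
assert timing_bucket_index(1_000_000) - timing_bucket_index(500_000) == 8
```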

Note Check out how this bucketing algorithm would behave on the Simulator

Timings always span the full length between start and stopAndAccumulate. If the Glean upload is disabled when calling start, the timer is still started. If the Glean upload is disabled at the time stopAndAccumulate is called, nothing is recorded.

Multiple concurrent timespans in different threads may be measured at the same time.

Timings are always stored and sent in the payload as nanoseconds. However, the time_unit parameter controls the minimum and maximum values that will be recorded:

• nanosecond: 1ns <= x <= 10 minutes
• microsecond: 1μs <= x <= ~6.94 days
• millisecond: 1ms <= x <= ~19 years

Overflowing this range is considered an error and is reported through the error reporting mechanism. Underflowing this range is not an error and the value is silently truncated to the minimum value.
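The three ranges above all correspond to a single cap on the recorded sample: 600,000,000,000 of the configured time_unit (a constant inferred from the documented limits, shown here only to make the arithmetic explicit):

```python
CAP = 600_000_000_000  # maximum sample, in units of the configured time_unit

NANOS_PER_UNIT = {"nanosecond": 1, "microsecond": 1_000, "millisecond": 1_000_000}

# Express each cap in seconds to recover the documented maximums.
seconds_at_cap = {
    unit: CAP * nanos / 1_000_000_000
    for unit, nanos in NANOS_PER_UNIT.items()
}

assert seconds_at_cap["nanosecond"] == 600                             # 10 minutes
assert round(seconds_at_cap["microsecond"] / 86_400, 2) == 6.94        # ~6.94 days
assert round(seconds_at_cap["millisecond"] / (365.25 * 86_400)) == 19  # ~19 years
```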

Additionally, when a metric comes from GeckoView (the geckoview_datapoint parameter is present), the time_unit parameter specifies the unit that the samples are in when passed to Glean. Glean will convert all of the incoming samples to nanoseconds internally.

## Configuration

If you wanted to create a timing distribution to measure page load times, first you need to add an entry for it to the metrics.yaml file:

pages:
  page_load:
    type: timing_distribution
    description: >
      Counts how long each page takes to load
    ...

## API

Now you can use the timing distribution from the application's code. Starting a timer returns a timer ID that needs to be used to stop or cancel the timer at a later point. Multiple intervals can be measured concurrently. For example, to measure page load time on a number of tabs that are loading at the same time, each tab object needs to store the running timer ID.

import mozilla.components.service.glean.GleanTimerId
import org.mozilla.yourApplication.GleanMetrics.Pages

var timerId: GleanTimerId

fun onPageStart(e: Event) {
    timerId = Pages.pageLoad.start()
}

fun onPageStop(e: Event) {
    Pages.pageLoad.stopAndAccumulate(timerId)
}

For convenience one can measure the time of a function or block of code:

Pages.pageLoad.measure {
    // Load a page
}

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the pageLoad example above, at this point the metric should have a sum == 11 and a count == 2:

import org.mozilla.yourApplication.GleanMetrics.Pages

// Was anything recorded?
assertTrue(Pages.pageLoad.testHasValue())

// Get snapshot.
val snapshot = Pages.pageLoad.testGetValue()

// Usually you don't know the exact timing values, but how many should have been recorded.
assertEquals(2L, snapshot.count)

// Assert that no errors were recorded.
assertEquals(0, Pages.pageLoad.testGetNumRecordedErrors(ErrorType.InvalidValue))
import mozilla.components.service.glean.GleanTimerId;
import org.mozilla.yourApplication.GleanMetrics.Pages;

GleanTimerId timerId;

void onPageStart(Event e) {
    timerId = Pages.INSTANCE.pageLoad.start();
}

void onPageStop(Event e) {
    Pages.INSTANCE.pageLoad.stopAndAccumulate(timerId);
}

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the pageLoad example above, at this point the metric should have a sum == 11 and a count == 2:

import org.mozilla.yourApplication.GleanMetrics.Pages;

// Was anything recorded?
assertTrue(Pages.INSTANCE.pageLoad.testHasValue());

// Get snapshot.
DistributionData snapshot = Pages.INSTANCE.pageLoad.testGetValue();

// Usually you don't know the exact timing values, but how many should have been recorded.
assertEquals(2L, snapshot.getCount());

// Assert that no errors were recorded.
assertEquals(
    0,
    Pages.INSTANCE.pageLoad.testGetNumRecordedErrors(ErrorType.InvalidValue)
);
import Glean

var timerId: GleanTimerId

func onPageStart() {
    timerId = Pages.pageLoad.start()
}

func onPageStop() {
    Pages.pageLoad.stopAndAccumulate(timerId)
}

For convenience one can measure the time of a function or block of code:

Pages.pageLoad.measure {
    // Load a page
}

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the pageLoad example above, at this point the metric should have a sum == 11 and a count == 2:

@testable import Glean

// Was anything recorded?
XCTAssert(Pages.pageLoad.testHasValue())

// Get snapshot.
let snapshot = try! Pages.pageLoad.testGetValue()

// Usually you don't know the exact timing values, but how many should have been recorded.
XCTAssertEqual(2, snapshot.count)

// Assert that no errors were recorded.
XCTAssertEqual(0, Pages.pageLoad.testGetNumRecordedErrors(.invalidValue))

class PageHandler:
    def __init__(self):
        self.timer_id = None

    def on_page_start(self, event):
        # ...
        self.timer_id = metrics.pages.page_load.start()

    def on_page_stop(self, event):
        # ...
        metrics.pages.page_load.stop_and_accumulate(self.timer_id)
The Python bindings also have a context manager for measuring time:

with metrics.pages.page_load.measure():
    # Load a page
    ...

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the page_load example above, at this point the metric should have a sum == 11 and a count == 2:

# Was anything recorded?
assert metrics.pages.page_load.test_has_value()

# Get snapshot.
snapshot = metrics.pages.page_load.test_get_value()

# Usually you don't know the exact timing values, but how many should have been recorded.
assert 2 == snapshot.count

# Assert that no errors were recorded.
assert 0 == metrics.pages.page_load.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Pages;

GleanTimerId timerId;

void onPageStart(Event e) {
    timerId = Pages.pageLoad.Start();
}

void onPageStop(Event e) {
    Pages.pageLoad.StopAndAccumulate(timerId);
}

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the pageLoad example above, at this point the metric should have a sum == 11 and a count == 2:

using static Mozilla.YourApplication.GleanMetrics.Pages;

// Was anything recorded?
Assert.True(Pages.pageLoad.TestHasValue());

// Get snapshot.
var snapshot = Pages.pageLoad.TestGetValue();

// Usually you don't know the exact timing values, but how many should have been recorded.
Assert.Equal(1, snapshot.Values.Count);

// Assert that no errors were recorded.
Assert.Equal(0, Pages.pageLoad.TestGetNumRecordedErrors(ErrorType.InvalidValue));

#![allow(unused)]
fn main() {
use glean_metrics;

fn on_page_start(&mut self) {
    self.timer_id = pages::page_load.start();
}

fn on_page_stop(&mut self) {
    pages::page_load.stop_and_accumulate(self.timer_id);
}
}

There are test APIs available too.

#![allow(unused)]
fn main() {
use glean::ErrorType;
use glean_metrics;

// Was anything recorded?
assert!(pages::page_load.test_get_value(None).is_some());

// Assert no errors were recorded.
let errors = [
    ErrorType::InvalidValue,
    ErrorType::InvalidState,
    ErrorType::InvalidOverflow,
];
for error in errors {
    assert_eq!(0, pages::page_load.test_get_num_recorded_errors(error));
}
}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

PR_Sleep(PR_MillisecondsToInterval(10));

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
ASSERT_TRUE(data.sum > 0);
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are only available in Firefox Desktop.

await sleep(10);

There are test APIs available too:

// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• Timings are recorded in nanoseconds.

• On Android, the SystemClock.elapsedRealtimeNanos() function is used, so it is limited by the accuracy and performance of that timer. The time measurement includes time spent in sleep.

• On iOS, the mach_absolute_time function is used, so it is limited by the accuracy and performance of that timer. The time measurement does not include time spent in sleep.

• On Python 3.7 and later, time.monotonic_ns() is used. On earlier versions of Python, time.monotonic() is used, which is not guaranteed to have nanosecond resolution.

• In Rust, time::precise_time_ns() is used.

• The maximum timing value that will be recorded depends on the time_unit parameter:

• nanosecond: 1ns <= x <= 10 minutes
• microsecond: 1μs <= x <= ~6.94 days
• millisecond: 1ms <= x <= ~19 years

Longer times will be truncated to the maximum value and an error will be recorded.

## Examples

• How long does it take a page to load?

## Recorded errors

• invalid_value: If recording a negative timespan.
• invalid_state: If a non-existing/stopped timer is stopped again.
• invalid_overflow: If recording a time longer than the maximum for the given unit.

## Simulator

### Properties

Note The data provided is assumed to be in the configured time unit. The data recorded, on the other hand, is always in nanoseconds. This means that, if the configured time unit is not nanoseconds, the data will be transformed before being recorded. You can observe this by using the select field above to change the time unit and watching the mean of the recorded data change.

# Memory Distribution

Memory distributions are used to accumulate and store memory sizes.

Memory distributions are recorded in a histogram where the buckets have an exponential distribution, specifically with 16 buckets for every power of 2. That is, the function from a value $x$ to a bucket index is:

$\lfloor 16 \log_2(x) \rfloor$

This makes them suitable for measuring memory sizes on a number of different scales without any configuration.
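As with timing distributions, the bucketing function can be sketched in plain Python (an illustration of the formula, not the SDK's internal implementation):

```python
import math

def memory_bucket_index(size_bytes: int) -> int:
    """Bucket index for a memory size, per floor(16 * log2(x))."""
    return math.floor(16 * math.log2(size_bytes))

# Each power of 2 spans 16 buckets, so 1 KiB and 2 KiB are 16 buckets apart.
assert memory_bucket_index(2048) - memory_bucket_index(1024) == 16
```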

Note Check out how this bucketing algorithm would behave on the Simulator

## Configuration

Memory distributions have a required memory_unit parameter, which specifies the unit the incoming memory size values are recorded in. The units are the power-of-2 units, so "kilobyte" is more correctly a "kibibyte".

- kilobyte == 2^10 ==         1,024 bytes
- megabyte == 2^20 ==     1,048,576 bytes
- gigabyte == 2^30 == 1,073,741,824 bytes
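Since the recorded data is always in bytes (see the note below), a binding has to scale incoming values by the configured unit. A minimal sketch of that conversion:

```python
MEMORY_UNIT_BYTES = {
    "byte": 1,
    "kilobyte": 2**10,
    "megabyte": 2**20,
    "gigabyte": 2**30,
}

def to_bytes(value: int, memory_unit: str) -> int:
    """Convert an accumulated value in the configured unit to bytes."""
    return value * MEMORY_UNIT_BYTES[memory_unit]

# With memory_unit: kilobyte, accumulating 11 records 11 * 1024 bytes.
assert to_bytes(11, "kilobyte") == 11 * 1024
```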

If you wanted to create a memory distribution to measure the amount of heap memory allocated, first you need to add an entry for it to the metrics.yaml file:

memory:
  heap_allocated:
    type: memory_distribution
    description: >
      The heap memory allocated
    memory_unit: kilobyte
    ...

## API

Now you can use the memory distribution from the application's code.

Note The data provided to the accumulate method is in the configured memory unit specified in the metrics.yaml file. The data recorded, on the other hand, is always in bytes.

For example, to measure the distribution of heap allocations:

import org.mozilla.yourApplication.GleanMetrics.Memory

fun allocateMemory(nbytes: Int) {
    // ...
    Memory.heapAllocated.accumulate(nbytes / 1024)
}

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the heapAllocated example above, at this point the metric should have a sum == 11 and a count == 2:

import org.mozilla.yourApplication.GleanMetrics.Memory

// Was anything recorded?
assertTrue(Memory.heapAllocated.testHasValue())

// Get snapshot
val snapshot = Memory.heapAllocated.testGetValue()

// Does the sum have the expected value?
assertEquals(11, snapshot.sum)

// Usually you don't know the exact memory values, but how many should have been recorded.
assertEquals(2L, snapshot.count)

// Did this record a negative value?
assertEquals(1, Memory.heapAllocated.testGetNumRecordedErrors(ErrorType.InvalidValue))
func allocateMemory(nbytes: UInt64) {
    // ...
    Memory.heapAllocated.accumulate(nbytes / 1024)
}

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the heapAllocated example above, at this point the metric should have a sum == 11 and a count == 2:

@testable import Glean

// Was anything recorded?
XCTAssert(Memory.heapAllocated.testHasValue())

// Get snapshot
let snapshot = try! Memory.heapAllocated.testGetValue()

// Does the sum have the expected value?
XCTAssertEqual(11, snapshot.sum)

// Usually you don't know the exact memory values, but how many should have been recorded.
XCTAssertEqual(2, snapshot.count)

// Did this record a negative value?
XCTAssertEqual(1, Memory.heapAllocated.testGetNumRecordedErrors(.invalidValue))

def allocate_memory(nbytes):
    # ...
    metrics.memory.heap_allocated.accumulate(nbytes // 1024)

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

Continuing the heapAllocated example above, at this point the metric should have a sum == 11 and a count == 2:

# Was anything recorded?
assert metrics.memory.heap_allocated.test_has_value()

# Get snapshot
snapshot = metrics.memory.heap_allocated.test_get_value()

# Does the sum have the expected value?
assert 11 == snapshot.sum

# Usually you don't know the exact memory values, but how many should have been recorded.
assert 2 == snapshot.count

# Did this record a negative value?
assert 1 == metrics.memory.heap_allocated.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Memory;

void AllocateMemory(ulong nbytes) {
    // ...
    Memory.heapAllocated.Accumulate(nbytes / 1024);
}

There are test APIs available too. For convenience, properties Sum and Count are exposed to facilitate validating that data was recorded correctly.

Continuing the heapAllocated example above, at this point the metric should have a Sum == 11 and a Count == 2:

using static Mozilla.YourApplication.GleanMetrics.Memory;

// Was anything recorded?
Assert.True(Memory.heapAllocated.TestHasValue());

// Get snapshot
var snapshot = Memory.heapAllocated.TestGetValue();

// Does the sum have the expected value?
Assert.Equal(11, snapshot.Sum);

// Usually you don't know the exact memory values, but how many should have been recorded.
Assert.Equal(2L, snapshot.Count);

// Did this record a negative value?
Assert.Equal(1, Memory.heapAllocated.TestGetNumRecordedErrors(ErrorType.InvalidValue));

#![allow(unused)]
fn main() {
use glean_metrics;

fn allocate_memory(bytes: u64) {
    memory::heap_allocated.accumulate(bytes / 1024);
}
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean::{DistributionData, ErrorType};
use glean_metrics;

// Was anything recorded?
assert!(memory::heap_allocated.test_get_value(None).is_some());

// Is the sum as expected?
let data = memory::heap_allocated.test_get_value(None).unwrap();
assert_eq!(11, data.sum);
// The actual buckets and counts live in data.values.

// Were there any errors?
assert_eq!(1, memory::heap_allocated.test_get_num_recorded_errors(ErrorType::InvalidValue));

}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::memory::heap_allocated.Accumulate(bytes / 1024);

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
ASSERT_EQ(11 * 1024, mozilla::glean::memory::heap_allocated.TestGetValue().value().sum);
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are only available in Firefox Desktop.

Glean.memory.heapAllocated.accumulate(bytes / 1024);

There are test APIs available too:

const data = Glean.memory.heapAllocated.testGetValue();
Assert.equal(11 * 1024, data.sum);
// Does it have the right number of samples?
Assert.equal(1, Object.entries(data.values).reduce((sum, [bucket, count]) => sum + count, 0));
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• The maximum memory size that can be recorded is 1 Terabyte (2^40 bytes). Larger sizes will be truncated to 1 Terabyte.

## Examples

• What is the distribution of the size of heap allocations?

## Recorded errors

• invalid_value: If recording a negative memory size.
• invalid_value: If recording a size larger than 1TB.

## Simulator

### Properties

Note The data provided is assumed to be in the configured memory unit. The data recorded, on the other hand, is always in bytes. This means that, if the configured memory unit is not byte, the data will be transformed before being recorded. You can observe this by using the select field above to change the memory unit and watching the mean of the recorded data change.

# UUID

UUIDs are used to record values that uniquely identify some entity, such as a client id.

## Configuration

You first need to add an entry for it to the metrics.yaml file:

user:
  client_id:
    type: uuid
    description: >
      A unique identifier for the client's profile
    ...

## API

Now that the UUID is defined in metrics.yaml, you can use the metric to record values in the application's code.

import org.mozilla.yourApplication.GleanMetrics.User

User.clientId.generateAndSet() // Generate a new UUID and record it
User.clientId.set(UUID.randomUUID())  // Set a UUID explicitly

There are test APIs available too.

import org.mozilla.yourApplication.GleanMetrics.User

// Was anything recorded?
assertTrue(User.clientId.testHasValue())
// Was it the expected value?
assertEquals(uuid, User.clientId.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.User;

User.INSTANCE.clientId.generateAndSet(); // Generate a new UUID and record it
User.INSTANCE.clientId.set(UUID.randomUUID());  // Set a UUID explicitly

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.User;

// Was anything recorded?
assertTrue(User.INSTANCE.clientId.testHasValue());
// Was it the expected value?
assertEquals(uuid, User.INSTANCE.clientId.testGetValue());
User.clientId.generateAndSet() // Generate a new UUID and record it
User.clientId.set(UUID())  // Set a UUID explicitly

There are test APIs available too.

@testable import Glean

// Was anything recorded?
XCTAssert(User.clientId.testHasValue())
// Was it the expected value?
XCTAssertEqual(uuid, try User.clientId.testGetValue())
import uuid

# Generate a new UUID and record it
metrics.user.client_id.generate_and_set()
# Set a UUID explicitly
metrics.user.client_id.set(uuid.uuid4())

There are test APIs available too.

# Was anything recorded?
assert metrics.user.client_id.test_has_value()
# Was it the expected value?
assert uuid == metrics.user.client_id.test_get_value()
using static Mozilla.YourApplication.GleanMetrics.User;

User.clientId.GenerateAndSet(); // Generate a new UUID and record it
User.clientId.Set(System.Guid.NewGuid()); // Set a UUID explicitly

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.User;

// Was anything recorded?
Assert.True(User.clientId.TestHasValue());
// Was it the expected value?
Assert.Equal(uuid, User.clientId.TestGetValue());

#![allow(unused)]
fn main() {
use glean_metrics;
use uuid::Uuid;

user::client_id.generate_and_set(); // Generate a new UUID and record it
user::client_id.set(Uuid::new_v4()); // Set a UUID explicitly
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean_metrics;
use uuid::Uuid;

let u = Uuid::new_v4();
user::client_id.set(u);
// Was anything recorded?
assert!(user::client_id.test_get_value(None).is_some());
// Does it have the expected value?
assert_eq!(u, user::client_id.test_get_value(None).unwrap());
}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

// Generate a new UUID and record it.
mozilla::glean::user::client_id.GenerateAndSet();
// Set a specific value.
nsCString kUuid("decafdec-afde-cafd-ecaf-decafdecafde");
mozilla::glean::user::client_id.Set(kUuid);

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
ASSERT_STREQ(kUuid.get(), mozilla::glean::user::client_id.TestGetValue().value().get());
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are currently only available in Firefox Desktop. General JavaScript support is coming soon via the Glean.js project.

// Generate a new UUID and record it.
Glean.user.clientId.generateAndSet();
// Set a specific value.
const uuid = "decafdec-afde-cafd-ecaf-decafdecafde";
Glean.user.clientId.set(uuid);

There are test APIs available too:

Assert.equal(Glean.user.clientId.testGetValue(), uuid);
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• None.

## Examples

• A unique identifier for the client.

## Recorded errors

• invalid_value: if the value is set to a string that is not a UUID (only applies for dynamically-typed languages, such as Python).
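A dynamically-typed binding has to validate such strings before recording. A sketch of that check using Python's standard uuid module (an illustration; `is_valid_uuid` is a hypothetical helper, not the binding's actual code):

```python
import uuid

def is_valid_uuid(value: str) -> bool:
    """Return True if `value` parses as a UUID, False otherwise."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, TypeError):
        return False

assert is_valid_uuid("decafdec-afde-cafd-ecaf-decafdecafde")
assert not is_valid_uuid("not-a-uuid")  # would record invalid_value
```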

# Datetime

Datetimes are used to record an absolute date and time, for example the date and time that the application was first run.

The device's offset from UTC is recorded and sent with the Datetime value in the ping.
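For example, in Python a timezone-aware datetime carries that offset, and it appears as a suffix when the value is serialized:

```python
import datetime

# A datetime in a zone five hours behind UTC.
tz = datetime.timezone(datetime.timedelta(hours=-5))
first_run = datetime.datetime(2019, 3, 25, tzinfo=tz)

# The offset travels with the value when it is serialized.
assert first_run.isoformat() == "2019-03-25T00:00:00-05:00"
```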

To record a single elapsed time, see Timespan. To measure the distribution of multiple timespans, see Timing Distributions.

## Configuration

Datetimes have a required time_unit parameter to specify the smallest unit of resolution that the metric will record. The allowed values for time_unit are:

• nanosecond
• microsecond
• millisecond
• second
• minute
• hour
• day

Carefully consider the required resolution for recording your metric, and choose the coarsest resolution possible.
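The effect of time_unit can be sketched as truncation before recording (a plain-Python illustration, not the SDK's implementation):

```python
import datetime

def truncate_to_day(dt: datetime.datetime) -> datetime.datetime:
    """Drop everything below day resolution, as `time_unit: day` would."""
    return dt.replace(hour=0, minute=0, second=0, microsecond=0)

# The time-of-day component never reaches the recorded value.
recorded = truncate_to_day(datetime.datetime(2019, 3, 25, 11, 27, 3))
assert recorded == datetime.datetime(2019, 3, 25)
```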

You first need to add an entry for it to the metrics.yaml file:

install:
  first_run:
    type: datetime
    time_unit: day
    description: >
      Records the date when the application was first run
    ...

## API

import org.mozilla.yourApplication.GleanMetrics.Install

Install.firstRun.set() // Records "now"
Install.firstRun.set(Calendar(2019, 3, 25)) // Records a custom datetime

There are test APIs available too.

import org.mozilla.yourApplication.GleanMetrics.Install

// Was anything recorded?
assertTrue(Install.firstRun.testHasValue())
// Was it the expected value?
// NOTE: Datetimes always include a timezone offset from UTC, hence the
// "-05:00" suffix.
assertEquals("2019-03-25-05:00", Install.firstRun.testGetValueAsString())
// Was the value invalid?
assertEquals(0, Install.firstRun.testGetNumRecordedErrors(ErrorType.InvalidValue))
import org.mozilla.yourApplication.GleanMetrics.Install;

Install.INSTANCE.firstRun.set(); // Records "now"
Install.INSTANCE.firstRun.set(Calendar(2019, 3, 25)); // Records a custom datetime

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Install;

// Was anything recorded?
assertTrue(Install.INSTANCE.firstRun.testHasValue());
// Was it the expected value?
// NOTE: Datetimes always include a timezone offset from UTC, hence the
// "-05:00" suffix.
assertEquals("2019-03-25-05:00", Install.INSTANCE.firstRun.testGetValueAsString());
// Was the value invalid?
assertEquals(0, Install.INSTANCE.firstRun.testGetNumRecordedErrors(ErrorType.InvalidValue));
Install.firstRun.set() // Records "now"

let dateComponents = DateComponents(
calendar: Calendar.current,
year: 2004, month: 12, day: 9, hour: 8, minute: 3, second: 29
)
Install.firstRun.set(dateComponents.date!) // Records a custom datetime

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Install.firstRun.testHasValue())
// Does the datetime have the expected value?
XCTAssertEqual(dateComponents.date!, try Install.firstRun.testGetValue())
// Was the value invalid?
XCTAssertEqual(0, Install.firstRun.testGetNumRecordedErrors(.invalidValue))
import datetime

# Records "now"
metrics.install.first_run.set()
# Records a custom datetime
metrics.install.first_run.set(datetime.datetime(2019, 3, 25))

There are test APIs available too.

# Was anything recorded?
assert metrics.install.first_run.test_has_value()

# Was it the expected value?
# NOTE: Datetimes always include a timezone offset from UTC, hence the
# "-05:00" suffix.
assert "2019-03-25-05:00" == metrics.install.first_run.test_get_value_as_str()
# Was the value invalid?
assert 0 == metrics.install.first_run.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Install;

// Records "now"
Install.firstRun.Set();
// Records a custom datetime
Install.firstRun.Set(new DateTimeOffset(2019, 3, 25, 11, 10, 0, TimeZone.CurrentTimeZone.BaseUtcOffset));

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Install;

// Was anything recorded?
Assert.True(Install.firstRun.TestHasValue());
// Was it the expected value?
// NOTE: Datetimes always include a timezone offset from UTC, hence the
// "-05:00" suffix.
Assert.Equal("2019-03-25-05:00", Install.firstRun.TestGetValueAsString());
// Was the value invalid?
Assert.Equal(0, Install.firstRun.TestGetNumRecordedErrors(ErrorType.InvalidValue));
use glean::ErrorType;
use glean_metrics;

use chrono::{FixedOffset, TimeZone};

// Records "now"
install::first_run.set(None);
// Records a custom datetime
let custom_date = FixedOffset::east(0).ymd(2019, 3, 25).and_hms(0, 0, 0);
install::first_run.set(Some(custom_date));

There are test APIs available too.

// Was anything recorded?
assert!(install::first_run.test_get_value(None).is_some());

// Was it the expected value?
let expected_date = FixedOffset::east(0).ymd(2019, 3, 25).and_hms(0, 0, 0);
assert_eq!(Some(expected_date), install::first_run.test_get_value(None));
// Was the value invalid?
assert_eq!(0, install::first_run.test_get_num_recorded_errors(
ErrorType::InvalidValue
));

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

PRExplodedTime date = {0, 35, 10, 12, 6, 10, 2020, 0, 0, {5 * 60 * 60, 0}};
mozilla::glean::install::first_run.Set(&date);

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
ASSERT_STREQ(
mozilla::glean::install::first_run.TestGetValue().value(),
"2020-11-06T12:10:35+05:00"_ns
);
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are only available in Firefox Desktop.

const value = new Date("2020-06-11T12:00:00");
Glean.install.firstRun.set(value.getTime() * 1000);

There are test APIs available too:

Assert.ok(Glean.install.firstRun.testGetValue().startsWith("2020-06-11T12:00:00"));
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• None.

## Examples

• When did the user first run the application?

## Recorded errors

• invalid_value: Setting the date time to an invalid value.

# Events

Events allow recording of individual occurrences of user actions, for example every time a view was opened and from where.

Each event contains the following data:

• A timestamp, in milliseconds. The first event in any ping always has a value of 0, and subsequent event timestamps are relative to it.
• The name of the event.
• A set of key-value pairs, where the keys are predefined in the extra_keys metric parameter, and the values are strings.

Important: events are the most expensive metric type to record, transmit, store and analyze, so they should be used sparingly, and only when none of the other metric types are sufficient for answering your question.
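The timestamp scheme from the first bullet can be sketched as follows — the first event in a ping anchors the timeline, and later events store their offset from it:

```python
def relative_timestamps(absolute_ms):
    """First event gets timestamp 0; later events are offsets from it."""
    first = absolute_ms[0]
    return [t - first for t in absolute_ms]

# Three events recorded 0 ms, 250 ms and 1000 ms after the first one.
assert relative_timestamps([1_609_459_200_000,
                            1_609_459_200_250,
                            1_609_459_201_000]) == [0, 250, 1000]
```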

## Configuration

Say you're adding a new event for when a view is shown. First you need to add an entry for the event to the metrics.yaml file:

views:
  login_opened:
    type: event
    description: >
      Recorded when the login view is opened.
    ...
    extra_keys:
      source_of_login:
        description: The source from which the login view was opened, e.g. "toolbar".

The extra_keys parameter enumerates the acceptable keys on the event. This is an object mapping the key to an object containing metadata about the key. A maximum of 10 extra keys is allowed. This metadata object has the following keys:

• description: Required. A description of the key.

## API

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Keys added.

import org.mozilla.yourApplication.GleanMetrics.Views

There are test APIs available too, for example:

import org.mozilla.yourApplication.GleanMetrics.Views

// Was any event recorded?
// Get a List of the recorded events.
// Check that two events were recorded.
assertEquals(2, snapshot.size)
val first = snapshot.first()
// Check that no errors were recorded

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Keys added.

There are test APIs available too, for example:

@testable import Glean

// Was any event recorded?
// Get a List of the recorded events.
// Check that two events were recorded.
XCTAssertEqual(2, snapshot.count)
let first = snapshot[0]
// Check that no errors were recorded

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with _keys added.

metrics.views.login_opened.record(
    {
        metrics.views.login_opened_keys.SOURCE_OF_LOGIN: "toolbar"
    }
)

There are test APIs available too, for example:

# Was any event recorded?
# Get a List of the recorded events.
# Check that two events were recorded.
assert 2 == len(snapshot)
first = snapshot[0]
# Check that no errors were recorded
assert 0 == metrics.views.login_opened.test_get_num_recorded_errors(
    ErrorType.INVALID_OVERFLOW
)

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Keys added.

using static Mozilla.YourApplication.GleanMetrics.Views;

});

There are test APIs available too, for example:

using static Mozilla.YourApplication.GleanMetrics.Views;

// Was any event recorded?
// Get a List of the recorded events.
// Check that two events were recorded.
Assert.Equal(2, snapshot.Length);
var first = snapshot.First();
// Check that no errors were recorded

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Keys added.

#![allow(unused)]
fn main() {
use metrics::views;
use std::collections::HashMap;

let mut extra = HashMap::new();
}

There are test APIs available too, for example:

#![allow(unused)]
fn main() {
use metrics::views;

// Was any event recorded?
// Get a List of the recorded events.
// Check that two events were recorded.
assert_eq!(2, snapshot.len());
let first = &snapshot[0];
// Check that no errors were recorded
}

Note: C++ APIs are only available in Firefox Desktop.

#include "mozilla/glean/GleanMetrics.h"

nsCString source = "toolbar"_ns;

There are test APIs available too:

#include "mozilla/glean/GleanMetrics.h"

// Does it have a value?
// Does it have the expected value?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1678567
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

Note: JS APIs are only available in Firefox Desktop.

let extra = { sourceOfLogin: "toolbar" };

There are test APIs available too:

// Does it have the expected value?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1678567
// Did it run across any errors?
// TODO: https://bugzilla.mozilla.org/show_bug.cgi?id=1683171

## Limits

• When 500 events are queued on the client, an events ping is immediately sent.

• The extra_keys parameter allows for a maximum of 10 keys.

• The keys in the extra_keys list must be in dotted snake case, with a maximum length of 40 bytes in UTF-8.

• The values in the extras object have a maximum length of 50 bytes in UTF-8.
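Note that this limit is measured in UTF-8 bytes, not characters. A sketch of the check (`exceeds_extra_limit` is a hypothetical helper for illustration, not Glean's code):

```python
def exceeds_extra_limit(value: str, max_bytes: int = 50) -> bool:
    """True if an extra value is over the limit and would record an error."""
    return len(value.encode("utf-8")) > max_bytes

assert not exceeds_extra_limit("toolbar")
assert exceeds_extra_limit("a" * 51)
# Multi-byte characters count per byte: 17 three-byte chars exceed 50 bytes.
assert exceeds_extra_limit("\u20ac" * 17)
```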

## Examples

• Every time a new tab is opened.

## Recorded errors

• invalid_overflow: if any of the values in the extras object are greater than 50 bytes in length. (Prior to Glean 31.5.0, this recorded an invalid_value).

# Custom Distribution

Custom distributions are used to record the distribution of arbitrary values.

It should be used only when direct control over how the histogram buckets are computed is required. Otherwise, look at the standard distribution metric types:

Note: Custom distributions are currently only allowed for GeckoView metrics (the gecko_datapoint parameter is present) and thus have only a Kotlin API.

## Configuration

Custom distributions have the following required parameters:

• range_min: (Integer) The minimum value of the first bucket
• range_max: (Integer) The minimum value of the last bucket
• bucket_count: (Integer) The number of buckets
• histogram_type:
• linear: The buckets are evenly spaced
• exponential: The buckets follow a natural logarithmic distribution

Note: Check out how these bucketing algorithms behave on the Custom distribution simulator.
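As a rough illustration of how exponential bucketing behaves, the following sketch computes log-spaced bucket minima in the style of Firefox Telemetry's exponential histograms. It assumes range_min >= 1; the exact boundaries Glean computes may differ slightly.

```python
import math

def exponential_buckets(range_min, range_max, bucket_count):
    """Log-spaced bucket minima; bucket 0 is the underflow bucket at 0."""
    log_max = math.log(range_max)
    buckets = [0, range_min]
    current = range_min
    for i in range(2, bucket_count):
        log_current = math.log(current)
        # Spread the remaining log-space evenly over the remaining buckets.
        log_next = log_current + (log_max - log_current) / (bucket_count - i)
        next_value = int(math.floor(math.exp(log_next) + 0.5))
        # Guarantee strictly increasing boundaries for small ranges.
        current = next_value if next_value > current else current + 1
        buckets.append(current)
    return buckets
```

With the checkerboard_peak parameters (range_min 1, range_max 66355200, bucket_count 50) this yields 50 strictly increasing bucket minima, the last one landing at range_max.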

In addition, the metric should specify:

• unit: (String) The unit of the values in the metric. For documentation purposes only -- does not affect data collection.

If you wanted to create a custom distribution of the peak number of pixels used during a checkerboard event, first you need to add an entry for it to the metrics.yaml file:

graphics:
  checkerboard_peak:
    type: custom_distribution
    description: >
      Peak number of CSS pixels checkerboarded during a checkerboard event.
    range_min: 1
    range_max: 66355200
    bucket_count: 50
    histogram_type: exponential
    unit: pixels
    gecko_datapoint: CHECKERBOARD_PEAK
    ...

## API

Now you can use the custom distribution from the application's code.

import org.mozilla.yourApplication.GleanMetrics.Graphics

Graphics.checkerboardPeak.accumulateSamples(listOf(23L))

There are test APIs available too. For convenience, properties sum and count are exposed to facilitate validating that data was recorded correctly.

import org.mozilla.yourApplication.GleanMetrics.Graphics

// Was anything recorded?
assertTrue(Graphics.checkerboardPeak.testHasValue())

// Get snapshot
val snapshot = Graphics.checkerboardPeak.testGetValue()

// Does the sum have the expected value?
assertEquals(23, snapshot.sum)

// Usually you don't know the exact sample values, but you may know how many were recorded.
assertEquals(1L, snapshot.count())

// Did the metric receive a negative value?
assertEquals(0, Graphics.checkerboardPeak.testGetNumRecordedErrors(ErrorType.InvalidValue))

#![allow(unused)]
fn main() {
use glean_metrics;

graphics::checkerboard_peak.accumulate_samples_signed(vec![23]);
}

There are test APIs available too.

#![allow(unused)]
fn main() {
use glean::ErrorType;
use glean_metrics;

// Was anything recorded?
assert!(graphics::checkerboard_peak.test_get_value(None).is_some());
// Does it have the expected value?
assert_eq!(23, graphics::checkerboard_peak.test_get_value(None).unwrap().sum);

// Were any of the values negative and thus caused an error to be recorded?
assert_eq!(
0,
graphics::checkerboard_peak.test_get_num_recorded_errors(ErrorType::InvalidValue));
}

## Limits

• The maximum value of bucket_count is 100.

• Only non-negative values may be recorded.

## Recorded errors

• invalid_value: If recording a negative value.

# Quantity

Used to record a single non-negative integer value. For example, the width of the display in pixels.

IMPORTANT If you need to count something (e.g. number of tabs open or number of times a button is pressed) prefer using the Counter metric type, which has a specific API for counting things and also takes care of resetting the count at the correct time.

## Configuration

Say you're adding a new quantity for the width of the display in pixels. First you need to add an entry for the quantity to the metrics.yaml file:

display:
  width:
    type: quantity
    description: >
      The width of the display, in pixels.
    unit: pixels
    ...

Note that quantities have a required unit parameter, which is a free-form string for documentation purposes.

## API

import org.mozilla.yourApplication.GleanMetrics.Display

Display.width.set(width)

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Display

// Was anything recorded?
assertTrue(Display.width.testHasValue())
// Does the quantity have the expected value?
assertEquals(6, Display.width.testGetValue())
// Did it record an error due to a negative value?
assertEquals(1, Display.width.testGetNumRecordedErrors(ErrorType.InvalidValue))
import org.mozilla.yourApplication.GleanMetrics.Display;

Display.INSTANCE.width.set(width);

There are test APIs available too:

import org.mozilla.yourApplication.GleanMetrics.Display;

// Was anything recorded?
assertTrue(Display.INSTANCE.width.testHasValue());
// Does the quantity have the expected value?
assertEquals(6, Display.INSTANCE.width.testGetValue());
// Did the quantity record a negative value?
assertEquals(
1, Display.INSTANCE.width.testGetNumRecordedErrors(ErrorType.InvalidValue)
);
Display.width.set(width)

There are test APIs available too:

@testable import Glean

// Was anything recorded?
XCTAssert(Display.width.testHasValue())
// Does the quantity have the expected value?
XCTAssertEqual(6, try Display.width.testGetValue())
// Did the quantity record a negative value?
XCTAssertEqual(1, Display.width.testGetNumRecordedErrors(.invalidValue))

metrics.display.width.set(width)

There are test APIs available too:

# Was anything recorded?
assert metrics.display.width.test_has_value()
# Does the quantity have the expected value?
assert 6 == metrics.display.width.test_get_value()
# Did the quantity record a negative value?
from glean.testing import ErrorType
assert 1 == metrics.display.width.test_get_num_recorded_errors(
ErrorType.INVALID_VALUE
)
using static Mozilla.YourApplication.GleanMetrics.Display;

Display.width.Set(width);

There are test APIs available too:

using static Mozilla.YourApplication.GleanMetrics.Display;

// Was anything recorded?
Assert.True(Display.width.TestHasValue());
// Does the quantity have the expected value?
Assert.Equal(6, Display.width.TestGetValue());
// Did the quantity record a negative value?
Assert.Equal(
1, Display.width.TestGetNumRecordedErrors(ErrorType.InvalidValue)
);

#![allow(unused)]
fn main() {
use glean_metrics;

display::width.set(width);
}

There are test APIs available too:

#![allow(unused)]
fn main() {
use glean_metrics;

// Was anything recorded?
assert!(display::width.test_get_value(None).is_some());
// Does it have the expected value?
assert_eq!(width, display::width.test_get_value(None).unwrap());
}

## Limits

• Quantities must be non-negative integers.

## Examples

• What is the width of the display, in pixels?

## Recorded errors

• invalid_value: If a negative value is passed in.

# Rate

Used to count how often something happens relative to how often something else happens. Like how many documents use a particular CSS Property, or how many HTTP connections had an error. You can think of it like a fraction, with a numerator and a denominator.

All rates start without a value. A rate with a numerator of 0 is valid and will be sent to ensure we capture the "no errors happened" or "no use counted" cases.

IMPORTANT: When using a rate metric, it is important to let the Glean metric do the counting. Using your own variable for counting and setting the metric yourself could be problematic: ping scheduling will make it difficult to ensure the metric is at the correct value at the correct time. Instead, count to the numerator and denominator as you go.
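To make the numerator/denominator semantics concrete, here is a toy model of a rate. This is illustrative only, not the Glean API; the class and field names are assumptions.

```python
I32_MAX = 2**31 - 1  # both components saturate at the 32-bit signed max

class Rate:
    """Toy model of a rate metric's behavior (not the real Glean API)."""

    def __init__(self):
        self.numerator = 0
        self.denominator = 0
        self.invalid_value_errors = 0

    def add_to_numerator(self, amount=1):
        if amount < 0:
            self.invalid_value_errors += 1  # recorded as invalid_value
            return
        self.numerator = min(self.numerator + amount, I32_MAX)

    def add_to_denominator(self, amount=1):
        if amount < 0:
            self.invalid_value_errors += 1
            return
        self.denominator = min(self.denominator + amount, I32_MAX)

# Count errors out of total connections as you go:
rate = Rate()
for had_error in [False, True, False]:
    if had_error:
        rate.add_to_numerator(1)
    rate.add_to_denominator(1)
```

After the loop the rate is 1 error out of 3 connections, without any application-side bookkeeping variable.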

## Configuration

Say you're adding a new rate for how often HTTP connections have errors. First you need to add an entry for the rate to the metrics.yaml file:

network:
  http_connection_error:
    type: rate
    description: >
      How many HTTP connections error out, out of the total connections made.
    ...

### External Denominators

If several rates share the same denominator (from our example above, maybe there are multiple rates per total connections made) then the denominator should be defined as a counter and shared between rates using the denominator_metric property:

network:
  http_connections:
    type: counter
    description: >
      Total number of http connections made.
    ...

  http_connection_error:
    type: rate
    description: >
      How many HTTP connections error out, out of the total connections made.
    denominator_metric: network.http_connections
    ...

  http_connection_slow:
    type: rate
    description: >
      How many HTTP connections were slow, out of the total connections made.
    denominator_metric: network.http_connections
    ...

## API

Since a rate is two numbers, you add to each one individually:

#![allow(unused)]
fn main() {
use glean_metrics::*;

// `connection_had_error` is your application's own state.
// Increment the numerator when the thing you are counting happens...
if connection_had_error {
    network::http_connection_error.add_to_numerator(1);
}

// ...and increment the denominator for every attempt.
network::http_connection_error.add_to_denominator(1);
}

If the rate uses an external denominator, adding to the denominator must be done through the denominator's counter API:

#![allow(unused)]
fn main() {
use glean_metrics;

// Numerators are still added through each rate's own API.
if connection_had_error {
    network::http_connection_error.add_to_numerator(1);
}
if connection_was_slow {
    network::http_connection_slow.add_to_numerator(1);
}

// The shared denominator is incremented through the counter's API.
// network::http_connection_error has no add_to_denominator method.
network::http_connections.add(1);
}

There are test APIs available too. Whether the rate has an external denominator or not, you can use this API to get the current value:

#![allow(unused)]
fn main() {
use glean::ErrorType;

use glean_metrics;

// Was anything recorded?
assert!(network::http_connection_error.test_get_value(None).is_some());
// Does it have the expected value?
assert_eq!((1, 1), network::http_connection_error.test_get_value(None).unwrap());
// Did the numerator or denominator ever have a negative value added?
assert_eq!(
0,
network::http_connection_error.test_get_num_recorded_errors(
ErrorType::InvalidValue
)
);
}

## Limits

• Numerator and Denominator only increment.
• Numerator and Denominator saturate at the largest value that can be represented as a 32-bit signed integer (2147483647).

## Examples

• How often did an HTTP connection error?
• How many documents used a given CSS Property?

## Recorded errors

• invalid_value: If either numerator or denominator is incremented by a negative value.

# Glean Pings

A ping is a bundle of related metrics, gathered in a payload to be transmitted. The ping payload is encoded in JSON format and contains one or more of the common sections with shared information.

If data collection is enabled, the Glean SDK provides a set of built-in pings that are assembled out of the box without any developer intervention. The following is a list of these built-in pings:

• baseline ping: A small ping sent every time the application goes to foreground and background. Going to foreground also includes when the application starts.
• metrics ping: The default ping for metrics. Sent approximately daily.
• events ping: The default ping for events. Sent every time the application goes to background or a certain number of events is reached.
• deletion-request ping: Sent when the user disables telemetry in order to request a deletion of their data.

Applications can also define and send their own custom pings when the schedules of these pings are not suitable.

There is also a high-level overview of how the metrics and baseline pings relate and the timings they record.

## Ping sections

Every ping has the following keys at the top-level:

• The ping_info section contains core metadata that is included in every ping.

• The client_info section contains information that identifies the client. It is included in most pings (including all built-in pings), but may be excluded from pings where we don't want to connect client information with the other metrics in the ping.

• The metrics section contains the submitted values for all metric types except for events. It has keys for each of the metric types, under which is data for each metric.

• The events section contains the events recorded in the ping.

See the payload documentation for more details for each metric type in the metrics and events section.

### The ping_info section

The following fields are included in the ping_info section, for every ping. Optional fields are marked accordingly.

| Field name | Type | Description |
| --- | --- | --- |
| seq | Counter | A running counter of the number of times pings of this type have been sent |
| experiments | Object | Optional. A dictionary of active experiments |
| start_time | Datetime | The time of the start of collection of the data in the ping, in local time and with minute precision, including timezone information. |
| end_time | Datetime | The time of the end of collection of the data in the ping, in local time and with minute precision, including timezone information. This is also the time this ping was generated and is likely well before ping transmission time. |
| reason | String | Optional. The reason the ping was submitted. The specific set of values and their meanings are defined for each ping type in the reasons field in the pings.yaml file. |

All the metrics surviving application restarts (e.g. seq, ...) are removed once the application using the Glean SDK is uninstalled.

#### The experiments object

This object (included in the ping_info section) contains experiment annotations keyed by the experiment id. Each annotation contains the experiment branch the client is enrolled in and may contain a string to string map with additional data in the extra key. Both the id and branch are truncated to 30 characters. See Using the Experiments API on how to record experiments data.

{
  "<id>": {
    "branch": "branch-id",
    "extra": {
      "some-key": "a-value"
    }
  }
}

### The client_info section

The following fields are included in the client_info section. Optional fields are marked accordingly.

| Field name | Type | Description |
| --- | --- | --- |
| app_build | String | The build identifier generated by the CI system (e.g. "1234/A"). For language bindings that provide automatic detection for this value (e.g. Android/Kotlin), in the unlikely event that the build identifier cannot be retrieved from the OS, it is set to inaccessible. For other language bindings, if the value was not provided through configuration, this metric gets set to Unknown. |
| app_channel | String | Optional. The product-provided release channel (e.g. "beta") |
| app_display_version | String | The user-visible version string (e.g. "1.0.3"). The meaning of the string (e.g. whether semver or a git hash) is application-specific. In the unlikely event this value cannot be obtained from the OS, it is set to "inaccessible". If it is accessible, but not set by the application, it is set to "Unknown". |
| architecture | String | The architecture of the device (e.g. "arm", "x86") |
| client_id | UUID | Optional. A UUID identifying a profile and allowing user-oriented correlation of data |
| device_manufacturer | String | Optional. The manufacturer of the device |
| device_model | String | Optional. The model name of the device. On Android, this is Build.MODEL, the user-visible name of the device. |
| first_run_date | Datetime | The date of the first run of the application, in local time and with day precision, including timezone information. |
| os | String | The name of the operating system (e.g. "linux", "Android", "ios") |
| os_version | String | The user-visible version of the operating system (e.g. "1.2.3") |
| android_sdk_version | String | Optional. The Android specific SDK version of the software running on this hardware device (e.g. "23") |
| telemetry_sdk_build | String | The version of the Glean SDK |
| locale | String | Optional. The locale of the application during initialization (e.g. "es-ES"). If the locale can't be determined on the system, the value is "und", to indicate "undetermined". |

All the metrics surviving application restarts (e.g. client_id, ...) are removed once the application using the Glean SDK is uninstalled.

## Ping submission

The pings that the Glean SDK generates are submitted to the Mozilla servers at specific paths, in order to provide additional metadata without the need to unpack the ping payload.

A typical submission URL path looks like /submit/<application-id>/<doc-type>/<glean-schema-version>/<document-id>,

where:

• <application-id>: a unique application id, automatically detected by the Glean SDK; this is the value returned by Context.getPackageName();
• <doc-type>: the name of the ping; this can be one of the pings available out of the box with the Glean SDK, or a custom ping;
• <glean-schema-version>: the version of the Glean ping schema;
• <document-id>: a unique identifier for this ping.
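The path components can be assembled mechanically. This sketch only mirrors the path layout described above; the function name and the example application id and document id are assumptions.

```python
def submission_path(application_id, doc_type, schema_version, document_id):
    """Assemble the documented ping submission path from its four components."""
    return f"/submit/{application_id}/{doc_type}/{schema_version}/{document_id}"

# e.g. for a baseline ping from a hypothetical application id:
path = submission_path(
    "org.mozilla.example", "baseline", "1",
    "a3b2e1c0-0001-4c2f-9d2e-000000000001",
)
```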

### Limitations

To keep resource usage in check, the Glean SDK enforces some limitations on ping uploading and ping storage.

• Rate limiting: only up to 10 ping submissions every 60 seconds are allowed. There are no exposed methods to change these rate limiting defaults yet; follow Bug 1647630 for updates.

• Request body size limiting: the body of a ping request may be up to 1MB. Pings that exceed this size are discarded and don't get uploaded. The size and number of discarded pings are recorded in the internal Glean metric glean.upload.discarded_exceeding_pings_size.

• Storage quota: Pending pings are stored on disk. The storage directory is scanned every time Glean is initialized, and upon scanning Glean checks its size. If this directory exceeds a size of 10MB or 250 pending ping files, pings are deleted to get the directory back to an accepted size. Pings are deleted oldest first, until the directory size is below the quota. The number of pings deleted due to exceeding the storage quota is recorded in the metric glean.upload.deleted_pings_after_quota_hit, and the size of the pending pings directory is recorded (regardless of whether the quota has been reached) in the metric glean.upload.pending_pings_directory_size. Note: deletion-request pings are stored in a different directory, are not subject to this limitation and never get deleted.
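The oldest-first pruning described in the storage quota limit can be sketched as follows. The function name and the file representation are illustrative assumptions, not the SDK's internals.

```python
def enforce_pending_pings_quota(pings, max_bytes=10 * 1024 * 1024, max_files=250):
    """pings: list of (modified_time, size_bytes) for pending ping files.

    Returns (kept, deleted_count), deleting oldest files first until the
    directory is under both the size and the file-count limits.
    """
    kept = sorted(pings, key=lambda ping: ping[0])  # oldest first
    total = sum(size for _, size in kept)
    deleted = 0
    while kept and (total > max_bytes or len(kept) > max_files):
        _, size = kept.pop(0)  # drop the oldest pending ping
        total -= size
        deleted += 1
    return kept, deleted
```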

A pre-defined set of headers is additionally sent along with the submitted ping:

| Header | Value | Description |
| --- | --- | --- |
| Content-Type | application/json; charset=utf-8 | Describes the data sent to the server |
| User-Agent | Defaults to e.g. Glean/0.40.0 (Kotlin on Android), where 0.40.0 is the Glean SDK version number and Kotlin on Android is the name of the language used by the binding that sent the request plus the name of the platform it is running on. | Describes the application sending the ping using the Glean SDK |
| Date | e.g. Mon, 23 Jan 2019 10:10:10 GMT+00:00 | Submission date/time in GMT/UTC+0 offset |
| X-Client-Type | Glean | Custom header to support handling of Glean pings in the legacy pipeline |
| X-Client-Version | e.g. 0.40.0 | The Glean SDK version, sent as a custom header to support handling of Glean pings in the legacy pipeline |
| X-Debug-ID | Optional, e.g. test-tag | Debug header attached to Glean pings when using the debug tools |
| X-Source-Tags | Optional, e.g. automation, perf | A list of tags to associate with the ping, useful for clustering pings at analysis time, for example to tell data generated from CI from other data. |
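Assembling this header set for a request might look like the sketch below. The function name and parameters are assumptions, and the Date header is omitted since it is computed at send time.

```python
def glean_ping_headers(sdk_version, language_and_platform,
                       debug_tag=None, source_tags=None):
    """Build the documented set of Glean ping headers (illustrative sketch)."""
    headers = {
        "Content-Type": "application/json; charset=utf-8",
        "User-Agent": f"Glean/{sdk_version} ({language_and_platform})",
        "X-Client-Type": "Glean",
        "X-Client-Version": sdk_version,
    }
    if debug_tag is not None:
        headers["X-Debug-ID"] = debug_tag  # only present when debugging
    if source_tags:
        headers["X-Source-Tags"] = ",".join(source_tags)
    return headers
```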

## Defining foreground and background state

These docs refer to application 'foreground' and 'background' state in several places.

### Foreground

For Android, this specifically means the activity becomes visible to the user, it has entered the Started state, and the system invokes the onStart() callback.

### Background

This specifically means when the activity is no longer visible to the user, it has entered the Stopped state, and the system invokes the onStop() callback.

This may occur if the user uses the Overview button to change to another app, presses the Back button to navigate to a previous application or the home screen, or presses the Home button to return to the home screen. It can also occur if the user navigates away from the application through a notification or other means.

The system may also call onStop() when the activity has finished running, and is about to be terminated.

### Foreground

For iOS, the Glean SDK attaches to the willEnterForegroundNotification. This notification is posted by the OS shortly before an app leaves the background state on its way to becoming the active app.

### Background

For iOS, this specifically means when the app is no longer visible to the user, or when the UIApplicationDelegate receives the applicationDidEnterBackground event.

This may occur if the user opens the task switcher to change to another app, or if the user presses the Home button to show the home screen. This can also occur if the user navigates away from the app through a notification or other means.

Note: Glean does not currently support Scene based lifecycle events that were introduced in iOS 13.

# Ping schedules and timings overview

Full reference details about the metrics and baseline ping schedules are available elsewhere.

The following diagram shows a typical timeline of a mobile application, when pings are sent and what timing-related information is included.

There are two distinct runs of the application, where the OS shut down the application at the end of Run 1, and the user started it up again at the beginning of Run 2.

There are three distinct foreground sessions, where the application was visible on the screen and the user was able to interact with it.

The rectangles for the baseline and metrics pings represent the measurement windows of those pings, which always start exactly at the end of the preceding ping. The ping_info.start_time and ping_info.end_time metrics included in these pings correspond to the beginning and end of their measurement windows.

The baseline.duration metric (included only in baseline pings) corresponds to the amount of time the application spent in the foreground which, since measurement windows always extend to the next ping, is not always the same as the baseline ping's measurement window.

The submission_timestamp is the time the ping was received at the telemetry endpoint, added by the ingestion pipeline. It is not exactly the same as ping_info.end_time, since there may be various networking and system latencies both on the client and in the ingestion pipeline (represented by the dotted horizontal line, not to scale). Also of note is that start_time/end_time are measured using the client's real-time clock in its local timezone, which is not a fully reliable source of time.

The "Baseline 4" ping illustrates an important corner case. When "Session 2" ended, the OS also shut down the entire process, and the Glean SDK did not have an opportunity to send a baseline ping immediately. In this case, it is sent at the next available opportunity when the application starts up again in "Run 2". This baseline ping is annotated with the reason code dirty_startup.

The "Metrics 2" ping likewise illustrates another important corner case. "Metrics 1" was able to be sent at the target time of 04:00 (local device time) because the application was currently running. However, the next time 04:00 came around, the application was not active, so the Glean SDK was unable to send a metrics ping. It is sent at the next available opportunity, when the application starts up again in "Run 2". This metrics ping is annotated with the reason code overdue.

# The baseline ping

## Description

This ping is intended to provide metrics that are managed by the Glean SDK itself, and not explicitly set by the application or included in the application's metrics.yaml file.

Note: As the baseline ping was specifically designed for mobile operating systems, it is not sent when using the Glean Python bindings.

## Scheduling

The baseline ping is automatically submitted with reason: active when the application becomes active (on mobile, this means coming to the foreground). These baseline pings do not contain duration.

The baseline ping is automatically submitted with reason: inactive when the application becomes inactive (on mobile, this means going to the background). If no baseline ping was triggered when becoming inactive (e.g. the process was abruptly killed), a baseline ping with reason dirty_startup is submitted on the next application startup. This only happens from the second application start onward.
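The scheduling rules above amount to a small decision function. This is an illustrative sketch, not Glean's implementation; the function name and parameters are assumptions.

```python
def baseline_ping_reasons(lifecycle_event, previous_run_ended_cleanly):
    """Which baseline ping(s) get submitted for a lifecycle event (sketch)."""
    if lifecycle_event == "active":
        reasons = []
        if not previous_run_ended_cleanly:
            # The process was killed before an 'inactive' ping could be sent,
            # so a dirty_startup ping is submitted at the next startup.
            reasons.append("dirty_startup")
        reasons.append("active")
        return reasons
    if lifecycle_event == "inactive":
        return ["inactive"]
    return []
```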

## Contents

The baseline ping also includes the common ping sections found in all pings.

It also includes a number of metrics defined in the Glean SDK itself.

### Querying ping contents

A quick note about querying ping contents (i.e. for sql.telemetry.mozilla.org): Each metric in the baseline ping is organized by its metric type, and uses a namespace of glean.baseline. For instance, in order to select duration you would use metrics.timespan['glean.baseline.duration']. If you were trying to select a String based metric such as os, then you would use metrics.string['glean.baseline.os']

## Example baseline ping

{
  "ping_info": {
    "experiments": {
      "third_party_library": {
        "branch": "enabled"
      }
    },
    "seq": 0,
    "start_time": "2019-03-29T09:50-04:00",
    "end_time": "2019-03-29T09:53-04:00",
    "reason": "foreground"
  },
  "client_info": {
    "telemetry_sdk_build": "0.49.0",
    "first_run_date": "2019-03-29-04:00",
    "os": "Android",
    "android_sdk_version": "27",
    "os_version": "8.1.0",
    "device_model": "Android SDK built for x86",
    "architecture": "x86",
    "app_build": "1",
    "app_display_version": "1.0",
    "client_id": "35dab852-74db-43f4-8aa0-88884211e545"
  },
  "metrics": {
    "timespan": {
      "glean.baseline.duration": {
        "value": 52,
        "time_unit": "second"
      }
    }
  }
}

# The deletion-request ping

## Description

This ping is submitted when a user opts out of sending technical and interaction data.

This ping contains the client id.

This ping is intended to communicate to the Data Pipeline that the user wishes to have their reported Telemetry data deleted. As such, it attempts to send itself at the moment the user opts out of data collection, and keeps retrying until it is successfully sent.

Note: It is possible to send secondary ids in the deletion request ping. For instance, if the application is migrating from legacy telemetry to Glean, the legacy client ids can be added to the deletion request ping by creating a metrics.yaml entry for the id to be added with a send_in_pings value of deletion_request.

An example metrics.yaml entry might look like this:

legacy_client_id:
  type: uuid
  description:
    A UUID uniquely identifying the legacy client.
  send_in_pings:
    - deletion_request
  ...

## Scheduling

The deletion-request ping is automatically submitted when upload is disabled in Glean. If upload fails, it is retried after Glean is initialized.

## Contents

The deletion-request ping does not contain additional metrics aside from any secondary ids that have been added.

## Example deletion-request ping

{
  "ping_info": {
    "seq": 0,
    "start_time": "2019-12-06T09:50-04:00",
    "end_time": "2019-12-06T09:53-04:00"
  },
  "client_info": {
    "telemetry_sdk_build": "22.0.0",
    "first_run_date": "2019-03-29-04:00",
    "os": "Android",
    "android_sdk_version": "28",
    "os_version": "9",
    "device_model": "Android SDK built for x86",
    "architecture": "x86",
    "app_build": "1",
    "app_display_version": "1.0",
    "client_id": "35dab852-74db-43f4-8aa0-88884211e545"
  },
  "metrics": {
    "uuid": {
      "legacy_client_id": "5faffa6d-6147-4d22-a93e-c1dbd6e06171"
    }
  }
}

# The metrics ping

## Description

The metrics ping is intended for all of the metrics that are explicitly set by the application or are included in the application's metrics.yaml file (except events). The reported data is tied to the ping's measurement window, which is the time between the collection of two metrics pings. Ideally, this window is expected to be about 24 hours, given that the collection is scheduled daily at 04:00. However, the metrics ping is only submitted while the application is actually running, so in practice, it may not meet the 04:00 target very frequently. Data in the ping_info section of the ping can be used to infer the length of this window and the reason that triggered the ping to be submitted. If the application crashes, unsent recorded metrics are sent along with the next metrics ping.

Additionally, it is undesirable to mix metric recording from different versions of the application. Therefore, if a version upgrade is detected, the metrics ping is collected immediately before further metrics from the new version are recorded.

Note: As the metrics ping was specifically designed for mobile operating systems, it is not sent when using the Glean Python bindings.

## Scheduling

The desired behavior is to collect the ping at the first available opportunity after 04:00 local time on a new calendar day, but given constraints of the platform, it can only be submitted while the application is running. This breaks down into four scenarios:

1. the application was just installed;
2. the application was just upgraded (the version of the app is different from the last time the app was run);
3. the application was just started (after a crash or a long inactivity period);
4. the application was running at 04:00.

In the first case, since the application was just installed, if the due time for the current calendar day has passed, a metrics ping is immediately generated and scheduled for sending (reason code overdue). Otherwise, if the due time for the current calendar day has not passed, a ping collection is scheduled for that time (reason code today).

In the second case, if a version change is detected at startup, the metrics ping is immediately submitted so that metrics from one version are not aggregated with metrics from another version (reason code upgrade).

In the third case, if the metrics ping was not already collected on the current calendar day, and it is before 04:00, a collection is scheduled for 04:00 on the current calendar day (reason code today). If it is after 04:00, a new collection is scheduled immediately (reason code overdue). Lastly, if a ping was already collected on the current calendar day, the next one is scheduled for collecting at 04:00 on the next calendar day (reason code tomorrow).

In the fourth and last case, the application is running during a scheduled ping collection time. The next ping is scheduled for 04:00 the next calendar day (reason code reschedule).
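The startup cases above can be sketched as a single decision function. This is an illustrative sketch of the documented rules, not the SDK's scheduler; the function name and parameters are assumptions.

```python
from datetime import datetime, time

DUE_TIME = time(4, 0)  # 04:00 local time

def metrics_ping_startup_decision(now, last_sent_date, version_changed):
    """Reason code and action chosen at application startup (sketch)."""
    if version_changed:
        # Don't mix metrics from two application versions.
        return ("upgrade", "submit immediately")
    if last_sent_date == now.date():
        # Already collected today: wait for the next calendar day.
        return ("tomorrow", "schedule for 04:00 tomorrow")
    if now.time() < DUE_TIME:
        return ("today", "schedule for 04:00 today")
    return ("overdue", "submit immediately")
```

For example, starting the application at 17:00 when the last metrics ping was sent the previous day yields the overdue reason code.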

More scheduling examples are included below.

## Contents

The metrics ping contains all of the metrics defined in metrics.yaml (except events) that don't specify a ping or that specify default in their send_in_pings property.

Additionally, error metrics in the glean.error category are included in the metrics ping.

The metrics ping also includes the common ping_info and client_info sections, as well as a number of metrics defined in the Glean SDK itself.

### Querying ping contents

Information about query ping contents is available in Accessing Glean data in the Firefox data docs.

## Scheduling Examples

### Crossing due time with the application closed

1. The application is opened on Feb 7 on 15:00, closed on 15:05.

• Glean records one metric A (say startup time in ms) during this measurement window MW1.
2. The application is opened again on Feb 8 on 17:00.

• Glean notes that we passed local 04:00 since MW1.

• Glean closes MW1, with:

• start_time=Feb7/15:00;
• end_time=Feb8/17:00.
• Glean records metric A again, into MW2, which has a start_time of Feb8/17:00.

### Crossing due time and changing timezones

1. The application is opened on Feb 7 on 15:00 in timezone UTC, closed on 15:05.

• Glean records one metric A (say startup time in ms) during this measurement window MW1.
2. The application is opened again on Feb 8 on 17:00 in timezone UTC+1.

• Glean notes that we passed local 04:00 UTC+1 since MW1.

• Glean closes MW1, with:

• start_time=Feb7/15:00/UTC;
• end_time=Feb8/17:00/UTC+1.
• Glean records metric A again, into MW2.

### The application doesn’t run in a week

1. The application is opened on Feb 7 on 15:00 in timezone UTC, closed on 15:05.

• Glean records one metric A (say startup time in ms) during this measurement window MW1.
2. The application is opened again on Feb 16 on 17:00 in timezone UTC.

• Glean notes that we passed local 04:00 UTC since MW1.

• Glean closes MW1, with:

• start_time=Feb7/15:00/UTC;
• end_time=Feb16/17:00/UTC.
• Glean records metric A again, into MW2.

### The application doesn’t run for a week, and when it’s finally re-opened the timezone has changed

1. The application is opened on Feb 7 on 15:00 in timezone UTC, closed on 15:05.

• Glean records one metric A (say startup time in ms) during this measurement window MW1.
2. The application is opened again on Feb 16 on 17:00 in timezone UTC+1.

• Glean notes that we passed local 04:00 UTC+1 since MW1.

• Glean closes MW1, with:

• start_time=Feb7/15:00/UTC
• end_time=Feb16/17:00/UTC+1.
• Glean records metric A again, into MW2.

### The user changes timezone in an extreme enough fashion that they cross 04:00 twice on the same date

1. The application is opened on Feb 7 at 15:00 in timezone UTC+11, closed at 15:05.

• Glean records one metric A (say startup time in ms) during this measurement window MW1.
2. The application is opened again on Feb 8 at 04:30 in timezone UTC+11.

• Glean notes that we passed local 04:00 UTC+11.

• Glean closes MW1, with:

• start_time=Feb7/15:00/UTC+11;
• end_time=Feb8/04:30/UTC+11.
• Glean records metric A again, into MW2.

3. The user changes to timezone UTC-10 and opens the application on Feb 7 at 22:00 in timezone UTC-10.

• Glean records metric A again, into MW2 (not MW1, which was already sent).
4. The user opens the application on Feb 8 at 05:00 in timezone UTC-10.

• Glean notes that we have not yet passed local 04:00 on Feb 9.
• Measurement window MW2 remains the current measurement window.
5. The user opens the application on Feb 9 at 07:00 in timezone UTC-10.

• Glean notes that we have passed local 04:00 on Feb 9.

• Glean closes MW2 with:

• start_time=Feb8/04:30/UTC+11;
• end_time=Feb9/07:00/UTC-10.
• Glean records metric A again, into MW3.
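The scenarios above all apply the same due-time rule: a measurement window closes once local 04:00 has passed on a calendar day after the day the previous ping was sent. A minimal sketch of that rule (in Python; naive datetimes stand in for local wall-clock time, and the timezone-offset bookkeeping Glean performs is ignored):

```python
from datetime import date, datetime, time, timedelta

DUE_HOUR = 4  # the ping is due at local 04:00

def ping_is_due(last_sent_date: date, now_local: datetime) -> bool:
    """True once local 04:00 has passed on a calendar day after the
    day the previous ping was sent (a simplified sketch)."""
    due = datetime.combine(last_sent_date + timedelta(days=1),
                           time(hour=DUE_HOUR))
    return now_local >= due
```

In the first scenario, MW1 (sent-date Feb 7) becomes due at Feb 8 04:00 local time, so opening the application at Feb 8 17:00 closes it.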

# The events ping

## Description

The events ping's purpose is to transport all of the event metric information. If the application crashes, an events ping is generated next time the application starts with events that were not sent before the crash.

## Scheduling

The events ping is collected under the following circumstances:

1. Normally, it is collected when the application becomes inactive (on mobile, this means going to background), if there are any recorded events to send.

2. When the queue of events exceeds Glean.configuration.maxEvents (default 500).

3. If there are any unsent events found on disk when starting the application. It would be impossible to coordinate the timestamps across a reboot, so it's best to just collect all events from the previous run into their own ping, and start over.

All of these cases are handled automatically, with no intervention or configuration required by the application.

Note: Since the Python bindings don't have a concept of "going to background", case (1) above does not apply.
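Case (2) above, overflow of the event queue, can be sketched as follows. `EventBuffer` and its names are illustrative, not the Glean API:

```python
class EventBuffer:
    """Illustrative sketch, not the Glean API: queue events and hand
    the whole batch to a submit callback once the configured maximum
    is reached (Glean's default maxEvents is 500)."""

    def __init__(self, max_events=500, submit=lambda events: None):
        self.max_events = max_events
        self.submit = submit
        self.events = []

    def record(self, event):
        self.events.append(event)
        if len(self.events) >= self.max_events:
            # Hand off the full batch and start a fresh queue.
            self.submit(list(self.events))
            self.events.clear()
```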

## Contents

At the top-level, this ping contains the following keys:

• ping_info: The information common to all pings.

• events: An array of all of the events that have occurred since the last time the events ping was sent.

Each entry in the events array is an object with the following properties:

• "timestamp": The milliseconds relative to the first event in the ping.

• "category": The category of the event, as defined by its location in the metrics.yaml file.

• "name": The name of the event, as defined in the metrics.yaml file.

• "extra" (optional): A mapping of strings to strings providing additional data about the event. The keys are restricted to 40 characters and values in this map will never exceed 100 characters.
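The key and value limits can be expressed as a small validation helper. This is a sketch of the documented limits only; Glean itself records an error metric and truncates rather than merely validating:

```python
# Documented limits for the "extra" mapping of an event.
MAX_EXTRA_KEY_LEN = 40
MAX_EXTRA_VALUE_LEN = 100

def extras_within_limits(extra: dict) -> bool:
    """Return True if every key/value pair respects the limits."""
    return all(
        len(key) <= MAX_EXTRA_KEY_LEN and len(value) <= MAX_EXTRA_VALUE_LEN
        for key, value in extra.items()
    )
```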

### Example event JSON

```json
{
  "ping_info": {
    "experiments": {
      "third_party_library": {
        "branch": "enabled"
      }
    },
    "seq": 0,
    "start_time": "2019-03-29T09:50-04:00",
    "end_time": "2019-03-29T10:02-04:00"
  },
  "client_info": {
    "telemetry_sdk_build": "0.49.0",
    "first_run_date": "2019-03-29-04:00",
    "os": "Android",
    "android_sdk_version": "27",
    "os_version": "8.1.0",
    "device_model": "Android SDK built for x86",
    "architecture": "x86",
    "app_build": "1",
    "app_display_version": "1.0",
    "client_id": "35dab852-74db-43f4-8aa0-88884211e545"
  },
  "events": [
    {
      "timestamp": 123456789,
      "category": "examples",
      "name": "event_example",
      "extra": {}
    },
    {
      "timestamp": 123456791,
      "category": "examples",
      "name": "event_example"
    }
  ]
}
```
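A sketch of converting absolute millisecond timestamps into the relative form the events array uses:

```python
def relative_timestamps(absolute_ms):
    """Express a sorted list of absolute millisecond timestamps
    relative to the first event, as an events ping does (sketch)."""
    if not absolute_ms:
        return []
    first = absolute_ms[0]
    return [t - first for t in absolute_ms]
```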

# Custom pings

Applications can define metrics that are sent in custom pings. Unlike the built-in pings, custom pings are sent explicitly by the application.

This is useful when the scheduling of the built-in pings (metrics, baseline and events) is not appropriate for your data. Since the timing of the submission of custom pings is handled by the application, the measurement window is under the application's control.

This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times while a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, that kind of reasoning would be much harder to do with any certainty.

## Defining a custom ping

Custom pings must be defined in a pings.yaml file, which lives in the same directory as your app's metrics.yaml file.

Ping names are limited to lowercase letters from the ISO basic Latin alphabet, hyphens, and a maximum of 30 characters.
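A hypothetical validator for that naming rule (the reserved built-in ping names, which may not be reused for custom pings, are also rejected):

```python
import re

# Sketch of the documented naming rule: lowercase ISO basic Latin
# letters and hyphens, at most 30 characters.
PING_NAME = re.compile(r"[a-z-]{1,30}")

# Reserved built-in ping names.
RESERVED = {"baseline", "metrics", "events", "deletion-request", "all-pings"}

def is_valid_ping_name(name: str) -> bool:
    return bool(PING_NAME.fullmatch(name)) and name not in RESERVED
```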

Each ping has the following parameters:

• description (required): A textual description describing the purpose of the ping. It may contain markdown syntax.
• include_client_id (required): A boolean indicating whether to include the client_id in the client_info section.
• send_if_empty (optional, default: false): A boolean indicating whether the ping is sent even if it contains no metric data.
• reasons (optional, default: {}): The reasons that this ping may be sent. The keys are the reason codes, and the values are a textual description of each reason. The ping payload will (optionally) contain one of these reasons in the ping_info.reason field.

In addition to these parameters, pings also support the parameters related to data review and expiration defined in common metric parameters: description, notification_emails, bugs, and data_reviews.

For example, to define a custom ping called search specifically for search information:

```yaml
# Required to indicate this is a pings.yaml file
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

search:
  description: >
    A ping to record search data.
  include_client_id: false
  notification_emails:
    - CHANGE-ME@example.com
  bugs:
    - http://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review
```

Note: the names baseline, metrics, events, deletion-request and all-pings are reserved and may not be used as the name of a custom ping.

## Loading custom ping metadata into your application or library

The Glean SDK build generates code from pings.yaml in a Pings object, which must be instantiated so Glean can send pings by name.

In Kotlin, this object must be registered with the Glean SDK from your startup code before calling Glean.initialize (such as in your application's onCreate method or a function called from that method).

```kotlin
import org.mozilla.yourApplication.GleanMetrics.Pings

override fun onCreate() {
    Glean.registerPings(Pings)

    Glean.initialize(applicationContext, uploadEnabled = true)
}
```

In Swift, this object must be registered with the Glean SDK from your startup code before calling Glean.shared.initialize (such as in your application's UIApplicationDelegate application(_:didFinishLaunchingWithOptions:) method or a function called from that method).

```swift
import Glean

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_: UIApplication, didFinishLaunchingWithOptions _: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        Glean.shared.registerPings(GleanMetrics.Pings)

        Glean.shared.initialize(uploadEnabled: true)
    }
}
```

For Python, the pings.yaml file must be available and loaded at runtime. If your project is a script (i.e. just Python files in a directory), you can load the pings.yaml before calling Glean.initialize using:

```python
from glean import Glean, load_pings

pings = load_pings("pings.yaml")

Glean.initialize(
    application_id="my-app-id",
    application_version="0.1.0",
    upload_enabled=True,
)
```

If your project is a distributable Python package, you need to include the pings.yaml file using one of the myriad ways to include data in a Python package and then use pkg_resources.resource_filename() to get the filename at runtime.

```python
from glean import load_pings
from pkg_resources import resource_filename

pings = load_pings(resource_filename(__name__, "pings.yaml"))
```

In C#, this object must be registered with the Glean SDK from your startup code (such as in your application's Main method or a function called from that method).

```csharp
using static Mozilla.YourApplication.GleanMetrics.Pings;

class Program
{
    static void Main(string[] args)
    {
        // ...
        Glean.RegisterPings(Pings);
        // ...
    }
}
```

## Sending metrics in a custom ping

To send a metric on a custom ping, you add the custom ping's name to the send_in_pings parameter in the metrics.yaml file. For example, to define a new metric to record the default search engine, which is sent in a custom ping called search, put search in the send_in_pings parameter. Note that it is an error to specify a ping in send_in_pings that does not also have an entry in pings.yaml.

```yaml
search.default:
  name:
    type: string
    description: >
      The name of the default search engine.
    send_in_pings:
      - search
```

If this metric should also be sent in the default ping for the given metric type, you can add the special value default to send_in_pings:

```yaml
    send_in_pings:
      - search
      - default
```

## Submitting a custom ping

To collect and queue a custom ping for eventual uploading, call the submit method on the PingType object that the Glean SDK generated for your ping. By default, if the ping doesn't currently have any events or metrics set, submit will do nothing. However, if the send_if_empty flag is set to true in the ping definition, it will always be submitted.

For example, to submit the custom ping defined above:

```kotlin
import org.mozilla.yourApplication.GleanMetrics.Pings

Pings.search.submit(
    GleanMetrics.Pings.searchReasonCodes.performed
)
```

```swift
import Glean

GleanMetrics.Pings.shared.search.submit(
    reason: .performed
)
```

```python
from glean import load_pings

pings = load_pings("pings.yaml")

pings.search.submit(pings.search_reason_codes.PERFORMED)
```

```csharp
using static Mozilla.YourApplication.GleanMetrics.Pings;

Pings.search.Submit(
    GleanMetrics.Pings.searchReasonCodes.performed
);
```

```rust
use glean::Pings;

pings::search.submit(pings::SearchReasonCodes::Performed);
```

If none of the metrics for the ping contain data, the ping is not sent (unless send_if_empty is set to true in the definition file).

# Unit testing Glean custom pings for Android

Applications defining custom pings can use the strategy defined in this document to test these pings in unit tests.

## General testing strategy

The schedule of custom pings depends on the specific application implementation, since it is up to the SDK user to define the ping semantics. This makes writing unit tests for custom pings a bit more involved.

One possible strategy could be to wrap the Glean SDK API call to send the ping in a function that can be mocked in the unit test. This would allow for checking the status and the values of the metrics contained in the ping at the time in which the application would have sent it.

## Example testing of a custom ping

Let us start by defining a custom ping with a sample metric in it. Here is the pings.yaml file:

```yaml
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

my-custom-ping:
  description: >
    This ping is intended to showcase the recommended testing strategy for
    custom pings.
  include_client_id: false
  bugs:
    - https://bugzilla.mozilla.org/1556985/
  data_reviews:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=1556985
  notification_emails:
    - custom-ping-owner@example.com
```

And here is the metrics.yaml

## gleanGenerateMarkdownDocs

The Glean SDK can automatically generate Markdown documentation for metrics and pings defined in the registry files, in addition to the metrics API code.

ext.gleanGenerateMarkdownDocs = true

## gleanYamlFiles

By default, the Glean Gradle plugin will look for metrics.yaml and pings.yaml files in the same directory that the plugin is included from in your application or library. To override this, ext.gleanYamlFiles may be set to a list of explicit paths.

ext.gleanYamlFiles = ["$rootDir/glean-core/metrics.yaml", "$rootDir/glean-core/pings.yaml"]

## Offline builds of Android applications that use Glean

The Glean SDK has basic support for building Android applications that use Glean in offline mode.

The Glean SDK uses a Python script, glean_parser to generate code for metrics from the metrics.yaml and pings.yaml files when Glean-using applications are built. When online, the pieces necessary to run this script are installed automatically.

For offline builds, the Python environment and packages for glean_parser and its dependencies must be provided prior to building the Glean-using application.

To build a Glean-using application in offline mode, do the following:

• Install Python 3.6 or later and ensure it's on the PATH.

• On Linux, installing Python from your Linux distribution's package manager is usually sufficient.

• On macOS, installing Python from homebrew is known to work, but other package managers may also work.

• On Windows, we recommend installing one of the official Python installers from python.org.

• Determine the version of glean_parser required.

• It can be really difficult to manually determine the version of glean_parser required for a given application, since it needs to be tracked through android-components, to glean-core and finally to glean_parser. The required version of glean_parser can be determined by running the following at the top level of the Glean-using application:

```shell
$ ./gradlew | grep "Requires glean_parser"
Requires glean_parser==1.28.1
```

• Download packages for glean_parser and its dependencies:

• In the root directory of the Glean-using project, create a directory called glean-wheels and cd into it.

• Download packages for glean_parser and its dependencies, replacing X.Y.Z with the correct version of glean_parser:

```shell
$ python3 -m pip download glean_parser==X.Y.Z
```

• Build the Glean-using project using ./gradlew, but passing in the --offline flag.

There are a couple of environment variables that control offline building:

• To override the location of the Python interpreter to use, set the GLEAN_PYTHON environment variable. If unset, the first Python interpreter on the PATH will be used.

• To override the location of the downloaded Python wheels, set the GLEAN_PYTHON_WHEELS_DIR environment variable. If unset, ${projectDir}/glean-wheels will be used.

# Focused Use Cases

Here is a list of specific examples of using Glean to instrument different situations.

## Instrumenting Android Crashes With Glean

This is a simple example illustrating a strategy for instrumenting crashes in Android applications using Glean.

# Instrumenting Android crashes with the Glean SDK

One of the things that might be useful to collect data on in an Android application is crashes. This guide will walk through a basic strategy for instrumenting an Android application with crash telemetry using a custom ping.

Note: This is a very simple example of instrumenting crashes using the Glean SDK. There will be challenges to using this approach in a production application that should be considered. For instance, when an app crashes it can be in an unknown state and may not be able to do things like upload data to a server. The recommended way of instrumenting crashes with Android Components is lib-crash, which takes into consideration things like multiple processes and persistence.

## Before You Start

There are a few things that need to be installed in order to proceed, mainly Android Studio. If you include the Android SDK, Android Studio can take a little while to download and install. This walk-through assumes some knowledge of Android application development. Knowing where to go to create a new project and how to add dependencies to a Gradle file will be helpful in following this guide.

## Setup Build Configuration

Please follow the instructions in the "Adding Glean to your project" chapter in order to set up Glean in an Android project.
### Add A Custom Metric

Since crashes will be instrumented with some custom metrics, the next step is to add a metrics.yaml file to define the metrics used to record the crash information, and a pings.yaml file to define a custom ping which will give some control over the scheduling of the uploading. See "Adding new metrics" for more information about adding metrics.

What metric type should be used to represent the crash data? While this could be implemented several ways, an event is an excellent choice, simply because events capture information in a nice concise way and they have a built-in way of passing additional information using the extras field. If it is necessary to pass along the cause of the exception or a few lines of description, events let us do that easily (with some limitations).

Now that a metric type has been chosen to represent the metric, the next step is creating the metrics.yaml. Inside of the root application folder of the Android Studio project create a new file named metrics.yaml. After adding the schema definition and event metric definition, the metrics.yaml should look like this:

```yaml
# Required to indicate this is a metrics.yaml file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

crash:
  exception:
    type: event
    description: |
      Event to record crashes caused by unhandled exceptions
    notification_emails:
      - crashes@example.com
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
    expires: 2021-01-01
    send_in_pings:
      - crash
    extra_keys:
      cause:
        description: The cause of the crash
      message:
        description: The exception message
```

As a brief explanation, this creates a metric called exception within a metric category called crash. There is a text description and the required notification_emails, bugs, data_reviews, and expires fields. Note that the send_in_pings field has a value of - crash: the crash event metric will be sent via a custom ping named crash (which hasn't been created yet). Finally, note the extra_keys field, which has two keys defined, cause and message. This allows for sending additional information along with the event, associated with these keys.

Note: For Mozilla applications, a mandatory data review is required in order to collect information with the Glean SDK.

Define the custom ping that will help control the upload scheduling by creating a pings.yaml file in the same directory as the metrics.yaml file. For more information about adding custom pings, see the section on custom pings.
The name of the ping will be crash, so the pings.yaml file should look like this:

```yaml
# Required to indicate this is a pings.yaml file
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

crash:
  description: >
    A ping to transport crash data
  include_client_id: true
  notification_emails:
    - crash@example.com
  bugs:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
  data_reviews:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
```

Before the newly defined metric or ping can be used, the application must first be built. This will cause glean_parser to execute and generate the API files that represent the metric and ping that were newly defined.

Note: If changes to the YAML files aren't showing up in the project, try running the clean task on the project before building, any time one of the Glean YAML files has been modified.

It is recommended that Glean be initialized as early in the application startup as possible, which is why it's good to use a custom Application, like the Glean Sample App's GleanApplication.kt. Initializing Glean in Application.onCreate() is ideal for this purpose. Start by adding the import statement to allow the usage of the custom ping that was created, adding the following to the top of the file:

```kotlin
import org.mozilla.gleancrashexample.GleanMetrics.Pings
```

Next, register the custom ping by calling Glean.registerPings(Pings) in the onCreate() function, preferably before calling Glean.initialize(). The completed function should look something like this:

```kotlin
override fun onCreate() {
    super.onCreate()

    // Register the application's custom pings.
    Glean.registerPings(Pings)

    // Initialize the Glean library
    Glean.initialize(applicationContext)
}
```

This completes the registration of the custom ping with the Glean SDK so that it knows about it and can manage its storage and other important details, like sending it when submit() is called.
### Instrument The App To Record The Event

In order to make the custom Application class handle uncaught exceptions, extend the class definition by adding Thread.UncaughtExceptionHandler as an inherited interface, like this:

```kotlin
class MainActivity : AppCompatActivity(), Thread.UncaughtExceptionHandler {
    ...
}
```

As part of implementing the Thread.UncaughtExceptionHandler interface, the custom Application needs to implement the override of the uncaughtException() function. An example of this override that records data and sends the ping could look something like this:

```kotlin
override fun uncaughtException(thread: Thread, exception: Throwable) {
    Crash.exception.record(
        mapOf(
            Crash.exceptionKeys.cause to exception.cause!!.toString(),
            Crash.exceptionKeys.message to exception.message!!
        )
    )
    Pings.crash.submit()
}
```

This records data to the Crash.exception metric from the metrics.yaml. The category of the metric is crash and the name is exception, so it is accessed by calling record() on the Crash.exception object. The extra information for the cause and the message is set as well. Finally, calling Pings.crash.submit() forces the crash ping to be scheduled for sending.

The final step is to register the custom Application as the default uncaught exception handler by adding the following to the onCreate() function after Glean.initialize(this):

```kotlin
Thread.setDefaultUncaughtExceptionHandler(this)
```

### Next Steps

This information didn't really get recorded by anything, as it would be rejected by the telemetry pipeline unless the application was already known. In order to collect telemetry from a new application, additional work is necessary that is beyond the scope of this example. For data to be collected from your project, metadata must be added to the probe_scraper. The instructions for accomplishing this can be found in the probe_scraper documentation.

# Metrics

This document enumerates the metrics collected by this project using the Glean SDK.
This project may depend on other projects which also collect metrics. This means you might have to go searching through the dependency tree to get a full picture of everything collected by this project.

# Pings

## all-pings

These metrics are sent in every ping.

All Glean pings contain built-in metrics in the ping_info and client_info sections. In addition to those built-in metrics, the following metrics are added to the ping:

| Name | Type | Description | Data reviews | Extras | Expiration | Data Sensitivity |
| --- | --- | --- | --- | --- | --- | --- |
| glean.error.invalid_label | labeled_counter | Counts the number of times a metric was set with an invalid label. The labels are the category.name identifier of the metric. | Bug 1499761 | | never | 1 |
| glean.error.invalid_overflow | labeled_counter | Counts the number of times a metric was set to a value that overflowed. The labels are the category.name identifier of the metric. | Bug 1591912 | | never | 1 |
| glean.error.invalid_state | labeled_counter | Counts the number of times a timing metric was used incorrectly. The labels are the category.name identifier of the metric. | Bug 1499761 | | never | 1 |
| glean.error.invalid_value | labeled_counter | Counts the number of times a metric was set to an invalid value. The labels are the category.name identifier of the metric. | Bug 1499761 | | never | 1 |

## baseline

This is a built-in ping that is assembled out of the box by the Glean SDK. See the Glean SDK documentation for the baseline ping.

This ping is sent if empty. This ping includes the client id.

Data reviews for this ping:

Bugs related to this ping:

Reasons this ping may be sent:

• active: The ping was submitted when the application became active again, which includes when the application starts. In earlier versions, this was called foreground. *Note*: this ping will not contain the glean.baseline.duration metric.

• dirty_startup: The ping was submitted at startup, because the application process was killed before the Glean SDK had the chance to generate this ping before becoming inactive, in the last session. *Note*: this ping will not contain the glean.baseline.duration metric.

• inactive: The ping was submitted when becoming inactive. In earlier versions, this was called background.

All Glean pings contain built-in metrics in the ping_info and client_info sections. In addition to those built-in metrics, the following metrics are added to the ping:

| Name | Type | Description | Data reviews | Extras | Expiration | Data Sensitivity |
| --- | --- | --- | --- | --- | --- | --- |
| glean.baseline.duration | timespan | The duration of the last foreground session. | Bug 1512938 | | never | 1, 2 |
| glean.validation.first_run_hour | datetime | The hour of the first run of the application. | Bug 1680783 | | never | 1 |
| glean.validation.pings_submitted | labeled_counter | A count of the pings submitted, by ping type. This metric appears in both the metrics and baseline pings. On the metrics ping, the counts include the number of pings sent since the last metrics ping (including the last metrics ping). On the baseline ping, the counts include the number of pings sent since the last baseline ping (including the last baseline ping). | Bug 1586764 | | never | 1 |

## deletion-request

This is a built-in ping that is assembled out of the box by the Glean SDK. See the Glean SDK documentation for the deletion-request ping.

This ping is sent if empty. This ping includes the client id.

Data reviews for this ping:

Bugs related to this ping:

All Glean pings contain built-in metrics in the ping_info and client_info sections. This ping contains no metrics.

## metrics

This is a built-in ping that is assembled out of the box by the Glean SDK. See the Glean SDK documentation for the metrics ping.

This ping includes the client id.

Data reviews for this ping:

Bugs related to this ping:

Reasons this ping may be sent:

• overdue: The last ping wasn't submitted on the current calendar day, but it's after 4am, so this ping is submitted immediately.

• reschedule: A ping was just submitted. This ping was rescheduled for the next calendar day at 4am.
• today: The last ping wasn't submitted on the current calendar day, but it is still before 4am, so this ping is scheduled to be sent on the current calendar day at 4am.

• tomorrow: The last ping was already submitted on the current calendar day, so this ping is scheduled for the next calendar day at 4am.

• upgrade: This ping was submitted at startup because the application was just upgraded.

All Glean pings contain built-in metrics in the ping_info and client_info sections. In addition to those built-in metrics, the following metrics are added to the ping:

| Name | Type | Description | Data reviews | Extras | Expiration | Data Sensitivity |
| --- | --- | --- | --- | --- | --- | --- |
| glean.database.size | memory_distribution | The size of the database file at startup. | Bug 1656589 | | never | 1 |
| glean.error.io | counter | The number of times we encountered an IO error when writing a pending ping to disk. | Bug 1686233 | | never | 1 |
| glean.error.preinit_tasks_overflow | counter | The number of tasks queued in the pre-initialization buffer. Only sent if the buffer overflows. | Bug 1609482 | | never | 1 |
| glean.time.invalid_timezone_offset | counter | Counts the number of times we encountered an invalid timezone offset when trying to get the current time. A timezone offset is invalid if it is outside [-24h, +24h]. If invalid, a UTC offset is used (+0h). | Bug 1611770 | | 2021-06-30 | 1 |
| glean.upload.deleted_pings_after_quota_hit | counter | The number of pings deleted after the quota for the size of the pending pings directory or number of files is hit. Since quota is only calculated for the pending pings directory, and deletion-request pings live in a different directory, deletion-request pings are never deleted. | Bug 1601550 | | never | 1 |
| glean.upload.discarded_exceeding_pings_size | memory_distribution | The size of pings that exceeded the maximum ping size allowed for upload. | Bug 1597761 | | never | 1 |
| glean.upload.pending_pings | counter | The total number of pending pings at startup. This does not include deletion-request pings. | Bug 1665041 | | never | 1 |
| glean.upload.pending_pings_directory_size | memory_distribution | The size of the pending pings directory upon initialization of Glean. This does not include the size of the deletion request pings directory. | Bug 1601550 | | never | 1 |
| glean.upload.ping_upload_failure | labeled_counter | Counts the number of ping upload failures, by type of failure. This includes failures for all ping types, though the counts appear in the next successfully sent metrics ping. | Bug 1589124 | status_code_4xx, status_code_5xx, status_code_unknown, unrecoverable, recoverable | never | 1 |
| glean.validation.first_run_hour | datetime | The hour of the first run of the application. | Bug 1680783 | | never | 1 |
| glean.validation.foreground_count | counter | On mobile, the number of times the application went to foreground. | Bug 1683707 | | never | 1 |
| glean.validation.pings_submitted | labeled_counter | A count of the pings submitted, by ping type. This metric appears in both the metrics and baseline pings. On the metrics ping, the counts include the number of pings sent since the last metrics ping (including the last metrics ping). On the baseline ping, the counts include the number of pings sent since the last baseline ping (including the last baseline ping). | Bug 1586764 | | never | 1 |

Data categories are defined here.

# Glossary

A glossary with explanations and background for wording used in the Glean project.

## Glean

According to the dictionary, the word “glean” means:

> to gather information or material bit by bit

Glean is the combination of the Glean SDK, the Glean pipeline & Glean tools. See also: Glean - product analytics & telemetry.

## Glean Pipeline

The general data pipeline is the infrastructure that collects, stores, and analyzes telemetry data from our products and logs from various services. See An overview of Mozilla’s Data Pipeline. The Glean pipeline additionally consists of

## Glean SDK

The Glean SDK is the bundle of libraries with support for different platforms.
The source code is available at https://github.com/mozilla/glean. ## Glean SDK book This documentation. ## Glean tools Glean provides additional tools for its usage: ## Metric Metrics are the individual things being measured using Glean. They are defined in metrics.yaml files, also known as registry files. Glean itself provides some metrics out of the box. ## Ping A ping is a bundle of related metrics, gathered in a payload to be transmitted. The Glean SDK provides default pings and allows for custom ping, see Glean Pings. ## Submission "To submit" means to collect & to enqueue a ping for uploading. The Glean SDK stores locally all the metrics set by it or by its clients. Each ping has its own schedule to gather all its locally saved metrics and create a JSON payload with them. This is called "collection". Upon successful collection, the payload is queued for upload, which may not happen immediately or at all (in case network connectivity is not available). Unless the user has defined their own custom pings, they don’t need to worry too much about submitting pings. All the default pings have their scheduling and submission handled by the SDK. ## Measurement window The measurement window of a ping is the time frame in which metrics are being actively gathered for it. The measurement window start time is the moment the previous ping is submitted. In the absence of a previous ping, this time will be the time the application process started. The measurement window end time is the moment the current ping gets submitted. Any new metric recorded after submission will be part of the next ping, so this pings measurement window is over. ## This Week in Glean (TWiG) This Week in Glean is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. # Unreleased changes Full changelog # v36.0.1 (2021-04-09) Full changelog • RLB • Provide an internal-use-only API to pass in raw samples for timing distributions (#1561). 
• Expose Timespan's set_raw to Rust (#1578). • Android • BUGFIX: TimespanMetricType.measure and TimingDistributionMetricType.measure won't get inlined anymore (#1560). This avoids a potential bug where a return used inside the closure would end up not measuring the time. Use return@measure <val> for early returns. • Python • The Glean Python bindings now use rkv's safe mode backend. This should avoid intermittent segfaults in the LMDB backend. • General • Attempt to upload a ping even in the face of IO Errors (#1576). # v36.0.0 (2021-03-16) Full changelog • General • Introduce a new API Ping#test_before_next_submit to run a callback right before a custom ping is submitted (#1507). • The new API exists for all language bindings (Kotlin, Swift, Rust, Python). • Updated glean_parser version to 2.5.0 • Change the fmt- and lint- make commands for consistency (#1526) • The Glean SDK can now produce testing coverage reports for your metrics (#1482). • Python • Update minimal required version of cffi dependency to 1.13.0 (#1520). • Ship wheels for arm64 macOS (#1534). • RLB • Added rate metric type (#1516). • Set internal_metrics::os_version for MacOS, Windows and Linux (#1538) • Expose a function get_timestamp_ms to get a timestamp from a monotonic clock on all supported operating systems, to be used for event timestamps (#1546). • Expose a function to record events with an externally provided timestamp. • iOS • Breaking Change: Event timestamps are now correctly recorded in milliseconds (#1546). • Since the first release event timestamps were erroneously recorded with nanosecond precision (#1549). This is now fixed and event timestamps are in milliseconds. This is equivalent to how it works in all other language bindings. # v35.0.0 (2021-02-22) Full changelog • Android • Glean.initialize can now take a buildInfo parameter to pass in build time version information, and avoid calling out to the Android package manager at runtime. 
A suitable instance is generated by glean_parser in ${PACKAGE_ROOT}.GleanMetrics.GleanBuildInfo.buildInfo (#1495). Not passing in a buildInfo object is still supported, but is deprecated.
• The testGetValue APIs now include a message on the NullPointerException thrown when the value is missing.
• Breaking change: LEGACY_TAG_PINGS is removed from GleanDebugActivity (#1510)
• RLB
• Breaking change: Configuration.data_path is now a std::path::PathBuf (#1493).

# v34.1.0 (2021-02-04)

Full changelog

• General
• A new metric glean.validation.pings_submitted tracks the number of pings sent. It is included in both the metrics and baseline pings.
• iOS
• The metric glean.validation.foreground_count is now sent in the metrics ping (#1472).
• BUGFIX: baseline pings with reason dirty_startup are no longer sent if Glean did not fully initialize in the previous run (#1476).
• Python
• Expose the client activity API (#1481).
• BUGFIX: Publish a macOS wheel again. The previous release failed to build a Python wheel for macOS platforms (#1471).
• RLB
• BUGFIX: baseline pings with reason dirty_startup are no longer sent if Glean shut down cleanly (#1483).

# v34.0.0 (2021-01-29)

Full changelog

• General
• Other bindings detect when RLB is used and try to flush the RLB dispatcher to unblock the Rust API (#1442).
• This is detected automatically, no changes needed for consuming code.
• Add support for the client activity API (#1455). This API is either automatically used or exposed by the language bindings.
• Rename the reason background to inactive for both the baseline and events ping. Rename the reason foreground to active for the baseline ping.
• RLB
• When the pre-init task queue overruns, this is now recorded in the metric glean.error.preinit_tasks_overflow (#1438).
• Expose the client activity API (#1455).
• Send the baseline ping with reason dirty_startup, if needed, at startup.
• Expose all required types directly (#1452).
• Rust consumers will not need to depend on glean-core anymore.
• Android
• BUGFIX: Don't crash the ping uploader when throttled due to reading too large wait time values (#1454).
• Use the client activity API (#1455).
• Update AndroidX dependencies (#1441).
• iOS
• Use the client activity API (#1465). Note: this now introduces a baseline ping with reason active on startup.

# v33.10.3 (2021-01-18)

Full changelog

• Rust
• Upgrade rkv to 0.17 (#1434)

# v33.10.2 (2021-01-15)

Full changelog

• General:
• A new metric glean.error.io has been added, counting the times an IO error happens when writing a pending ping to disk (#1428)
• Android
• A new metric glean.validation.foreground_count was added to the metrics ping (#1418).
• Rust
• BUGFIX: Fix lock order inversion in RLB Timing Distribution (#1431).
• Use RLB types instead of glean-core ones for RLB core metrics. (#1432).
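A lock-order inversion of the kind fixed in #1431 arises when two code paths take the same pair of locks in opposite order. A minimal sketch of the fix, with hypothetical lock names (the real locks live inside glean-core's Rust internals), is to impose a single global acquisition order:

```python
import threading

# Two locks guarding separate pieces of state (hypothetical names).
metrics_lock = threading.Lock()
upload_lock = threading.Lock()

def record_and_enqueue(results):
    # Fix: every code path acquires metrics_lock before upload_lock,
    # so no pair of threads can end up waiting on each other's lock.
    with metrics_lock:
        with upload_lock:
            results.append("recorded")

def flush_pending(results):
    # Same order here; taking upload_lock first in one path and second
    # in another is exactly what TSan flags as lock-order inversion.
    with metrics_lock:
        with upload_lock:
            results.append("flushed")

results = []
t1 = threading.Thread(target=record_and_enqueue, args=(results,))
t2 = threading.Thread(target=flush_pending, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
```

Because both functions follow the same order, the two threads serialize cleanly instead of deadlocking.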

# v33.10.1 (2021-01-06)

Full changelog

No functional changes. v33.10.0 failed to generated iOS artifacts due to broken tests (#1421).

# v33.10.0 (2021-01-06)

Full changelog

• General
• A new metric glean.validation.first_run_hour, analogous to the existing first_run_date but with hour resolution, has been added. Only clients running the app for the first time after this change will report this metric (#1403).
• Rust
• BUGFIX: Don't require mutable references in RLB traits (#1417).
• Python
• Building the Python package from source now works on musl-based Linux distributions, such as Alpine Linux (#1416).

# v33.9.1 (2020-12-17)

Full changelog

• Rust
• BUGFIX: Don't panic on shutdown and avoid running tasks if uninitialized (#1398).
• BUGFIX: Don't fail on empty database files (#1398).
• BUGFIX: Support ping registration before Glean initializes (#1393).

# v33.9.0 (2020-12-15)

Full changelog

• Rust
• Introduce the String List metric type in the RLB. (#1380).
• Introduce the Datetime metric type in the RLB (#1384).
• Introduce the CustomDistribution and TimingDistribution metric type in the RLB (#1394).

# v33.8.0 (2020-12-10)

Full changelog

• Rust
• Introduce the Memory Distribution metric type in the RLB. (#1376).
• Shut down Glean in tests before resetting to make sure they don't mistakenly init Glean twice in parallel (#1375).
• BUGFIX: Fixing 2 lock-order-inversion bugs found by TSan (#1378).
• TSan runs on mozilla-central tests, which found two (potential) bugs where two different locks were acquired in opposite order in different code paths, which could lead to deadlocks in multi-threaded code. As the RLB uses multiple threads by default (e.g. for init and the dispatcher), this could easily become an actual issue.
• Python
• All log messages from the Glean SDK are now on the glean logger, obtainable through logging.getLogger("glean"). (Prior to this, each module had its own logger, for example glean.net.ping_upload_worker).
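With all messages routed through the single "glean" logger, consumers can control Glean's log output using only the standard library:

```python
import logging

# All Glean SDK messages now flow through the "glean" logger, so one
# handler/level setting controls them all.
glean_logger = logging.getLogger("glean")
glean_logger.setLevel(logging.DEBUG)

# Child loggers such as "glean.net.ping_upload_worker" still exist,
# but they propagate to the parent "glean" logger by default.
child = logging.getLogger("glean.net.ping_upload_worker")
```

Setting a level or attaching a handler on the parent logger therefore affects every Glean module at once.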

# v33.7.0 (2020-12-07)

Full changelog

• Rust
• Upgrade rkv to 0.16.0 (no functional changes) (#1355).
• Introduce the Event metric type in the RLB (#1361).
• Python
• Python Linux wheels no longer work on Linux distributions released before 2014 (they now use the manylinux2014 ABI) (#1353).
• Unbreak Python on non-Linux ELF platforms (BSD, Solaris/illumos) (#1363).

# v33.6.0 (2020-12-02)

Full changelog

• Rust
• BUGFIX: Negative timespans for the timespan metric now correctly record an InvalidValue error (#1347).
• Introduce the Timespan metric type in the RLB (#1347).
• Python
• BUGFIX: Network slowness or errors will no longer block the main dispatcher thread, leaving work undone on shutdown (#1350).
• BUGFIX: Lower sleep time on upload waits to avoid being stuck when the main process ends (#1349).

# v33.5.0 (2020-12-01)

Full changelog

• Rust
• Introduce the UUID metric type in the RLB.
• Introduce the Labeled metric type in the RLB (#1327).
• Introduce the Quantity metric type in the RLB.
• Introduce the shutdown API.
• Python
• BUGFIX: Setting a UUID metric to a value that is not in the expected UUID format will now record an error with the Glean error reporting system.

# v33.4.0 (2020-11-17)

Full changelog

• General
• When Rkv's safe mode is enabled (features = ["rkv-safe-mode"] on the glean-core crate) LMDB data is migrated at first start (#1322).
• Rust
• Introduce the Counter metric type in the RLB.
• Introduce the String metric type in the RLB.
• BUGFIX: Track the size of the database directory at startup (#1304).
• Python
• BUGFIX: Fix too-long sleep time in uploader due to unit mismatch (#1325).
• Swift
• BUGFIX: Fix too-long sleep time in uploader due to unit mismatch (#1325).

# v33.3.0 (2020-11-12)

Full changelog

• General
• Do not require default-features on rkv and downgrade bincode (#1317)
• Rust
• Implement the experiments API (#1314)

# v33.2.0 (2020-11-10)

Full changelog

• Python
• Fix building of Linux wheels (#1303)
• Python Linux wheels no longer work on Linux distributions released before 2010. (They now use the manylinux2010 ABI, rather than the manylinux1 ABI.)
• Rust
• Introduce the RLB net module (#1292)

# v33.1.2 (2020-11-04)

Full changelog

• No changes. v33.1.1 was tagged incorrectly.

# v33.1.1 (2020-11-04)

Full changelog

• No changes. v33.1.0 was tagged incorrectly.

# v33.1.0 (2020-11-04)

Full changelog

• General
• Standardize throttle backoff time throughout all bindings. (#1240)
• Update glean_parser to 1.29.0
• Generated code now includes a comment next to each metric containing the name of the metric in its original snake_case form.
• Expose the description of the metric types in glean_core using traits.
• Rust
• Add the dispatcher module (copied over from mozilla-central).
• Allow consumers to specify a custom uploader.
• Android
• Update the JNA dependency from 5.2.0 to 5.6.0
• The glean-gradle-plugin now makes sure that only a single Miniconda installation will happen at the same time to avoid a race condition when multiple components within the same project are using Glean.

# v33.0.4 (2020-09-28)

Full changelog

Note: Previous 33.0.z releases were broken. This release now includes all changes from 33.0.0 to 33.0.3.

• General
• Update glean_parser to 1.28.6
• BUGFIX: Ensure Kotlin arguments are deterministically ordered
• Android
• Breaking change: Updated to the Android Gradle Plugin v4.0.1 and Gradle 6.5.1. Projects using older versions of these components will need to update in order to use newer versions of the Glean SDK.
• Update the Kotlin Gradle Plugin to version 1.4.10.
• Fixed the building of .aar releases on Android so they include the Rust shared objects.

# v33.0.3 (2020-09-25)

Full changelog

• General
• v33.0.2 was tagged incorrectly. This release is just to correct that mistake.

# v33.0.2 (2020-09-25)

Full changelog

• Android
• Fixed the building of .aar releases on Android so they include the Rust shared objects.

# v33.0.1 (2020-09-24)

Full changelog

• General
• Update glean_parser to 1.28.6
• BUGFIX: Ensure Kotlin arguments are deterministically ordered
• Android
• Update the Kotlin Gradle Plugin to version 1.4.10.

# v33.0.0 (2020-09-22)

Full changelog

• Android
• Breaking change: Updated to the Android Gradle Plugin v4.0.1 and Gradle 6.5.1. Projects using older versions of these components will need to update in order to use newer versions of the Glean SDK.

# v32.4.1 (2020-10-01)

Full changelog

• General
• Update glean_parser to 1.28.6
• BUGFIX: Ensure Kotlin arguments are deterministically ordered
• BUGFIX: Transform ping directory size from bytes to kilobytes before accumulating to glean.upload.pending_pings_directory_size (#1236).

# v32.4.0 (2020-09-18)

Full changelog

• General
• Allow using quantity metric type outside of Gecko (#1198)
• Update glean_parser to 1.28.5
• The SUPERFLUOUS_NO_LINT warning has been removed from the glinter. It likely did more harm than good, and makes it hard to make metrics.yaml files that pass across different versions of glean_parser.
• Expired metrics will now produce a linter warning, EXPIRED_METRIC.
• Expiry dates that are more than 730 days (~2 years) in the future will produce a linter warning, EXPIRATION_DATE_TOO_FAR.
• Allow using the Quantity metric type outside of Gecko.
• New parser configs custom_is_expired and custom_validate_expires added. These are both functions that take the expires value of the metric and return a bool. (See Metric.is_expired and Metric.validate_expires). These will allow FOG to provide custom validation for its version-based expires values.
• Add a limit of 250 pending ping files. (#1217).
• Android
• Don't retry the ping uploader when waiting, sleep instead. This avoids a never-ending increase of the backoff time (#1217).
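The custom expiry hooks above each take the metric's expires value and return a bool. A minimal version-based pair, of the kind FOG might use, could look like this (the version scheme and the "never"/"expired" handling are illustrative assumptions):

```python
CURRENT_VERSION = 90  # hypothetical product version, for illustration

def custom_is_expired(expires: str) -> bool:
    """Treat `expires` as a version number instead of a date.

    Passed to glean_parser via the `custom_is_expired` parser config;
    returns True once the product version reaches the stated value.
    """
    if expires in ("never", "expired"):
        return expires == "expired"
    return int(expires) <= CURRENT_VERSION

def custom_validate_expires(expires: str) -> bool:
    """Accept 'never', 'expired', or a plain integer version string."""
    return expires in ("never", "expired") or expires.isdigit()
```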

# v32.3.2 (2020-09-11)

Full changelog

• General
• Track the size of the database file at startup (#1141).
• Submitting a ping with upload disabled no longer shows an error message (#1201).
• BUGFIX: scan the pending pings directories after dealing with upload status on initialization. This is important because, when upload is disabled, we delete any outstanding non-deletion ping file; if we scanned the pending pings folder before doing that, we might end up sending pings that should have been discarded. (#1205)
• iOS
• Disabled code coverage in release builds (#1195).
• Python
• Glean now ships a source package to pip install on platforms where wheels aren't provided.

# v32.3.1 (2020-09-09)

Full changelog

• Python
• Fixed the release process to generate all wheels (#1193).

# v32.3.0 (2020-08-27)

Full changelog

• Android
• Handle ping registration off the main thread. This removes a potential blocking call (#1132).
• iOS
• Handle ping registration off the main thread. This removes a potential blocking call (#1132).
• Glean for iOS is now being built with Xcode 12.0.0 (Beta 5) (#1170).

# v32.2.0 (2020-08-25)

Full changelog

• General
• Move logic to limit the number of retries on ping uploading "recoverable failures" to glean-core. (#1120)
• The functionality to limit the number of retries in these cases was introduced to the Glean SDK in v31.1.0. The work done now was to move that logic to the glean-core in order to avoid code duplication throughout the language bindings.
• Update glean_parser to v1.28.3
• BUGFIX: Generate valid C# code when using Labeled metric types.
• BUGFIX: Support HashSet and Dictionary in the C# generated code.
• Add a 10MB quota to the pending pings storage. (#1100)
• C#
• Add support for the String List metric type (#1108).
• Enable generating the C# APIs using the glean_parser (#1092).
• Add support for the EventMetricType in C# (#1129).
• Add support for the TimingDistributionMetricType in C# (#1131).
• Implement the experiments API in C# (#1145).
• This is the last release with C# language bindings changes. Reach out to the Glean SDK team if you want to use the C# bindings in a new product and require additional features.
• Python
• BUGFIX: Limit the number of retries for 5xx server errors on ping uploads (#1120).
• These kinds of failures yield a "recoverable error", which means the ping gets re-enqueued. That can cause infinite loops on the ping upload worker. For Python we were incorrectly only limiting the number of retries for I/O errors, another type of "recoverable error".
• kebab-case ping names are now converted to snake_case so they are available on the object returned by load_pings (#1122).
• For performance reasons, the glinter is no longer run as part of glean.load_metrics(). We recommend running glinter as part of your project's continuous integration instead (#1124).
• A measure context manager for conveniently measuring runtimes has been added to TimespanMetricType and TimingDistributionMetricType (#1126).
• Networking errors have changed from ERROR level to DEBUG level so they aren't displayed by default (#1166).
• iOS
• Changed logging to use OSLog rather than a mix of NSLog and print. (#1133)
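The kebab-case-to-snake_case conversion mentioned for load_pings amounts to a one-line transformation, since hyphens are the only character that makes a ping name an invalid Python attribute name. A sketch:

```python
def kebab_to_snake(ping_name: str) -> str:
    # Ping names like "deletion-request" aren't valid Python attribute
    # names, so load_pings exposes them in snake_case instead.
    return ping_name.replace("-", "_")
```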


# v32.1.0 (2020-08-17)

Full changelog

• General
• The upload rate limiter has been changed from 10 pings per minute to 15 pings per minute.
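A fixed-window limiter like the one described (15 pings per minute) can be sketched as follows; the class name and structure are illustrative, not glean-core's actual implementation:

```python
import time

class PingRateLimiter:
    """Minimal sketch of a fixed-window rate limiter (hypothetical
    class; the real limiter lives inside glean-core)."""

    def __init__(self, max_pings=15, interval_s=60.0, now=time.monotonic):
        self.max_pings = max_pings
        self.interval_s = interval_s
        self.now = now
        self.window_start = now()
        self.count = 0

    def try_acquire(self) -> bool:
        current = self.now()
        # Start a fresh window once the interval has elapsed.
        if current - self.window_start >= self.interval_s:
            self.window_start = current
            self.count = 0
        if self.count < self.max_pings:
            self.count += 1
            return True
        return False  # over quota: caller should hold the ping back

limiter = PingRateLimiter()
allowed = sum(limiter.try_acquire() for _ in range(20))
```

Twenty back-to-back upload attempts within one window would let exactly fifteen through.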

# v32.0.0 (2020-08-03)

Full changelog

• General
• Limit ping request body size to 1MB. (#1098)
• iOS
• Implement ping tagging (i.e. the X-Source-Tags header) through custom URL (#1100).
• C#
• Add support for Labeled Strings and Labeled Booleans.
• Add support for the Counter metric type and Labeled Counter.
• Add support for the MemoryDistributionMetricType.
• Python
• Breaking change: data_dir must always be passed to Glean.initialize. Prior to this, a missing value would store Glean data in a temporary directory.
• Logging messages from the Rust core are now sent through Python's standard library logging module. Therefore all logging in a Python application can be controlled through the logging module interface.
• Android
• BUGFIX: Require activities executed via GleanDebugView to be exported.
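The 1MB request body limit from the General section can be sketched as a size check on the serialized payload. Payloads are gzip-compressed since v30.1.0; that the limit applies to the compressed body, and the discard-on-oversize behavior, are assumptions here, not confirmed details:

```python
import gzip
import json

MAX_BODY_BYTES = 1024 * 1024  # 1MB request body limit

def prepare_body(payload: dict):
    """Serialize and gzip a ping payload, enforcing the size limit.

    Sketch only; the real checks happen inside glean-core.
    """
    body = gzip.compress(json.dumps(payload).encode("utf-8"))
    if len(body) > MAX_BODY_BYTES:
        return None  # assumed: oversized pings are not uploaded
    return body

body = prepare_body({"ping_info": {"reason": "active"}})
```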

# v31.6.0 (2020-07-24)

Full changelog

• General
• Implement JWE metric type (#1073, #1062).
• Due to Glean's asynchronous initialization the return value can be incorrect. Applications should not rely on Glean's internal state. Upload enabled status should be tracked by the application and communicated to Glean if it changes. Note: The method was removed from the C# and Python implementation.
• Update glean_parser to v1.28.1
• The glean_parser linting was leading consumers astray by incorrectly suggesting that deletion-request be instead deletion_request when used for send_in_pings. This was causing metrics intended for the deletion-request ping to not be included when it was collected and submitted. Consumers that are sending metrics in the deletion-request ping will need to update the send_in_pings value in their metrics.yaml to correct this.
• Fixes a bug in doc rendering.

# v31.5.0 (2020-07-22)

Full changelog

• General
• Implement ping tagging (i.e. the X-Source-Tags header) (#1074). Note that this is not yet implemented for iOS.
• String values that are too long now record invalid_overflow rather than invalid_value through the Glean error reporting mechanism. This affects the string, event and string list metrics.
• metrics.yaml files now support a data_sensitivity field on all metrics, for specifying the type of data collected in the field.
• Python
• The Python unit tests no longer send telemetry to the production telemetry endpoint.
• BUGFIX: If an application_version isn't provided to Glean.initialize, the client_info.app_display_version metric is set to "Unknown", rather than resulting in invalid pings.
• Android
• Allow defining which Activity to run next when using the GleanDebugActivity.
• iOS
• BUGFIX: The memory unit is now correctly set on the MemoryDistribution metric type in Swift in generated metrics code.
• C#
• Metrics can now be generated from the metrics.yaml files.
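The invalid_overflow behavior for over-long strings can be sketched as truncate-plus-record-error; the MAX_STRING_LENGTH value and the error list are illustrative, since the real limits and error bookkeeping live in glean-core:

```python
MAX_STRING_LENGTH = 100  # hypothetical limit, for illustration only

def set_string(value: str, errors: list) -> str:
    """Sketch of overflow handling: a too-long value records
    invalid_overflow (not invalid_value) and is truncated."""
    if len(value) > MAX_STRING_LENGTH:
        errors.append("invalid_overflow")
        value = value[:MAX_STRING_LENGTH]
    return value

errors = []
stored = set_string("x" * 150, errors)
```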

# v31.4.1 (2020-07-20)

Full changelog

• General
• BUGFIX: fix int32 to ErrorType mapping. The InvalidOverflow had a value mismatch between glean-core and the bindings. This would only be a problem in unit tests. (#1063)
• Android
• Enable propagating options to the main product Activity when using the GleanDebugActivity.
• BUGFIX: Fix the metrics ping collection for startup pings, such as reason=upgrade, to occur in the same thread/task as Glean initialization. Otherwise it would be collected after application-lifetime metrics that should be in the ping, such as experiments, are cleared. (#1069)

# v31.4.0 (2020-07-16)

Full changelog

• General
• Enable debugging features through environment variables. (#1058)

# v31.3.0 (2020-07-10)

Full changelog

• General
• Remove locale from baseline ping. (1609968, #1016)
• Persist X-Debug-ID header on store ping. (1605097, #1042)
• BUGFIX: raise an error if Glean is initialized with an empty string as the application_id (#1043).
• Python
• BUGFIX: correctly set the app_build metric to the newly provided application_build_id initialization option (#1031).
• The Python bindings now report networking errors in the glean.upload.ping_upload_failure metric (like all the other bindings) (#1039).
• Python default upgraded to Python 3.8 (#995)
• iOS
• BUGFIX: Make LabeledMetric subscript public, so consuming applications can actually access it (#1027)

# v31.2.3 (2020-06-29)

Full changelog

• General
• Move debug view tag management to the Rust core. (1640575, #998)
• BUGFIX: Fix mismatch in events keys and values by using glean_parser version 1.23.0.

# v31.2.2 (2020-06-26)

Full changelog

• Android
• BUGFIX: Compile dependencies with NDEBUG to avoid linking unavailable symbols. This fixes a crash due to a missing stderr symbol on older Android (#1020)

# v31.2.1 (2020-06-25)

Full changelog

• Python
• BUGFIX: Core metrics are now present in every ping, even if submit is called before initialize has a chance to complete. (#1012)

# v31.2.0 (2020-06-24)

Full changelog

• General
• Android
• BUGFIX: baseline pings with reason "dirty startup" are no longer sent if Glean did not fully initialize in the previous run (#996).
• Python
• Support for Python 3.5 was dropped (#987).
• Python wheels are now shipped with Glean release builds, resulting in much smaller libraries (#1002)
• The Python bindings now use locale.getdefaultlocale() rather than locale.getlocale() to determine the locale (#1004).

# v31.1.2 (2020-06-23)

Full changelog

• General
• BUGFIX: Correctly format the date and time in the Date header (#993).
• Python
• BUGFIX: Additional time is taken at shutdown to make sure pings are sent and telemetry is recorded. (1646173, #983)
• BUGFIX: Glean will run on the main thread when running in a multiprocessing subprocess (#986).
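The Date header fix concerns the IMF-fixdate format required by RFC 7231 for HTTP date headers, e.g. "Tue, 23 Jun 2020 00:00:00 GMT". Python's standard library can produce it directly (a sketch, not the SDK's actual code):

```python
from email.utils import formatdate

def date_header(timestamp: float) -> str:
    # usegmt=True yields the RFC 7231 IMF-fixdate form, with a literal
    # "GMT" zone instead of a numeric offset.
    return formatdate(timeval=timestamp, usegmt=True)

# 1592870400.0 is 2020-06-23 00:00:00 UTC.
header = date_header(1592870400.0)
```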

# v31.1.1 (2020-06-12)

Full changelog

• Android
• Dropping the version requirement for lifecycle extensions down again. Upping the required version caused problems in A-C.

# v31.1.0 (2020-06-11)

Full changelog

• General:
• The regex crate is no longer required, making the Glean binary smaller (#949)
• Record upload failures into a new metric (#967)
• Log FFI errors as actual errors (#935)
• Limit the number of upload retries in all implementations (#953, #968)
• Python

# v31.0.2 (2020-05-29)

Full changelog

• Rust
• Fix list of included files in published crates

# v31.0.1 (2020-05-29)

Full changelog

• Rust
• Relax version requirement for flate2 for compatibility reasons

# v31.0.0 (2020-05-28)

Full changelog

• General:

• The version of glean_parser has been upgraded to v1.22.0
• A maximum of 10 extra_keys is now enforced for event metric types.
• Breaking change: (Swift only) Combine all metrics and pings into a single generated file Metrics.swift.
• For Swift users this requires changing the list of output files for the sdk_generator.sh script. It now only needs to include the single file Generated/Metrics.swift.
• Python:

• BUGFIX: lifetime: application metrics are no longer recorded as lifetime: user.
• BUGFIX: glean-core is no longer crashing when calling uuid.set with invalid UUIDs.
• Most of the work in Glean.initialize happens on a worker thread and no longer blocks the main thread.
• Rust:

• Expose Datetime types to Rust consumers.

# v30.1.0 (2020-05-22)

Full changelog

• Android & iOS
• Ping payloads are now compressed using gzip.
• iOS
• Glean.initialize is now a no-op if called from an embedded extension. This means that Glean will only run in the base application process in order to prevent extensions from behaving like separate applications with different client ids from the base application. Applications are responsible for ensuring that extension metrics are only collected within the base application.
• Python
• lifetime: application metrics are now cleared after the Glean-owned pings are sent, after the product starts.
• Glean Python bindings now build in a native Windows environment.
• BUGFIX: MemoryDistributionMetric now parses correctly in metrics.yaml files.
• BUGFIX: Glean will no longer crash if run as part of another library's coverage testing.

# v30.0.0 (2020-05-13)

Full changelog

• General:
• We completely replaced how the upload mechanism works. glean-core (the Rust part) now controls all upload and coordinates the platform side with its own internals. All language bindings implement ping uploading around a common API and protocol. There is no change for users of Glean; the language bindings for Android and iOS have already been adapted to the new mechanism.
• Expose RecordedEvent and DistributionData types to Rust consumers (#876)
• Log crate version at initialize (#873)
• Android:
• iOS:

# v29.1.2 (2021-01-26)

Full changelog

This is an iOS release only, built with Xcode 11.7

Otherwise no functional changes.

• iOS
• Build with Xcode 11.7 (#1457)

# v29.1.1 (2020-05-22)

Full changelog

• Android
• BUGFIX: Fix a race condition that leads to a ConcurrentModificationException. Bug 1635865

# v29.1.0 (2020-05-11)

Full changelog

• General:
• The version of glean_parser has been upgraded to v1.20.4
• BUGFIX: yamllint errors are now reported using the correct file name.
• The minimum and maximum values of a timing distribution can now be controlled by the time_unit parameter. See bug 1630997 for more details.

# v29.0.0 (2020-05-05)

Full changelog

• General:
• The version of glean_parser has been upgraded to v1.20.2 (#827):
• Breaking change: glinter errors found during code generation will now return an error code.
• glean_parser now produces a linter warning when user lifetime metrics are set to expire. See bug 1604854 for additional context.
• Android:
• PingType.submit() can now be called without a null argument by Java consumers (#853).
• Python:
• BUGFIX: Fixed a race condition in the atexit handler, that would have resulted in the message "No database found" (#854).
• The Glean FFI header is now parsed at build time rather than runtime. Relevant for packaging in PyInstaller, the wheel no longer includes glean.h and adds _glean_ffi.py (#852).
• The minimum versions of many secondary dependencies have been lowered to make the Glean SDK compatible with more environments.
• Dependencies that depend on the version of Python being used are now specified using the Declaring platform specific dependencies syntax in setuptools. This means that more recent versions of dependencies are likely to be installed on Python 3.6 and later, and unnecessary backport libraries won't be installed on more recent Python versions.
• iOS:
• Glean for iOS is now being built with Xcode 11.4.1 (#856)

# v28.0.0 (2020-04-23)

Full changelog

• General:
• The baseline ping is now sent when the application goes to foreground, in addition to background and dirty-startup.
• Python:
• BUGFIX: The ping uploader will no longer display a trace back when the upload fails due to a failed DNS lookup, network outage, or related issues that prevent communication with the telemetry endpoint.
• The dependency on inflection has been removed.
• The Python bindings now use subprocess rather than multiprocessing to perform ping uploading in a separate process. This should be more compatible on all of the platforms Glean supports.
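The subprocess-based approach can be sketched as spawning a plain child Python process for upload work instead of a multiprocessing worker, which avoids fork-related incompatibilities on some platforms. The command below is a stand-in, not Glean's actual worker invocation:

```python
import subprocess
import sys

# Stand-in for the upload worker: in reality the child would import an
# uploader module and drain the pending-pings queue.
result = subprocess.run(
    [sys.executable, "-c", "print('uploaded 1 ping')"],
    capture_output=True,
    text=True,
)
```

Because the child is an ordinary process started from the interpreter binary, the same code path works on Linux, macOS, and Windows without relying on fork semantics.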

# v27.1.0 (2020-04-09)

Full changelog

• General:
• BUGFIX: baseline pings sent at startup with the dirty_startup reason will now include application lifetime metrics (#810)
• iOS:
• Breaking change: Change Glean iOS to use Application Support directory #815. No migration code is included. This will reset collected data if integrated without migration. Please contact the Glean SDK team if this affects you.
• Python
• BUGFIX: Fixed a race condition between uploading pings and deleting the temporary directory on shutdown of the process.

# v27.0.0 (2020-04-08)

Full changelog

• General
• Glean will now detect when the upload enabled flag changes outside of the application, for example due to a change in a config file. This means that if upload is disabled while the application wasn't running (e.g. between the runs of a Python command using the Glean SDK), the database is correctly cleared and a deletion request ping is sent. See #791.
• The events ping now includes a reason code: startup, background or max_capacity.
• iOS:
• BUGFIX: A bug where the metrics ping is sent immediately at startup on the last day of the month has been fixed.
• Glean for iOS is now being built with Xcode 11.4.0
• The measure convenience function on timing distributions and time spans will now cancel the timing if the measured function throws, then rethrow the exception (#808)
• Broken doc generation has been fixed (#805).
• Kotlin
• The measure convenience function on timing distributions and time spans will now cancel the timing if the measured function throws, then rethrow the exception (#808)
• Python:
• Glean will now wait at application exit for up to one second to let its worker thread complete.
• Ping uploading now happens in a separate child process by default. This can be disabled with the allow_multiprocessing configuration option.

# v26.0.0 (2020-03-27)

Full changelog

• General:
• The version of glean_parser has been updated to 1.19.0:
• Breaking change: The regular expression used to validate labels is stricter and more correct.
• State whether the ping includes the client id.
• glean_parser now makes it easier to write external translation functions for different language targets.
• BUGFIX: glean_parser now works on 32-bit Windows.
• Android:
• gradlew clean will no longer remove the Miniconda installation in ~/.gradle/glean. Therefore clean can be used without reinstalling Miniconda afterward every time.
• Python:
• Breaking Change: The glean.util and glean.hardware modules, which were unintentionally public, have been made private.
• Most Glean work and I/O is now done on its own worker thread. This brings the parallelism of Python in line with the other platforms.
• The timing distribution, memory distribution, string list, labeled boolean and labeled string metric types are now supported in Python (#762, #763, #765, #766)

# v25.1.0 (2020-02-26)

Full changelog

• Python:
• The Boolean, Datetime and Timespan metric types are now supported in Python (#731, #732, #737)
• Make public, document and test the debugging features (#733)

# v25.0.0 (2020-02-17)

Full changelog

• General:
• ping_type is not included in the ping_info any more (#653), the pipeline takes the value from the submission URL.
• The version of glean_parser has been upgraded to 1.18.2:
• Breaking Change (Java API) Have the metrics names in Java match the names in Kotlin. See Bug 1588060.
• The reasons a ping is sent are now included in the generated markdown documentation.
• Android:
• The Glean.initialize method runs mostly off the main thread (#672).
• Labels in labeled metrics now have a correct, and slightly stricter, regular expression. See label format for more information.
• iOS:
• The baseline ping will now include reason codes that indicate why it was submitted. If an unclean shutdown is detected (e.g. due to force-close), this ping will be sent at startup with reason: dirty_startup.
• Per Bug 1614785, the clearing of application lifetime metrics now occurs after the metrics ping is sent in order to preserve values meant to be included in the startup metrics ping.
• initialize() now performs most of its work in a background thread.
• Python:
• When the pre-init task queue overruns, this is now recorded in the metric glean.error.preinit_tasks_overflow.
• glinter warnings are printed to stderr when loading metrics.yaml and pings.yaml files.
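Label validation of the kind described for labeled metrics can be sketched with a simplified pattern. This is NOT the actual Glean regex (see the label format documentation for the authoritative rules); it only illustrates the dotted snake_case shape:

```python
import re

# Simplified stand-in for the label format: dotted snake_case
# components, each starting with a lowercase letter or underscore.
LABEL_RE = re.compile(r"^[a-z_][a-z0-9_]*(\.[a-z_][a-z0-9_]*)*$")

def is_valid_label(label: str) -> bool:
    # Labels that fail validation are recorded under the special
    # "__other__" label by Glean rather than being dropped silently.
    return LABEL_RE.fullmatch(label) is not None
```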

# v24.2.0 (2020-02-11)

Full changelog

• General:
• Add locale to client_info section.
• Deprecation Warning Since locale is now in the client_info section, the one in the baseline ping (glean.baseline.locale) is redundant and will be removed by the end of the quarter.
• Drop the Glean handle and move state into glean-core (#664)
• If an experiment includes no extra fields, it will no longer include {"extra": null} in the JSON payload.
• Support for ping reason codes was added.
• The metrics ping will now include reason codes that indicate why it was submitted.
• The version of glean_parser has been upgraded to 1.17.3
• Android:
• Collections performed before initialization (preinit tasks) are now dispatched off the main thread during initialization.
• The baseline ping will now include reason codes that indicate why it was submitted. If an unclean shutdown is detected (e.g. due to force-close), this ping will be sent at startup with reason: dirty_startup.
• iOS:
• Collections performed before initialization (preinit tasks) are now dispatched off the main thread and not awaited during initialization.

# v24.1.0 (2020-01-16)

Full changelog

• General:
• Stopping a non started measurement in a timing distribution will now be reported as an invalid_state error.
• Android:

# v24.0.0 (2020-01-14)

Full changelog

• General:
• Breaking Change An enableUpload parameter has been added to the initialize() function. This removes the requirement to call setUploadEnabled() prior to calling the initialize() function.
• Android:
• The metrics ping scheduler will now only send metrics pings while the application is running. The application will no longer "wake up" at 4am using the Work Manager.
• The code for migrating data from Glean SDK before version 19 was removed.
• When using the GleanTestLocalServer rule in instrumented tests, pings are immediately flushed by the WorkManager and will reach the test endpoint as soon as possible.
• Python:
• The Python bindings now support Python 3.5 - 3.7.
• The Python bindings are now distributed as a wheel on Linux, macOS and Windows.

# v23.0.1 (2020-01-08)

Full changelog

• Android:
• BUGFIX: The Glean Gradle plugin will now work if an app or library doesn't have a metrics.yaml or pings.yaml file.
• iOS:
• The released iOS binaries are now built with Xcode 11.3.

# v23.0.0 (2020-01-07)

Full changelog

• Python bindings:
• Support for events and UUID metrics was added.
• Android:
• The Glean Gradle Plugin correctly triggers docs and API updates when registry files change, without requiring them to be deleted.
• parseISOTimeString has been made 4x faster. This had an impact on Glean migration and initialization.
• Metrics with lifetime: application are now cleared when the application is started, after startup Glean SDK pings are generated.
• All platforms:
• The public method PingType.send() (on all platforms) has been deprecated and renamed to PingType.submit().
• Renamed the deletion_request ping to deletion-request after a glean_parser update.

# v22.1.0 (2019-12-17)

Full changelog

• Add InvalidOverflow error to TimingDistributions (#583)

# v22.0.0 (2019-12-05)

Full changelog

• Add a crate for the nice control API (#542)
• Pending deletion_request pings are resent on start (#545)

# v21.3.0 (2019-12-03)

Full changelog

• Timers are reset when disabled. This avoids recording timespans across disabled/enabled toggling (#495).
• Add a new flag to pings: send_if_empty (#528)
• Implement the deletion request ping in Glean (#526)

# v21.2.0 (2019-11-21)

Full changelog

• All platforms

• The experiments API is no longer ignored before the Glean SDK is initialized. Calls are recorded and played back once the Glean SDK is initialized.

• String list items were being truncated to 20, rather than 50, bytes when using .set() (rather than .add()). This has been corrected, but it may result in changes in the sent data if using string list items longer than 20 bytes.
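The corrected truncation behavior can be illustrated with a small stand-alone sketch. This is plain Python for illustration only, not the Glean SDK implementation; the constant and function names are hypothetical:

```python
# Illustrative sketch of the corrected behavior: string list items are
# capped at 50 bytes (previously they were wrongly capped at 20 bytes
# when using .set()). Not the actual Glean SDK code.
MAX_STRING_LIST_ITEM_BYTES = 50

def truncate_item(item: str) -> str:
    """Truncate a string to at most 50 UTF-8 bytes."""
    encoded = item.encode("utf-8")
    if len(encoded) <= MAX_STRING_LIST_ITEM_BYTES:
        return item
    # Drop bytes beyond the limit, discarding any partially cut character.
    return encoded[:MAX_STRING_LIST_ITEM_BYTES].decode("utf-8", errors="ignore")

print(len(truncate_item("a" * 60)))
```

Items longer than 20 bytes that were previously cut short are now preserved up to the 50-byte limit, which is why sent data may change after this release.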

# v21.1.1 (2019-11-20)

Full changelog

• Android:

• Use the LifecycleEventObserver interface, rather than the DefaultLifecycleObserver interface, since the latter isn't compatible with old SDK targets.

# v21.1.0 (2019-11-20)

Full changelog

• Android:

• Two new metrics were added to investigate sending of metrics and baseline pings. See bug 1597980 for more information.

• Glean's two lifecycle observers were refactored to avoid the use of reflection.

• All platforms:

• Timespans will no longer record an error when stopped after upload has been disabled.

# v21.0.0 (2019-11-18)

Full changelog

• Android:

• The GleanTimerId can now be accessed in Java and is no longer a typealias.

• Fixed a bug where the metrics ping was getting scheduled twice on startup.

• All platforms

• Bumped glean_parser to version 1.11.0.

# v20.2.0 (2019-11-11)

Full changelog

• In earlier 20.x.x releases, glean-ffi was built against the wrong version of glean-core.

# v20.1.0 (2019-11-11)

Full changelog

• The version of Glean is included in the Glean Gradle plugin.

• When constructing a ping, events are now sorted by their timestamp. In practice, it rarely happens that event timestamps are unsorted to begin with, but this guards against a potential race condition and incorrect usage of the lower-level API.
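The sorting guarantee above can be sketched in a few lines of plain Python. This is an illustration of the invariant, not Glean's actual ping-assembly code, and the field names are hypothetical:

```python
# Illustrative sketch: events are ordered by timestamp before being
# serialized into the ping payload, guarding against out-of-order
# recording via the lower-level API.
events = [
    {"name": "second", "timestamp": 320},
    {"name": "first", "timestamp": 150},
    {"name": "third", "timestamp": 475},
]

# Sort in place by the event timestamp.
events.sort(key=lambda event: event["timestamp"])

print([event["name"] for event in events])  # ['first', 'second', 'third']
```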

# v20.0.0 (2019-11-11)

Full changelog

• Glean users should now use a Gradle plugin rather than a Gradle script. (#421) See integrating with the build system docs for more information.

• In Kotlin, metrics that can record errors now have a new testing method, testGetNumRecordedErrors. (#401)

# v19.1.0 (2019-10-29)

Full changelog

• Fixed a crash calling start on a timing distribution metric before Glean is initialized. Timings are always measured, but only recorded when upload is enabled (#400)
• BUGFIX: When the Debug Activity is used to log pings, each ping is now logged only once (#407)
• New invalid state error, used in timespan recording (#230)
• Add an Android crash instrumentation walk-through (#399)
• Fix crashing bug by avoiding assert-printing in LMDB (#422)
• Upgrade dependencies, including rkv (#416)

# v19.0.0 (2019-10-22)

Full changelog

First stable release of Glean in Rust (aka glean-core). This is a major milestone in using a cross-platform implementation of Glean on the Android platform.

• Fix round-tripping of timezone offsets in dates (#392)
• Handle dynamic labels in coroutine tasks (#394)
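The round-tripping property fixed in #392 can be demonstrated with Python's standard library. This only illustrates the invariant; the fix itself lives in glean-core's Rust code:

```python
from datetime import datetime, timedelta, timezone

# Illustration of the invariant: serializing a date and parsing it back
# must preserve the timezone offset rather than normalizing to UTC.
original = datetime(2019, 10, 22, 12, 30, tzinfo=timezone(timedelta(hours=2)))

serialized = original.isoformat()        # '2019-10-22T12:30:00+02:00'
parsed = datetime.fromisoformat(serialized)  # requires Python 3.7+

# Both the instant and the offset survive the round trip.
assert parsed == original
assert parsed.utcoffset() == timedelta(hours=2)
print(serialized)
```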

# v0.0.1-TESTING6 (2019-10-18)

Full changelog

• Ignore dynamically stored labels if Glean is not initialized (#374)
• Make sure ProGuard doesn't remove Glean classes from the app (#380)
• Keep track of pings in all modes (#378)
• Add jnaTest dependencies to the forUnitTest JAR (#382)

# v0.0.1-TESTING5 (2019-10-10)

Full changelog

• Upgrade to NDK r20 (#365)

# v0.0.1-TESTING4 (2019-10-09)

Full changelog

• Take DST into account when converting a calendar into its items (#359)
• Include a macOS library in the forUnitTests builds (#358)
• Keep track of all registered pings in test mode (#363)

# v0.0.1-TESTING3 (2019-10-08)

Full changelog

• Allow configuration of Glean through the GleanTestRule
• Bump glean_parser version to 1.9.2

# v0.0.1-TESTING2 (2019-10-07)

Full changelog

• Include a Windows library in the forUnitTests builds

# v0.0.1-TESTING1 (2019-10-02)

Full changelog

### General

First testing release.

# This Week in Glean (TWiG)

“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as they are inspired by Glean.