Introduction

The Glean SDKs are modern cross-platform telemetry client libraries and are a part of the Glean project.

Glean logo

The Glean SDKs are available for several programming languages and development environments. Each SDK aims to contain the same group of features with similar, but idiomatic APIs.

To learn more about each SDK, refer to the SDKs overview page.

To get started adding Glean to your project, choose one of the following guides:

  • Kotlin
    • Get started adding Glean to an Android application or library.
  • Swift
    • Get started adding Glean to an iOS application or library.
  • Python
    • Get started adding Glean to any Python project.
  • Rust
    • Get started adding Glean to any Rust project or library.
  • JavaScript
    • Get started adding Glean to a website, web extension or Node.js project.
  • QML
    • Get started adding Glean to a Qt/QML application or library.
  • Server
    • Get started adding Glean to a server-side application.

For development documentation on the Glean SDK, refer to the Glean SDK development book.

Sections

User Guides

This section of the book contains step-by-step guides and essays detailing how to achieve specific tasks with each Glean SDK.

It contains guides on the first steps of integrating Glean into your project, choosing the right metric type for you, debugging products that use Glean and Glean's built-in error reporting mechanism.

If you want to start using Glean to report data, this is the section you should read.

API Reference

This section of the book contains reference pages for Glean’s user facing APIs.

If you are looking for information on a specific Glean API, this is the section you should check out.

SDK Specific Information

This section contains guides and essays regarding specific usage information and possibilities in each Glean SDK.

Check out this section for more information on the SDK you are using.

Appendix

Glossary

In this book we use a lot of Glean specific terminology. In the glossary, we go through many of the terms used throughout this book and describe exactly what we mean when we use them.

Changelog

This section contains detailed notes about changes in Glean, per release.

This Week in Glean

“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.

Contribution Guidelines

This section contains detailed information on where and how to include new content to this book.

Contact

To contact the Glean team you can:

License

The Glean SDKs Source Code is subject to the terms of the Mozilla Public License v2.0. You can obtain a copy of the MPL at https://mozilla.org/MPL/2.0/.

Adding Glean to your project

This page describes the steps for adding Glean to a project. It does not include the steps for adding new metrics or pings to an existing Glean integration. If that is what you are looking for, refer to the Adding new metrics or the Adding new pings guide.

Glean integration checklist

The Glean integration checklist can help to ensure your Glean SDK-using product is meeting all of the recommended guidelines.

Products (applications or libraries) using a Glean SDK to collect telemetry must:

  1. Integrate the Glean SDK into the build system. Since the Glean SDK does some code generation for your metrics at build time, this requires a few more steps than just adding a library.

  2. Go through data review process for all newly collected data.

  3. Ensure that telemetry coming from automated testing or continuous integration is either not sent to the telemetry server or tagged with the automation tag using the sourceTag feature.

  4. At least one week before releasing your product, enable your product's application id and metrics to be ingested by the data platform (and, as a consequence, indexed by the Glean Dictionary).

Important consideration for libraries: For libraries that are adding Glean, you will need to indicate which applications use the library as a dependency so that the library metrics get correctly indexed and added to the products that consume the library. If the library is added to a new product later, it is necessary to file a new data engineering bug to add it as a dependency to that product in order for the library metrics to be collected along with the data from the new product.

Additionally, applications (but not libraries) must:

  1. Request a data review to add Glean to your application (since it can send data out of the box).

  2. Initialize Glean as early as possible at application startup.

  3. Provide a way for users to turn data collection off (e.g. providing settings to control Glean.setUploadEnabled()). The exact method used is application-specific.

Looking for an integration guide?

Step-by-step tutorials for each supported language/platform can be found in the specific integration guides:

Adding Glean to your Kotlin project

This page provides a step-by-step guide on how to integrate the Glean library into a Kotlin project.

Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

Currently, this SDK only supports the Android platform.

Setting up the dependency

The Glean Kotlin SDK is published on maven.mozilla.org. To use it, you need to add the following to your project's top-level build file, in the allprojects block (see e.g. Glean SDK's own build.gradle):

repositories {
    maven {
        url "https://maven.mozilla.org/maven2"
    }
}

Each module that uses the Glean Kotlin SDK needs to specify it in its build file, in the dependencies block. Add this to your Gradle configuration:

implementation "org.mozilla.components:service-glean:{latest-version}"
Pick the correct version

The {latest-version} placeholder in the snippet above should be replaced with the version of Android Components used by the project.

The Glean Kotlin SDK is released as part of android-components. Therefore, it follows android-components' versions. The android-components release page can be used to determine the latest version.

For example, if version 33.0.0 is used, then the include directive becomes:

implementation "org.mozilla.components:service-glean:33.0.0"
Size impact on the application APK

The Glean Kotlin SDK APK ships binary libraries for all the supported platforms. Each library file measures about 600KB. If the final APK size of the consuming project is a concern, please enable ABI splits.
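If APK size is a concern, ABI splits can be enabled in the consuming application's build file so that each APK only carries the native library for one platform. A minimal sketch for the android block of a module's build.gradle (the ABI list is an example; adjust it to the platforms you actually ship):

```groovy
android {
    splits {
        abi {
            // Build one APK per ABI instead of bundling every native library.
            enable true
            reset()
            include "x86", "x86_64", "armeabi-v7a", "arm64-v8a"
        }
    }
}
```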

Dependency for local testing

Due to its use of a native library, you will need additional setup to allow local testing.

First add a new configuration to your build.gradle, just before your dependencies:

configurations {
    jnaForTest
}

Then add the following lines to your dependencies block:

jnaForTest "net.java.dev.jna:jna:5.6.0@jar"
testImplementation files(configurations.jnaForTest.copyRecursive().files)
testImplementation "org.mozilla.telemetry:glean-forUnitTests:${project.ext.glean_version}"

Note: Always use org.mozilla.telemetry:glean-forUnitTests. This package is standalone and its version will be exported from the main Glean package automatically.

Setting up metrics and pings code generation

In order for the Glean Kotlin SDK to generate an API for your metrics, two Gradle plugins must be included in your build: the Glean Gradle plugin and the JetBrains Python envs plugin.

The Glean Gradle plugin is distributed through Mozilla's Maven, so we need to tell your build where to look for it by adding the following to the top of your build.gradle:

buildscript {
    repositories {
        // Include the next clause if you are tracking snapshots of android components
        maven {
            url "https://snapshots.maven.mozilla.org/maven2"
        }
        maven {
            url "https://maven.mozilla.org/maven2"
        }
    }

    dependencies {
        classpath "org.mozilla.components:tooling-glean-gradle:{android-components-version}"
    }
}
Important

As above, the {android-components-version} placeholder in the snippet above should be replaced with the version number of Android Components used in your project.

The JetBrains Python plugin is distributed in the Gradle plugin repository, so it can be included with:

plugins {
    id "com.jetbrains.python.envs" version "0.0.26"
}

Right before the end of the same file, we need to apply the Glean Gradle plugin. Set any additional parameters to control the behavior of the Glean Gradle plugin before calling apply plugin.

// Optionally, set any parameters to send to the plugin.
ext.gleanGenerateMarkdownDocs = true
apply plugin: "org.mozilla.telemetry.glean-gradle-plugin"
Rosetta 2 required on Apple Silicon

On Apple Silicon machines (M1/M2/M3 MacBooks and iMacs) Rosetta 2 is required for the bundled Python. See the Apple documentation about Rosetta 2 and Bug 1775420 for details.
You can install it with softwareupdate --install-rosetta

Offline builds

The Glean Gradle plugin has limited support for offline builds of applications that use the Glean SDK.

Adding Glean to your Swift project

This page provides a step-by-step guide on how to integrate the Glean library into a Swift project.

Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

Currently, this SDK only supports the iOS platform.

Requirements

  • Python >= 3.8.

Setting up the dependency

The Glean Swift SDK can be consumed as a Swift Package. In your Xcode project add a new package dependency:

https://github.com/mozilla/glean-swift

Use the dependency rule "Up to Next Major Version". Xcode will automatically fetch the latest version for you.

The Glean library will be automatically available to your code when you import it:

import Glean

Setting up metrics and pings code generation

The metrics.yaml file is parsed at build time and Swift code is generated. Add a new metrics.yaml file to your Xcode project.

Follow these steps to automatically run the parser at build time:

  1. Download the sdk_generator.sh script from the Glean repository:
    https://raw.githubusercontent.com/mozilla/glean/{latest-release}/glean-core/ios/sdk_generator.sh
    
Pick the correct version

As above, the {latest-release} placeholder should be replaced with the version number of the Glean Swift SDK release used in this project.

  2. Add the sdk_generator.sh file to your Xcode project.

  3. On your application targets' Build Phases settings tab, click the + icon and choose New Run Script Phase. Create a Run Script in which you specify your shell (e.g. /bin/sh), then add the following contents to the script area below the shell:

    bash $PWD/sdk_generator.sh

    Set additional options to control the behavior of the script.

  4. Add the paths to your metrics.yaml and (optionally) pings.yaml and tags.yaml under "Input Files":

    $(SRCROOT)/{project-name}/metrics.yaml
    $(SRCROOT)/{project-name}/pings.yaml
    $(SRCROOT)/{project-name}/tags.yaml

  5. Add the path to the generated code file under "Output Files":

    $(SRCROOT)/{project-name}/Generated/Metrics.swift

  6. If you are using Git, add the following lines to your .gitignore file:

    .venv/
    {project-name}/Generated
    

    This will ignore files that are generated at build time by the sdk_generator.sh script. They don't need to be kept in version control, as they can be re-generated from your metrics.yaml and pings.yaml files.

Glean and embedded extensions

Metric collection is a no-op in application extensions and Glean will not run. Since extensions run in a separate sandbox and process from the application, Glean would run in an extension as if it were a completely separate application with different client ids and storage. This complicates things because Glean doesn’t know or care about other processes. Because of this, Glean is purposefully prevented from running in an application extension and if metrics need to be collected from extensions, it's up to the integrating application to pass the information to the base application to record in Glean.

Adding Glean to your Python project

This page provides a step-by-step guide on how to integrate the Glean library into a Python project.

Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

Setting up the dependency

We recommend using a virtual environment for your work to isolate the dependencies for your project. There are many popular abstractions on top of virtual environments in the Python ecosystem which can help manage your project dependencies.

The Glean Python SDK currently has prebuilt wheels on PyPI for Windows (i686 and x86_64), Linux/glibc (x86_64) and macOS (x86_64). For other platforms, including BSD or Linux distributions that don't use glibc, such as Alpine Linux, the glean_sdk package will be built from source on your machine. This requires that Cargo and Rust are already installed. The easiest way to do this is through rustup.

Once you have your virtual environment set up and activated, you can install the Glean Python SDK into it using:

$ python -m pip install glean_sdk
Important

Installing Python wheels is still a rapidly evolving feature of the Python package ecosystem. If the above command fails, try upgrading pip:

python -m pip install --upgrade pip
Important

The Glean Python SDK makes extensive use of type annotations to catch type-related errors at build time. We highly recommend adding mypy to your continuous integration workflow to catch errors related to type mismatches early.

Consuming YAML registry files

For Python, the metrics.yaml file must be available and loaded at runtime.

If your project is a script (i.e. just Python files in a directory), you can load the metrics.yaml using:

from glean import load_metrics

metrics = load_metrics("metrics.yaml")

# Use a metric on the returned object
metrics.your_category.your_metric.set("value")

If your project is a distributable Python package, you need to include the metrics.yaml file using one of the myriad ways to include data in a Python package and then use pkg_resources.resource_filename() to get the filename at runtime.

from glean import load_metrics
from pkg_resources import resource_filename

metrics = load_metrics(resource_filename(__name__, "metrics.yaml"))

# Use a metric on the returned object
metrics.your_category.your_metric.set("value")
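For example, with a setuptools-based project, one way to ship metrics.yaml inside the package is package data declared in setup.cfg (the package name mypackage is a placeholder; other packaging tools have equivalent mechanisms):

```ini
[options.package_data]
mypackage = metrics.yaml
```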

Automation steps

Documentation

The documentation for your application or library's metrics and pings are written in metrics.yaml and pings.yaml.

For Mozilla projects, this SDK documentation is automatically published on the Glean Dictionary. For non-Mozilla products, it is recommended to generate markdown-based documentation of your metrics and pings into the repository. For most languages and platforms, this transformation can be done automatically as part of the build. However, for some SDKs the integration to automatically generate docs is an additional step.

The Glean Python SDK provides a commandline tool for automatically generating markdown documentation from your metrics.yaml and pings.yaml files. To perform that translation, run glean_parser's translate command:

python3 -m glean_parser translate -f markdown -o docs metrics.yaml pings.yaml

To get more help about the commandline options:

python3 -m glean_parser translate --help

We recommend integrating this step into your project's documentation build. The details of that integration are left to you, since they depend on the documentation tool being used and how your project is set up.

Metrics linting

Glean includes a "linter" for metrics.yaml and pings.yaml files called the glinter that catches a number of common mistakes in these files.

As part of your continuous integration, you should run the following on your metrics.yaml and pings.yaml files:

python3 -m glean_parser glinter metrics.yaml pings.yaml

Parallelism

Most Glean SDKs use a separate worker thread to do most of their work, including any I/O. This thread is fully managed by the SDK as an implementation detail. Therefore, users should feel free to use the Glean SDKs wherever they are most convenient, without worrying about the performance impact of updating metrics and sending pings.

Because the Glean SDKs perform disk and network I/O, they try to do as much of their work as possible on separate threads and processes. However, there are complex trade-offs and corner cases in supporting Python parallelism, so it is hard to design a one-size-fits-all approach.

Default behavior

When using the Python SDK, most of Glean's work is done on a separate thread, managed by the SDK itself. The SDK releases the Global Interpreter Lock (GIL) for most of its operations, so your application's threads should not be in contention with Glean's worker thread.

The Glean Python SDK installs an atexit handler so that its worker thread can cleanly finish when your application exits. This handler will wait up to 30 seconds for any pending work to complete.

By default, ping uploading is performed in a separate child process. This process will continue to upload any pending pings even after the main process shuts down. This is important for commandline tools where you want to return control to the shell as soon as possible and not be delayed by network connectivity.

Cases where subprocesses aren't possible

The default approach may not work with applications built using PyInstaller or similar tools which bundle an application together with a Python interpreter making it impossible to spawn new subprocesses of that interpreter. For these cases, there is an option to ensure that ping uploading occurs in the main process. To do this, set the allow_multiprocessing parameter on the glean.Configuration object to False.
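As a configuration sketch, the parameter named above is passed when initializing the SDK. The application id and version here are placeholders, and the exact set of Glean.initialize arguments varies between SDK versions:

```python
from glean import Configuration, Glean

Glean.initialize(
    application_id="my-app-id",      # placeholder
    application_version="1.0.0",     # placeholder
    upload_enabled=True,
    # Keep ping uploading in the main process (no subprocess).
    configuration=Configuration(allow_multiprocessing=False),
)
```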

Using the multiprocessing module

Additionally, the default approach does not work if your application uses the multiprocessing module for parallelism. The Glean Python SDK cannot wait to finish its work in a multiprocessing subprocess, since atexit handlers are not supported in that context.
Therefore, if the Glean Python SDK detects that it is running in a multiprocessing subprocess, all of its work that would normally run on a worker thread will run on the main thread instead. In practice, this should not be a performance issue: since the work is already in a subprocess, it will not block the main process of your application.

Adding Glean to your Rust project

This page provides a step-by-step guide on how to integrate the Glean library into a Rust project.

Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

Setting up the dependency

The Glean Rust SDK is published on crates.io. Add it to your dependencies in Cargo.toml:

[dependencies]
glean = "50.0.0"

Setting up metrics and pings code generation

glean-build is in beta.

The glean-build crate is new and currently in beta. It can be used as a git dependency. Please file a bug if it does not work for you.

At build time you need to generate the metrics and ping API from your definition files. Add the glean-build crate as a build dependency in your Cargo.toml:

[build-dependencies]
glean-build = { git = "https://github.com/mozilla/glean" }

Then add a build.rs file next to your Cargo.toml and call the builder:

use glean_build::Builder;

fn main() {
    Builder::default()
        .file("metrics.yaml")
        .file("pings.yaml")
        .generate()
        .expect("Error generating Glean Rust bindings");
}

Ensure your metrics.yaml and pings.yaml files are placed next to your Cargo.toml or adjust the path in the code above. You can also leave out any of the files.

Include the generated code

glean-build will generate a glean_metrics.rs file that needs to be included in your source code. To do so add the following lines of code in your src/lib.rs file:

mod metrics {
    include!(concat!(env!("OUT_DIR"), "/glean_metrics.rs"));
}

Alternatively create src/metrics.rs (or a different name) with only the include line:

include!(concat!(env!("OUT_DIR"), "/glean_metrics.rs"));

Then add mod metrics; to your src/lib.rs file.

Use the metrics

In your code you can then access generated metrics nested within their category under the metrics module (or your chosen name):

metrics::your_category::metric_name.set(true);

See the metric API reference for details.

Adding Glean to your JavaScript project

This page provides a step-by-step guide on how to integrate the Glean JavaScript SDK into a JavaScript project.

Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

The Glean JavaScript SDK allows integration with three distinct JavaScript environments: websites, web extensions and Node.js.

Requirements

  • Node.js >= 12.20.0
  • npm >= 7.0.0
  • Webpack >= 5.34.0
  • Python >= 3.8
    • The glean command requires Python to download glean_parser which is a Python library

Browser extension specific requirements

  • webextension-polyfill >= 0.8.0
    • Glean.js assumes a Promise-based browser API: Firefox provides such an API by default. Other browsers may require a polyfill library such as webextension-polyfill when using Glean in browser extensions
  • Host permissions to the telemetry server
    • Only necessary if the defined server endpoint denies cross-origin requests
    • Not necessary if using the default https://incoming.telemetry.mozilla.org.
  • "storage" API permissions

Browser extension example configuration

The manifest.json file of the sample browser extension available on the mozilla/glean.js repository provides an example on how to define the above permissions as well as how and where to load the webextension-polyfill script.
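As a sketch, the relevant fragment of a Manifest V2 manifest.json might look like the following. The host entry is a placeholder and is only needed if your endpoint denies cross-origin requests; refer to the sample extension for a complete, authoritative manifest:

```json
{
  "permissions": [
    "storage",
    "https://my-telemetry-server.example.com/*"
  ]
}
```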

Setting up the dependency

The Glean JavaScript SDK is distributed as an npm package @mozilla/glean.

This package has different entry points to access the different SDKs.

  • @mozilla/glean/web gives access to the websites SDK
  • @mozilla/glean/webext gives access to the web extension SDK
  • @mozilla/glean/node gives access to the Node.js SDK¹

¹ The Node.js SDK does not have persistent storage yet. This means Glean does not persist state throughout application runs. For updates on the implementation of this feature in Node.js, follow Bug 1728807.

Install Glean in your JavaScript project, by running:

npm install @mozilla/glean

Then import Glean into your project:

// Importing the Glean JavaScript SDK for use in **web extensions**
//
// esm
import Glean from "@mozilla/glean/webext";
// cjs
const { default: Glean } = require("@mozilla/glean/webext");

// Importing the Glean JavaScript SDK for use in **websites**
//
// esm
import Glean from "@mozilla/glean/web";
// cjs
const { default: Glean } = require("@mozilla/glean/web");

// Importing the Glean JavaScript SDK for use in **Node.js**
//
// esm
import Glean from "@mozilla/glean/node";
// cjs
const { default: Glean } = require("@mozilla/glean/node");

Browser extension security considerations

In the case of a privilege-escalation attack into the context of a web extension using Glean, malicious scripts would be able to call Glean APIs or use the browser.storage.local APIs directly. That would be a risk to Glean data, but not one caused by Glean. Glean-using extensions should be careful not to relax the default Content-Security-Policy, which generally prevents these attacks.

Common import errors

"Cannot find module '@mozilla/glean'"

Glean.js does not have a main package entry point. Instead it relies on a series of entry points depending on the platform you are targeting.

In order to import Glean use:

import Glean from '@mozilla/glean/{your-platform}'

"Module not found: Error: Can't resolve '@mozilla/glean/webext' in '...'"

Glean.js relies on Node.js' subpath exports feature to define multiple package entry points.

Please make sure that you are using a supported Node.js runtime and also make sure the tools you are using support this Node.js feature.

Setting up metrics and pings code generation

In JavaScript, the metrics and pings definitions must be parsed at build time. The @mozilla/glean package exposes glean_parser through the glean script.

To parse your YAML registry files using this script, define a new script in your package.json file:

{
  // ...
  "scripts": {
    // ...
    "build:glean": "glean translate path/to/metrics.yaml path/to/pings.yaml -f javascript -o path/to/generated",
    // Or, if you are building for a TypeScript project
    "build:glean": "glean translate path/to/metrics.yaml path/to/pings.yaml -f typescript -o path/to/generated"
  }
}

Then run this script by calling:

npm run build:glean

Automation steps

Documentation

Prefer using the Glean Dictionary

While it is still possible to generate Markdown documentation, if working on a public Mozilla project rely on the Glean Dictionary for documentation. Your product will be automatically indexed by the Glean Dictionary after it gets enabled in the pipeline.

One of the commands provided by glean_parser allows users to generate Markdown documentation based on the contents of their YAML registry files. To perform that translation, use the translate command with a different output format, as shown below.

In your package.json, define the following script:

{
  // ...
  "scripts": {
    // ...
    "docs:glean": "glean translate path/to/metrics.yaml path/to/pings.yaml -f markdown -o path/to/docs",
  }
}

Then run this script by calling:

npm run docs:glean

YAML registry files linting

Glean includes a "linter" for the YAML registry files called the glinter that catches a number of common mistakes in these files. To run the linter use the glinter command.

In your package.json, define the following script:

{
  // ...
  "scripts": {
    // ...
    "lint:glean": "glean glinter path/to/metrics.yaml path/to/pings.yaml",
  }
}

Then run this script by calling:

npm run lint:glean

Adding Glean to your Qt/QML project

This page provides a step-by-step guide on how to integrate the Glean.js library into a Qt/QML project.

Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

Requirements

  • Python >= 3.7
  • Qt >= 5.15.2

Setting up the dependency

Glean.js' Qt/QML build is distributed as an asset with every Glean.js release. In order to download the latest version visit https://github.com/mozilla/glean.js/releases/latest.

Glean.js is a QML module, so extract the contents of the downloaded file wherever you keep your other modules. Make sure the directory the module is placed in is part of the QML import path.
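One way to extend the QML import path without changing application code is the QML2_IMPORT_PATH environment variable (the directory below is a placeholder; calling QQmlEngine::addImportPath in application code is an in-code alternative):

```shell
# Make the extracted Glean.js module discoverable by the QML engine.
export QML2_IMPORT_PATH="/path/to/your/qml-modules:$QML2_IMPORT_PATH"
```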

After doing that, import Glean like so:

import org.mozilla.Glean <version>

Picking the correct version

The <version> number is the version of the release you downloaded minus its patch version. For example, if you downloaded Glean.js version 0.15.0 your import statement will be:

import org.mozilla.Glean 0.15

Consuming YAML registry files

Qt/QML projects need to set up metrics and pings code generation manually.

First, install the glean_parser CLI tool:

pip install glean_parser

Make sure you have the correct glean_parser version!

Qt/QML support was added to glean_parser in version 3.5.0.

Then call glean_parser from the command line:

glean_parser translate path/to/metrics.yaml path/to/pings.yaml \
  -f javascript \
  -o path/to/generated/files \
  --option platform=qt \
  --option version=0.15

The translate command takes a list of YAML registry file paths and an output path, and parses the given YAML registry files into QML JavaScript files.

The generated folder will be a QML module. Make sure wherever the generated module is placed is also part of the QML Import Path.

Notice that when building for Qt/QML it is mandatory to give the translate command two extra options.

--option platform=qt

This option is what changes the output file from standard JavaScript to QML JavaScript.

--option version=<version>

The version passed to this option will be the version of the generated QML module.

Automation steps

Documentation

Prefer using the Glean Dictionary

While it is still possible to generate Markdown documentation, if working on a public Mozilla project rely on the Glean Dictionary for documentation. Your product will be automatically indexed by the Glean Dictionary after it gets enabled in the pipeline.

One of the commands provided by glean_parser allows users to generate Markdown documentation based on the contents of their YAML registry files. To perform that translation, use the translate command with a different output format, as shown below.

glean_parser translate path/to/metrics.yaml path/to/pings.yaml \
  -f markdown \
  -o path/to/docs

YAML registry files linting

glean_parser includes a "linter" for the YAML registry files called the glinter that catches a number of common mistakes in these files. To run the linter use the glinter command.

glean_parser glinter path/to/metrics.yaml path/to/pings.yaml

Debugging

By default, the Glean.js QML module uses a minified version of the Glean.js library. It may be useful to use the unminified version of the library in order to get proper line numbers and function names when debugging crashes.

The bundle provided contains the unminified version of the library. In order to use it, open the glean.js file inside the included module and change the line:

.import "glean.lib.js" as Glean

to

.import "glean.dev.js" as Glean

Troubleshooting

submitPing may cause crashes when debugging iOS devices

The submitPing function hits a known bug in the Qt JavaScript interpreter.

This bug is only reproduced in iOS devices, it does not happen in emulators. It also only happens when using the Qt debug library for iOS.

There is no way around this bug other than avoiding the Qt debug library for iOS altogether until it is fixed. Refer to the Qt debugging documentation on how to do that.

Adding Glean to your Server Application

Glean enables the collection of behavioral metrics through events in server environments. This method does not rely on the Glean SDK but utilizes the Glean parser to generate native code for logging events in a standard format compatible with the ingestion pipeline.

Differences from using the Glean SDK

This implementation of telemetry collection in server environments has some differences compared to using Glean SDK in client applications and Glean.js in the frontend of web applications. Primarily, in server environments the focus is exclusively on event-based metrics, diverging from the broader range of metric types supported by Glean. Additionally, there is no need to incorporate Glean SDK as a dependency in server applications. Instead, the Glean parser is used to generate native code for logging events.

When to use server-side collection

This method is intended for collecting user-level behavioral events in server environments. It is not suitable for collecting system-level metrics or performance data, which should be collected using cloud monitoring tools.

How to add Glean server side collection to your service

  1. Integrate glean_parser into your build system. Follow the instructions for other SDK-enabled platforms, e.g. JavaScript. Use a server outputter to generate logging code. glean_parser currently supports Go, JavaScript/TypeScript, Python, and Ruby.
  2. Define your metrics in metrics.yaml
  3. Request a data review for the collected data
  4. Add your product to probe-scraper

How to add a new event to your server side collection

Follow the standard Glean SDK guide for adding metrics to metrics.yaml file.
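For illustration, a server-side event might be declared like this in metrics.yaml (the category, name, and description here are hypothetical; remaining required parameters are elided):

$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

backend:
  request_handled:
    type: event
    description: |
      Recorded each time the backend handles an API request.
    ...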

Technical details - ingestion

For more technical details on how ingestion works, see the Confluence page.

Enabling data to be ingested by the data platform

This page provides a step-by-step guide on how to enable data from your product to be ingested by the data platform.

This is just one of the required steps for integrating Glean successfully into a product. Check the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.

Requirements

  • GitHub Workflows

Add your product to probe scraper

At least one week before releasing your product, file a data engineering bug to enable your product's application id. This will result in your product being added to probe scraper's repositories.yaml.

Validate and publish metrics

After your product has been enabled, you must submit commits to probe scraper to validate and publish metrics. Metrics will only be published from branches defined in probe scraper's repositories.yaml, or from the Git default branch if none is explicitly configured. This should happen on every CI run on the specified branches. Nightly jobs will then automatically add published metrics to the Glean Dictionary and other data platform tools.

Enable the GitHub Workflow by creating a new file .github/workflows/glean-probe-scraper.yml with the following content:

---
name: Glean probe-scraper
on: [push, pull_request]
jobs:
  glean-probe-scraper:
    uses: mozilla/probe-scraper/.github/workflows/glean.yaml@main

Add your library to probe scraper

At least one week before releasing your product, file a data engineering bug to add your library to probe scraper and be scraped for metrics as a dependency of another product. This will result in your library being added to probe scraper's repositories.yaml.

Integrating Glean for product managers

This chapter provides guidance for planning the work involved in integrating Glean into your product, for internal Mozilla customers. For a technical coding perspective, see adding Glean to your project.

Glean is the standard telemetry platform required for all new Mozilla products. While there are some upfront costs to integrating Glean in your product, this pays off in easier long-term maintenance and a rich set of self-serve analysis tools. The Glean team is happy to support your telemetry integration and make it successful. Find us in #glean or email glean-team@mozilla.com.

Building a telemetry plan

The Glean SDKs provide support for answering basic product questions out-of-the-box, such as daily active users, product version and platform information.
However, it is also a good idea to have a sense of any additional product-specific questions you are trying to answer with telemetry, developed when possible in collaboration with a data scientist.
This helps your own planning, and it is also invaluable for the Glean team: understanding the ultimate goals of your product's telemetry lets us ensure the design meets those goals and identify any new features that may be required. It is best to frame this document in the form of questions and use cases rather than as specific data points and schemas.

Integrating Glean into your product

The technical steps for integrating Glean in your product are documented in its own chapter for supported platforms. We recommend having a member of the Glean team review this integration to catch any potential pitfalls.

(Optional) Adapting Glean to your platform

The Glean SDKs are a collection of cross-platform libraries and tools that facilitate the collection of Glean-conforming telemetry from applications.
Consult the list of currently supported platforms and languages. If your product's tech stack isn't currently supported, please reach out to the Glean team: significant work will be required to create a new integration.
In previous efforts this has ranged from 1 to 3 months of FTE work, so it is important to plan for it well in advance. While the first phase of this work generally requires the specialized expertise of the Glean team, the second half can benefit from outside developers to move faster.

(Optional) Designing ping submission

The Glean SDKs periodically send telemetry to our servers in a bundle known as a "ping".
For mobile applications with common interaction models, such as web browsers, the Glean SDKs provide basic pings out-of-the-box. For other kinds of products, it may be necessary to carefully design what triggers the submission of a ping. It is important to have a solid telemetry plan (see above) so we can make sure the ping submission will be able to answer the telemetry questions required of the product.

(Optional) New metric types

The Glean SDKs have a number of different metric types that they can collect.
Metric types provide "guardrails" to make sure that telemetry is being collected correctly, and to present the data at analysis time more automatically. Occasionally, products need to collect data that doesn't fit neatly into one of the available metric types. Glean has a process to request and introduce new metric types, and we will work with you to design something appropriate. This design and implementation work takes at least 4 weeks, though we are working on the foundation to accelerate that. Having a telemetry plan (see above) will help to identify this work early.

Integrating Glean into GLAM

To use GLAM for analysis of your application's data, file a ticket in the GLAM repository.

A data engineer from the GLAM team will reach out to you if further information is required.

Adding new metrics

Process overview

When adding a new metric, the process is:

  • Consider the question you are trying to answer with this data, and choose the metric type and parameters to use.
  • Add a new entry to metrics.yaml.
  • Add code to your project to record into the metric by calling the Glean SDK.

Important: Any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.

Choosing a metric type

The following is a set of questions to ask about the data being collected to help better determine which metric type to use.

Is it a single measurement?

If the value is true or false, use a boolean metric.

If the value is a string, use a string metric. For example, to record the name of the default search engine.

Beware: string metrics are exceedingly general, and you are probably best served by selecting the most specific metric for the job, since you'll get better error checking and richer analysis tools for free. For example, avoid storing a number in a string metric; you probably want a counter metric instead.

If you need to store multiple string values in a metric, use a string list metric. For example, you may want to record the list of other Mozilla products installed on the device.

For all of the metric types in this section that measure single values, it is especially important to consider how the lifetime of the value relates to the ping it is being sent in. Since these metrics don't perform any aggregation on the client side, when a ping containing the metric is submitted, it will contain only the "last known" value for the metric, potentially resulting in data loss. There is further discussion of metric lifetimes below.
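As a hypothetical illustration (category and metric names invented, most required parameters elided), the single-value types above might be declared like this:

search:
  default_engine:         # a single string value
    type: string
    ...
  suggestions_enabled:    # a single true/false value
    type: boolean
    ...
  other_products:         # multiple related string values
    type: string_list
    ...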

Are you measuring user behavior?

For tracking user behavior, it is usually meaningful to know the order of events that lead to the use of a feature. Therefore, for user behavior, an event metric is usually the best choice.

Be aware, however, that events can be particularly expensive to transmit, store and analyze, so should not be used for higher-frequency measurements.

Are you counting things?

If you want to know how many times something happened, use a counter metric. If you are counting a group of related things, or you don't know what all of the things to count are at build time, use a labeled counter metric.

If you need to know how many times something happened relative to the number of times something else happened, use a rate metric.

If you need to know when the things being counted happened relative to other things, consider using an event.
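To make the distinction concrete, here is a toy model of a counter versus a labeled counter. This is illustrative Python, not the real Glean API:

```python
from collections import defaultdict

class Counter:
    """Toy stand-in for a Glean counter metric."""
    def __init__(self):
        self.value = 0

    def add(self, amount=1):
        self.value += amount

class LabeledCounter:
    """Toy stand-in for a labeled counter: one counter per runtime label."""
    def __init__(self):
        self._counters = defaultdict(Counter)

    def __getitem__(self, label):
        return self._counters[label]

# Counting one kind of thing: total pages loaded.
pages_loaded = Counter()
pages_loaded.add()
pages_loaded.add(2)

# Counting a group of related things whose names aren't known at build time.
blocked_requests = LabeledCounter()
blocked_requests["tracker"].add()
blocked_requests["cryptominer"].add()
blocked_requests["tracker"].add()
```

The labeled form keeps one independent count per label, which is why it suits categories discovered only at runtime.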

Are you measuring time?

If you need to record an absolute time, use a datetime metric. Datetimes are recorded in the user's local time, according to their device's real time clock, along with a timezone offset from UTC. Datetime metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question.

If you need to record how long something takes, you have a few options.

If you need to measure the total time spent doing a particular task, look to the timespan metric. Timespan metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question. Note that this metric should only be used to measure time on a single thread. If multiple overlapping timespans are measured for the same metric, an invalid state error is recorded.

If you need to measure the relative occurrences of many timings, use a timing distribution. It builds a histogram of timing measurements, and is safe to record multiple concurrent timespans on different threads.

If you need to know the time between multiple distinct actions that aren't a simple "begin" and "end" pair, consider using an event.
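The difference in concurrency behavior can be sketched with a toy timing distribution: each measurement is self-contained, so threads never share partially-recorded state. This is illustrative Python, not the real API, and real Glean uses exponential bucketing rather than the crude order-of-magnitude buckets below:

```python
import math
import threading
import time
from collections import Counter

class TimingDistribution:
    """Toy timing distribution: buckets elapsed times into a histogram."""
    def __init__(self):
        self._lock = threading.Lock()
        self.histogram = Counter()

    def measure(self, func, *args):
        # Each call times itself independently, so concurrent
        # measurements never conflict (unlike a single timespan).
        start = time.monotonic()
        result = func(*args)
        elapsed = time.monotonic() - start
        # Bucket by order of magnitude of microseconds (a simplification).
        bucket = int(math.log10(max(elapsed * 1e6, 1)))
        with self._lock:
            self.histogram[bucket] += 1
        return result

dist = TimingDistribution()
threads = [threading.Thread(target=dist.measure, args=(time.sleep, 0.01))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```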

For how long do you need to collect this data?

Think carefully about how long the metric will be needed, and set the expires parameter to disable the metric at the earliest possible time. This is an important component of Mozilla's lean data practices.

When the metric passes its expiration date (determined at build time), it will automatically stop collecting data.

When a metric's expiration is within 14 days, emails will be sent from telemetry-alerts@mozilla.com to the notification_emails addresses associated with the metric. At that point the metric should be removed, which involves removing it from the metrics.yaml file and removing uses of it in the source code. Removing a metric does not affect the availability of data already collected by the pipeline.

If the metric is still needed after its expiration date, it should go back for another round of data review to have its expiration date extended.

Important: Ensure that telemetry alerts are received and reviewed in a timely manner. Expired metrics don't record any data, so extending or removing a metric should be done in time. Consider adding both a group email address and an individual who is responsible for this metric to the notification_emails list.

When should the Glean SDK automatically clear the measurement?

The lifetime parameter of a metric defines when its value will be cleared. There are three lifetime options available:

ping (default)

The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.

application

The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping due at startup is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake is using the ping lifetime for these types of metrics, which means they will only be included in the first ping sent during a particular run of the application.

user

NOTE: Reach out to the Glean team before using this.

The metric is part of the user's profile and will live as long as the profile lives. This is often not the best choice unless the metric records a value that really needs to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id or the date the product was first run. It is rare to use this lifetime outside of some metrics that are built in to the Glean SDK.

While lifetimes are important to understand for all metric types, they are particularly important for the metric types that record single values and don't aggregate on the client (boolean, string, labeled_string, string_list, datetime and uuid), since these metrics will send the "last known" value and missing the earlier values could be a form of unintended data loss.
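The clearing rules above can be sketched with a toy model. This is illustrative Python, not the real SDK, and the metric names are invented:

```python
class MetricStore:
    """Toy model of Glean's lifetime-based clearing."""
    def __init__(self):
        self._data = {"ping": {}, "application": {}, "user": {}}

    def set(self, lifetime, name, value):
        self._data[lifetime][name] = value

    def submit_ping(self):
        # A ping snapshots every lifetime, then clears ping-lifetime data.
        snapshot = {k: dict(v) for k, v in self._data.items()}
        self._data["ping"].clear()
        return snapshot

    def restart_application(self):
        # On restart (after any startup pings), application-lifetime
        # data is cleared; user-lifetime data survives.
        self._data["application"].clear()

store = MetricStore()
store.set("ping", "turbo_mode", True)
store.set("application", "os_version", "14.2")
store.set("user", "client_id", "c0ffee")

ping1 = store.submit_ping()   # contains all three values
ping2 = store.submit_ping()   # "turbo_mode" was cleared after ping1
store.restart_application()
ping3 = store.submit_ping()   # "os_version" gone too; "client_id" remains
```

Unless a value is set again after each submission or restart, only the user lifetime survives indefinitely, which is exactly why the single-value types need careful lifetime choices.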

A lifetime example

Let's work through an example to see how these lifetimes play out in practice. Suppose we have a user preference, "turbo mode", which defaults to false, but the user can turn it to true at any time. We want to know when this flag is true so we can measure its effect on other metrics in the same ping. In the following diagram, we look at a time period that sends 5 pings across two separate runs of the application. We assume here that, like the Glean SDK's built-in metrics ping, the developer writing the metric isn't in control of when the ping is submitted.

In this diagram, the ping measurement windows are represented as rectangles, but the moment the ping is "submitted" is represented by its right edge. The user changes the "turbo mode" setting from false to true in the first run, and then toggles it again twice in the second run.

Metric lifetime timeline

  • A. Ping lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. It is included in the first ping after being recorded (Ping 2), which causes it to be cleared.

  • B. Ping lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Ping 2, which causes it to be cleared. It therefore misses Ping 3, but when the application is started, it is recorded again and it is included in Ping 4. However, this causes it to be cleared again and it is not in Ping 5.

  • C. Application lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. After the value is changed, it is included in Pings 2 and 3, but then due to application restart it is cleared, so it is not included until the value is manually toggled again.

  • D. Application, set on init and change: The default value is included in Ping 1, and the changed value is included in Pings 2 and 3. Even though the application startup causes it to be cleared, it is set again, and all subsequent pings also have the value.

  • E. User, set on change: The default value is missing from Ping 1, but since user lifetime metrics aren't cleared unless the user profile is reset (e.g. on Android, when the product is uninstalled), it is included in all subsequent pings.

  • F. User, set on init and change: Since user lifetime metrics aren't cleared unless the user profile is reset, it is included in all subsequent pings. This would be true even if the "turbo mode" preference were never changed again.

Note that for all of the metric configurations, the toggle of the preference off and on during Ping 4 is completely missed. If you need to create a ping containing one, and only one, value for this metric, consider using a custom ping to create a ping whose lifetime matches the lifetime of the value.

What if none of these lifetimes are appropriate?

If the timing at which the metric is sent in the ping needs to closely match the timing of the metric's value, the best option is to use a custom ping to manually control when pings are sent.

This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times when a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, that kind of reasoning becomes much harder to do with any certainty.

What should this new metric be called?

Metric names have a maximum length of 30 characters.

Reuse names from other applications

There's a lot of value in using the same name for analogous metrics collected across different products. For example, BigQuery makes it simple to join columns with the same name across multiple tables. Therefore, we encourage you to investigate whether a similar metric is already being collected by another product. If it is, there may be an opportunity for code reuse across these products, and if all the projects are using the Glean SDK, it's easy for libraries to send their own metrics. If sharing the code doesn't make sense, at a minimum we recommend using the same metric name for similar actions and concepts whenever possible.

Make names unique within an application

Metric identifiers (the combination of a metric's category and name) must be unique across all metrics that are sent by a single application. This includes not only the metrics defined in the app's metrics.yaml, but the metrics.yaml of any Glean SDK-using library that the application uses, including the Glean SDK itself. Therefore, care should be taken to name things specifically enough so as to avoid namespace collisions. In practice, this generally involves thinking carefully about the category of the metric, more than the name.

Note: Duplicate metric identifiers are not currently detected at build time. See bug 1578383 for progress on that. However, the probe_scraper process, which runs nightly, will detect duplicate metrics and e-mail the notification_emails associated with the given metrics.
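Until build-time detection lands, a simple script over all the metrics.yaml files your application pulls in can catch collisions early. This is an illustrative sketch (YAML parsing elided; file and metric names hypothetical):

```python
def find_duplicate_identifiers(yaml_registries):
    """Given parsed metrics.yaml contents, keyed by source file, return
    identifiers (category.name) defined in more than one registry."""
    seen = {}
    duplicates = []
    for source, registry in yaml_registries.items():
        for category, metrics in registry.items():
            if category.startswith("$"):  # skip $schema and similar keys
                continue
            for name in metrics:
                identifier = f"{category}.{name}"
                if identifier in seen:
                    duplicates.append((identifier, seen[identifier], source))
                else:
                    seen[identifier] = source
    return duplicates

# The app's own registry plus one from a library it depends on.
dupes = find_duplicate_identifiers({
    "app/metrics.yaml": {"toolbar": {"click": {}, "double_click": {}}},
    "somelib/metrics.yaml": {"toolbar": {"click": {}}},
})
```

Here `toolbar.click` is flagged because both registries define it, which is the kind of collision probe_scraper would otherwise only report nightly.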

Be as specific as possible

More broadly, you should choose the names of metrics to be as specific as possible. It is not necessary to put the type of the metric in the category or name, since this information is retained in other ways through the entire end-to-end system.

For example, if defining a set of events related to search, put them in a category called search, rather than just events or search_events. The events word here would be redundant.

What if none of these metric types is the right fit?

The current set of metrics the Glean SDKs support is based on known common use cases, but new use cases are discovered all the time.

Please reach out to us on #glean:mozilla.org. If you think you need a new metric type, we have a process for that.

How do I make sure my metric is working?

The Glean SDK has rich support for writing unit tests involving metrics. Writing a good unit test is a large topic, but in general, you should write unit tests for all new telemetry that does the following:

  • Performs the operation being measured.

  • Asserts that metrics contain the expected data, using the testGetValue API on the metric.

  • Where applicable, asserts that no errors are recorded, such as when values are out of range, using the testGetNumRecordedErrors API.

In addition to unit tests, it is good practice to validate the incoming data for the new metric on a pre-release channel to make sure things are working as expected.

Adding the metric to the metrics.yaml file

The metrics.yaml file defines the metrics your application or library will send. They are organized into categories. The overall organization is:

# Required to indicate this is a `metrics.yaml` file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

toolbar:
  click:
    type: event
    description: |
      Event to record toolbar clicks.
    metadata:
      tags:
        - Interaction
    notification_emails:
      - CHANGE-ME@example.com
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    expires: 2019-06-01  # <-- Update to a date in the future

  double_click:
    ...

Refer to the metrics YAML registry format for a full reference on the metrics.yaml file structure.

Using the metric from your code

The reference documentation for each metric type goes into detail about using each metric type from your code.

Note that all Glean metrics are write-only. Outside of unit tests, it is impossible to retrieve a value from the Glean SDK's database. While this may seem limiting, this is required to:

  • enforce the semantics of certain metric types (e.g. that Counters can only be incremented).
  • ensure the lifetime of the metric (when it is cleared or reset) is correctly handled.

Capitalization

One thing to note is that we try to adhere to the coding conventions of each language wherever possible, so the metric name and category in the metrics.yaml (which is in snake_case) may be changed to some other case convention, such as camelCase, when used from code.

Event extras and labels are never capitalized, no matter the target language.
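The case transformation glean_parser applies is essentially the following (a simplification; the real implementation also handles dotted category names and per-language rules):

```python
def camelize(snake: str) -> str:
    """Convert a snake_case identifier to camelCase."""
    head, *rest = snake.split("_")
    return head + "".join(word.capitalize() for word in rest)
```

For example, `camelize("login_opened")` yields `loginOpened`, while a single word like `click` passes through unchanged.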

Category and metric names in the metrics.yaml are in snake_case, but given the Kotlin coding standards defined by ktlint, these identifiers must be camelCase in Kotlin. For example, the metric defined in the metrics.yaml as:

views:
  login_opened:
    ...

is accessible in Kotlin as:

import org.mozilla.yourApplication.GleanMetrics.Views
GleanMetrics.Views.loginOpened...

Category and metric names in the metrics.yaml are in snake_case, but given the Swift coding standards defined by swiftlint, these identifiers must be camelCase in Swift. For example, the metric defined in the metrics.yaml as:

views:
  login_opened:
    ...

is accessible in Swift as:

GleanMetrics.Views.loginOpened...

Category and metric names in the metrics.yaml are in snake_case, which matches the PEP8 standard, so no translation is needed for Python.

Given the Rust coding standards defined by clippy, identifiers should all be snake_case. This includes category names which in the metrics.yaml are dotted.snake_case:

compound.category:
  metric_name:
    ...

In Rust this becomes:

use firefox_on_glean::metrics;

metrics::compound_category::metric_name...

JavaScript identifiers are customarily camelCase. This requires transforming a metric defined in the metrics.yaml as:

compound.category:
  metric_name:
    ...

to a form useful in JS as:

import * as compoundCategory from "./path/to/generated/files/compoundCategory.js";

compoundCategory.metricName...

Firefox Desktop has Coding Style Guidelines for both C++ and JS. This results in, for a metric defined in the metrics.yaml as:

compound.category:
  metric_name:
    ...

an identifier that looks like:

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::compound_category::metric_name...

JavaScript

Glean.compoundCategory.metricName...

Adding new metrics

Table of Contents

Process overview

When adding a new metric, the process is:

  • Consider the question you are trying to answer with this data, and choose the metric type and parameters to use.
  • Add a new entry to metrics.yaml.
  • Add code to your project to record into the metric by calling the Glean SDK.

Important: Any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.

Choosing a metric type

The following is a set of questions to ask about the data being collected to help better determine which metric type to use.

Is it a single measurement?

If the value is true or false, use a boolean metric.

If the value is a string, use a string metric. For example, to record the name of the default search engine.

Beware: string metrics are exceedingly general, and you are probably best served by selecting the most specific metric for the job, since you'll get better error checking and richer analysis tools for free. For example, avoid storing a number in a string metric --- you probably want a counter metric instead.

If you need to store multiple string values in a metric, use a string list metric. For example, you may want to record the list of other Mozilla products installed on the device.

For all of the metric types in this section that measure single values, it is especially important to consider how the lifetime of the value relates to the ping it is being sent in. Since these metrics don't perform any aggregation on the client side, when a ping containing the metric is submitted, it will contain only the "last known" value for the metric, potentially resulting in data loss. There is further discussion of metric lifetimes below.

Are you measuring user behavior?

For tracking user behavior, it is usually meaningful to know the over of events that lead to the use of a feature. Therefore, for user behavior, an event metric is usually the best choice.

Be aware, however, that events can be particularly expensive to transmit, store and analyze, so should not be used for higher-frequency measurements.

Are you counting things?

If you want to know how many times something happened, use a counter metric. If you are counting a group of related things, or you don't know what all of the things to count are at build time, use a labeled counter metric.

If you need to know how many times something happened relative to the number of times something else happened, use a rate metric.

If you need to know when the things being counted happened relative to other things, consider using an event.

Are you measuring time?

If you need to record an absolute time, use a datetime metric. Datetimes are recorded in the user's local time, according to their device's real time clock, along with a timezone offset from UTC. Datetime metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question.

If you need to record how long something takes you have a few options.

If you need to measure the total time spent doing a particular task, look to the timespan metric. Timespan metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question. Note that this metric should only be used to measure time on a single thread. If multiple overlapping timespans are measured for the same metric, an invalid state error is recorded.

If you need to measure the relative occurrences of many timings, use a timing distribution. It builds a histogram of timing measurements, and is safe to record multiple concurrent timespans on different threads.

If you need to know the time between multiple distinct actions that aren't a simple "begin" and "end" pair, consider using an event.

For how long do you need to collect this data?

Think carefully about how long the metric will be needed, and set the expires parameter to disable the metric at the earliest possible time. This is an important component of Mozilla's lean data practices.

When the metric passes its expiration date (determined at build time), it will automatically stop collecting data.

When a metric's expiration is within in 14 days, emails will be sent from telemetry-alerts@mozilla.com to the notification_emails addresses associated with the metric. At that time, the metric should be removed, which involves removing it from the metrics.yaml file and removing uses of it in the source code. Removing a metric does not affect the availability of data already collected by the pipeline.

If the metric is still needed after its expiration date, it should go back for another round of data review to have its expiration date extended.

Important: Ensure that telemetry alerts are received and are reviewed in a timely manner. Expired metrics don't record any data, extending or removing a metric should be done in time. Consider adding both a group email address and an individual who is responsible for this metric to the notification_emails list.

When should the Glean SDK automatically clear the measurement?

The lifetime parameter of a metric defines when its value will be cleared. There are three lifetime options available:

ping (default)

The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.

application

The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake--- using the ping lifetime for these type of metrics---means that they will only be included in the first ping sent during a particular run of the application.

user

NOTE: Reach out to the Glean team before using this.

The metric is part of the user's profile and will live as long as the profile lives. This is often not the best choice unless the metric records a value that really needs to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id, the day the product was first executed. It is rare to use this lifetime outside of some metrics that are built in to the Glean SDK.

While lifetimes are important to understand for all metric types, they are particularly important for the metric types that record single values and don't aggregate on the client (boolean, string, labeled_string, string_list, datetime and uuid), since these metrics will send the "last known" value and missing the earlier values could be a form of unintended data loss.

A lifetime example

Let's work through an example to see how these lifetimes play out in practice. Let's suppose we have a user preference, "turbo mode", which defaults to false, but the user can turn it to true at any time. We want to know when this flag is true so we can measure its affect on other metrics in the same ping. In the following diagram, we look at a time period that sends 4 pings across two separate runs of the application. We assume here, that like the Glean SDK's built-in metrics ping, the developer writing the metric isn't in control of when the ping is submitted.

In this diagram, the ping measurement windows are represented as rectangles, but the moment the ping is "submitted" is represented by its right edge. The user changes the "turbo mode" setting from false to true in the first run, and then toggles it again twice in the second run.

Metric lifetime timeline

  • A. Ping lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. It is included in the first ping after being recorded (Ping 2), which causes it to be cleared.

  • B. Ping lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Ping 2, which causes it to be cleared. It therefore misses Ping 3, but when the application is started, it is recorded again and it is included in Ping 4. However, this causes it to be cleared again and it is not in Ping 5.

  • C. Application lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. After the value is changed, it is included in Pings 2 and 3, but then due to application restart it is cleared, so it is not included until the value is manually toggled again.

  • D. Application, set on init and change: The default value is included in Ping 1, and the changed value is included in Pings 2 and 3. Even though the application startup causes it to be cleared, it is set again, and all subsequent pings also have the value.

  • E. User, set on change: The default value is missing from Ping 1, but since user lifetime metrics aren't cleared unless the user profile is reset (e.g. on Android, when the product is uninstalled), it is included in all subsequent pings.

  • F. User, set on init and change: Since user lifetime metrics aren't cleared unless the user profile is reset, it is included in all subsequent pings. This would be true even if the "turbo mode" preference were never changed again.

Note that for all of the metric configurations, the toggle of the preference off and on during Ping 4 is completely missed. If you need to create a ping containing one, and only one, value for this metric, consider using a custom ping to create a ping whose lifetime matches the lifetime of the value.
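The clearing rules above can be summarized with a toy model (a sketch for illustration only; the class and method names are hypothetical, not the Glean SDK API): "ping" lifetime values are cleared when their ping is submitted, "application" lifetime values are cleared on restart, and "user" lifetime values persist.

```python
# Toy model of Glean metric lifetimes (illustration only, not the real SDK).
class Metric:
    def __init__(self, lifetime):
        self.lifetime = lifetime  # "ping", "application", or "user"
        self.value = None

    def set(self, value):
        self.value = value

    def on_ping_submitted(self):
        # "ping" lifetime values are cleared once they are sent in a ping.
        if self.lifetime == "ping":
            self.value = None

    def on_application_restart(self):
        # Neither "ping" nor "application" lifetime values survive a restart.
        if self.lifetime in ("ping", "application"):
            self.value = None


metrics = {lt: Metric(lt) for lt in ("ping", "application", "user")}
for m in metrics.values():
    m.set(True)  # the user turns "turbo mode" on

for m in metrics.values():
    m.on_ping_submitted()
# Only the "ping" lifetime value was cleared by ping submission.
print([metrics[lt].value for lt in ("ping", "application", "user")])
# → [None, True, True]

for m in metrics.values():
    m.on_application_restart()
# After a restart, only the "user" lifetime value remains.
print(metrics["user"].value)  # → True
```

This also shows why "set on init and change" matters for ping and application lifetimes: without re-recording the value after a clear, subsequent pings miss it.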

What if none of these lifetimes are appropriate?

If the timing at which the metric is sent in the ping needs to closely match the timing of the metrics value, the best option is to use a custom ping to manually control when pings are sent.

This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times when a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, it is much harder to do that kind of reasoning with much certainty.

What should this new metric be called?

Metric names have a maximum length of 30 characters.

Reuse names from other applications

There's a lot of value in using the same name for analogous metrics collected across different products. For example, BigQuery makes it simple to join columns with the same name across multiple tables. Therefore, we encourage you to investigate whether a similar metric is already being collected by another product. If it is, there may be an opportunity for code reuse across these products, and if all the projects are using the Glean SDK, it's easy for libraries to send their own metrics. If sharing the code doesn't make sense, at a minimum we recommend using the same metric name for similar actions and concepts whenever possible.

Make names unique within an application

Metric identifiers (the combination of a metric's category and name) must be unique across all metrics that are sent by a single application. This includes not only the metrics defined in the app's metrics.yaml, but the metrics.yaml of any Glean SDK-using library that the application uses, including the Glean SDK itself. Therefore, care should be taken to name things specifically enough so as to avoid namespace collisions. In practice, this generally involves thinking carefully about the category of the metric, more than the name.

Note: Duplicate metric identifiers are not currently detected at build time. See bug 1578383 for progress on that. However, the probe_scraper process, which runs nightly, will detect duplicate metrics and e-mail the notification_emails associated with the given metrics.
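A build-time duplicate check of the kind bug 1578383 asks for could, in principle, look like the following sketch. It operates on already-parsed metrics.yaml contents represented as plain dictionaries (the YAML parsing step is omitted):

```python
# Find metric identifiers (category.name) defined more than once across
# the metrics.yaml files of an application and its libraries.
def find_duplicate_identifiers(parsed_files):
    seen = set()
    duplicates = set()
    for parsed in parsed_files:
        for category, metrics in parsed.items():
            if category.startswith("$"):
                continue  # skip schema keys such as "$schema"
            for name in metrics:
                identifier = f"{category}.{name}"
                if identifier in seen:
                    duplicates.add(identifier)
                seen.add(identifier)
    return sorted(duplicates)


app_metrics = {
    "$schema": "moz://mozilla.org/schemas/glean/metrics/2-0-0",
    "toolbar": {"click": {}, "double_click": {}},
}
lib_metrics = {"toolbar": {"click": {}}}  # collides with the app's metric

print(find_duplicate_identifiers([app_metrics, lib_metrics]))
# → ['toolbar.click']
```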

Be as specific as possible

More broadly, you should choose the names of metrics to be as specific as possible. It is not necessary to put the type of the metric in the category or name, since this information is retained in other ways through the entire end-to-end system.

For example, if defining a set of events related to search, put them in a category called search, rather than just events or search_events. The events word here would be redundant.

What if none of these metric types is the right fit?

The current set of metrics the Glean SDKs support is based on known common use cases, but new use cases are discovered all the time.

Please reach out to us on #glean:mozilla.org. If you think you need a new metric type, we have a process for that.

How do I make sure my metric is working?

The Glean SDK has rich support for writing unit tests involving metrics. Writing a good unit test is a large topic, but in general, you should write unit tests for all new telemetry that does the following:

  • Performs the operation being measured.

  • Asserts that metrics contain the expected data, using the testGetValue API on the metric.

  • Where applicable, asserts that no errors are recorded, such as when values are out of range, using the testGetNumRecordedErrors API.

In addition to unit tests, it is good practice to validate the incoming data for the new metric on a pre-release channel to make sure things are working as expected.

Adding the metric to the metrics.yaml file

The metrics.yaml file defines the metrics your application or library will send. They are organized into categories. The overall organization is:

# Required to indicate this is a `metrics.yaml` file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

toolbar:
  click:
    type: event
    description: |
      Event to record toolbar clicks.
    metadata:
      tags:
        - Interaction
    notification_emails:
      - CHANGE-ME@example.com
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    expires: 2019-06-01  # <-- Update to a date in the future

  double_click:
    ...

Refer to the metrics YAML registry format for a full reference on the metrics.yaml file structure.

Using the metric from your code

The reference documentation for each metric type goes into detail about using each metric type from your code.

Note that all Glean metrics are write-only. Outside of unit tests, it is impossible to retrieve a value from the Glean SDK's database. While this may seem limiting, this is required to:

  • enforce the semantics of certain metric types (e.g. that Counters can only be incremented).
  • ensure the lifetime of the metric (when it is cleared or reset) is correctly handled.
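As an illustration of the write-only principle (a toy class, not the actual Glean API), a counter can expose only an incrementing operation publicly and reserve reads for test code:

```python
# Toy write-only counter: the public API can only increment.
class CounterMetric:
    def __init__(self):
        self._value = 0  # private storage; no public getter

    def add(self, amount=1):
        # Counters can only be incremented; a non-positive amount is
        # rejected (the real SDK would additionally record an error).
        if amount <= 0:
            return
        self._value += amount

    def test_get_value(self):
        # Mirrors testGetValue(): intended for unit tests only.
        return self._value


clicks = CounterMetric()
clicks.add()
clicks.add(2)
clicks.add(-5)  # rejected: decrementing is not part of the semantics
print(clicks.test_get_value())  # → 3
```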

Capitalization

One thing to note is that we try to adhere to the coding conventions of each language wherever possible, so the metric name and category in the metrics.yaml (which is in snake_case) may be changed to some other case convention, such as camelCase, when used from code.

Event extras and labels are never capitalized, no matter the target language.
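The renaming that the code generators apply can be sketched as a plain snake_case-to-camelCase conversion (an illustration of the rule, not the generators' actual implementation):

```python
# Convert a snake_case identifier from metrics.yaml to the camelCase
# form used by the generated Kotlin, Swift, and JavaScript bindings.
def snake_to_camel(name):
    first, *rest = name.split("_")
    return first + "".join(part.capitalize() for part in rest)


print(snake_to_camel("login_opened"))  # → loginOpened
print(snake_to_camel("compound"))      # → compound (unchanged)
```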

Category and metric names in the metrics.yaml are in snake_case, but given the Kotlin coding standards defined by ktlint, these identifiers must be camelCase in Kotlin. For example, the metric defined in the metrics.yaml as:

views:
  login_opened:
    ...

is accessible in Kotlin as:

import org.mozilla.yourApplication.GleanMetrics.Views
GleanMetrics.Views.loginOpened...

Category and metric names in the metrics.yaml are in snake_case, but given the Swift coding standards defined by swiftlint, these identifiers must be camelCase in Swift. For example, the metric defined in the metrics.yaml as:

views:
  login_opened:
    ...

is accessible in Swift as:

GleanMetrics.Views.loginOpened...

Category and metric names in the metrics.yaml are in snake_case, which matches the PEP8 standard, so no translation is needed for Python.

Given the Rust coding standards defined by clippy, identifiers should all be snake_case. This includes category names which in the metrics.yaml are dotted.snake_case:

compound.category:
  metric_name:
    ...

In Rust this becomes:

use firefox_on_glean::metrics;

metrics::compound_category::metric_name...

JavaScript identifiers are customarily camelCase. This requires transforming a metric defined in the metrics.yaml as:

compound.category:
  metric_name:
    ...

to a form useful in JS as:

import * as compoundCategory from "./path/to/generated/files/compoundCategory.js";

compoundCategory.metricName...

Firefox Desktop has Coding Style Guidelines for both C++ and JS. This results in, for a metric defined in the metrics.yaml as:

compound.category:
  metric_name:
    ...

an identifier that looks like:

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::compound_category::metric_name...

JavaScript

Glean.compoundCategory.metricName...

Unit testing Glean metrics

In order to support unit testing inside of client applications using the Glean SDK, a set of testing API functions has been included. The intent is to make the Glean SDKs easier to test 'out of the box' in any client application they may be used in. These functions expose a way to inspect and validate recorded metric values within the client application, but they are restricted to test code only. (Outside of a testing context, Glean APIs are otherwise write-only so that the SDK can enforce semantics and constraints about data.)

To encourage using the testing APIs, it is also possible to generate testing coverage reports to show which metrics in your project are tested.

Example of using the test API

In order to enable metrics testing APIs in each SDK, Glean must be reset and put in testing mode. For documentation on how to do that, refer to Initializing - Testing API.

Check out full examples of using the metric testing API on each Glean SDK. All examples omit the step of resetting Glean for tests to focus solely on metrics unit testing.

Kotlin

// Record a metric value with extra to validate against
GleanMetrics.BrowserEngagement.click.record(
    BrowserEngagementExtras(font = "Courier")
)

// Record more events without extras attached
BrowserEngagement.click.record()
BrowserEngagement.click.record()

// Retrieve a snapshot of the recorded events
val events = BrowserEngagement.click.testGetValue()!!

// Check if we collected all 3 events in the snapshot
assertEquals(3, events.size)

// Check extra key/value for first event in the list
assertEquals("Courier", events.elementAt(0).extra["font"])

Swift

// Record a metric value with extra to validate against
GleanMetrics.BrowserEngagement.click.record([.font: "Courier"])

// Record more events without extras attached
BrowserEngagement.click.record()
BrowserEngagement.click.record()

// Retrieve a snapshot of the recorded events
let events = BrowserEngagement.click.testGetValue()!

// Check if we collected all 3 events in the snapshot
XCTAssertEqual(3, events.count)

// Check extra key/value for first event in the list
XCTAssertEqual("Courier", events[0].extra?["font"])

Python

from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Record a metric value to validate against
metrics.url.visit.add(1)

# Check that the 'url.visit' metric has a recorded value
assert metrics.url.visit.test_get_value() is not None

# Check the recorded value
assert 1 == metrics.url.visit.test_get_value()

Generating testing coverage reports

Glean can generate coverage reports to track which metrics are tested in your unit test suite.

There are three steps to integrate it into your continuous integration workflow: recording coverage, post-processing the results, and uploading the results.

Recording coverage

Glean testing coverage is enabled by setting the GLEAN_TEST_COVERAGE environment variable to the name of a file to store results. It is good practice to set it to the absolute path to a file, since some testing harnesses (such as cargo test) may change the current working directory.

GLEAN_TEST_COVERAGE=$(realpath glean_coverage.txt) make test
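Conceptually, coverage recording simply appends the identifier of each metric touched during the test run to the file named by GLEAN_TEST_COVERAGE. A rough sketch of that behavior (a hypothetical helper, not the SDK's implementation):

```python
import os
import tempfile

def record_coverage(metric_id):
    # Append the metric identifier to the coverage file, if enabled.
    path = os.environ.get("GLEAN_TEST_COVERAGE")
    if path is None:
        return  # coverage recording is off
    with open(path, "a") as f:
        f.write(metric_id + "\n")


with tempfile.TemporaryDirectory() as tmp:
    coverage_file = os.path.join(tmp, "glean_coverage.txt")
    os.environ["GLEAN_TEST_COVERAGE"] = coverage_file
    record_coverage("toolbar.click")
    record_coverage("toolbar.double_click")
    with open(coverage_file) as f:
        lines = f.read().splitlines()
    del os.environ["GLEAN_TEST_COVERAGE"]

print(lines)  # → ['toolbar.click', 'toolbar.double_click']
```

Using an absolute path, as in the make invocation above, keeps all the writes landing in the same file even if the test harness changes the working directory.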

Post-processing the results

A post-processing step is required to convert the raw output in the file specified by GLEAN_TEST_COVERAGE into usable output for coverage reporting tools. Currently, the only coverage reporting tool supported is codecov.io.

This post-processor is available in the coverage subcommand in the glean_parser tool.

For some build systems, glean_parser is already installed for you by the build system integration at the following locations:

  • On Android/Gradle, $GRADLE_HOME/glean/bootstrap-4.5.11/Miniconda3/bin/glean_parser
  • On iOS, $PROJECT_ROOT/.venv/bin/glean_parser
  • For other systems, install glean_parser using pip install glean_parser

The glean_parser coverage command requires the following parameters:

  • -f: The output format to produce, for example codecovio to produce codecov.io's custom format.
  • -o: The path to the output file, for example codecov.json.
  • -c: The input raw coverage file. glean_coverage.txt in the example above.
  • A list of the metrics.yaml files in your repository.

For example, to produce output for codecov.io:

glean_parser coverage -f codecovio -o glean_coverage.json -c glean_coverage.txt app/metrics.yaml

In this example, the glean_coverage.json file is now ready for uploading to codecov.io.

Uploading coverage

If using codecov.io, the uploader doesn't send coverage results for YAML files by default. Pass the -X yaml option to the uploader to make sure they are included:

bash <(curl -s https://codecov.io/bash) -X yaml

Validating the collected data

It is worth investing time when instrumentation is added to the product to understand whether the data looks reasonable and expected, and to take action if it does not. It is important to highlight that a rigorous automated test suite for metrics is an important precondition for building confidence in newly collected data (especially business-critical data).

The following checklist could help guide this validation effort.

  1. Before releasing the product with the new data collection, make sure the data looks as expected by generating sample data on a local machine and inspecting it in the Glean Debug View (see the debugging facilities):

    a. Is the data showing up in the correct ping(s)?

    b. Does the metric report the expected data?

    c. If the same code path is exercised again, is the data expected to be submitted again? And is it?

  2. As users start adopting the version of the product with the new data collection (usually within a few days of release), the initial data coming in should be checked, to understand how the measurements are behaving in the wild:

    a. Does this organically-sent data satisfy the same quality expectations the manually-sent data did in Step 1?

    b. Is the metric showing up correctly in the Glean Dictionary?

    c. Is there any new error being reported for the new data points? If so, does this point to an edge case that should be documented and/or fixed in the code?

    d. As the first three or four days pass, distributions will converge towards their final shapes. Consider extreme values; are there a very high number of zero/minimum values when there shouldn't be, or values near what you would realistically expect to be the maximum (e.g. a timespan for a single day that is reporting close to 86,400 seconds)? In case of oddities in the data, how much of the product population is affected? Does this require changing the instrumentation or documenting?

How to annotate metrics without changing the source code?

Data practitioners who lack familiarity with YAML or product-specific development workflows can still document any discovered edge cases and anomalies by identifying the metric in the Glean Dictionary and adding commentary from the metric page.

  3. After enough data is collected from the product population, are the expectations from the previous points still met?
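The extreme-value checks from step 2 can be automated as a quick script over a sample of reported values (illustrative thresholds; tune them per metric):

```python
# Flag suspicious pile-ups at zero or near the theoretical maximum.
def check_distribution(values, maximum, threshold=0.25):
    n = len(values)
    at_zero = sum(1 for v in values if v == 0) / n
    near_max = sum(1 for v in values if v >= 0.95 * maximum) / n
    flags = []
    if at_zero > threshold:
        flags.append(f"{at_zero:.0%} of samples are zero")
    if near_max > threshold:
        flags.append(f"{near_max:.0%} of samples are near the maximum")
    return flags


# Timespans within a single day should rarely approach 86,400 seconds.
samples = [120, 340, 86350, 86399, 86400, 90, 86390, 86395]
flags = check_distribution(samples, maximum=86400)
print(flags)
```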

Does the product support multiple release channels?

In case of multiple distinct product populations, the above checklist should be ideally run against all of them. For example, in case of Firefox, the checklist should be run for the Nightly population first, then on the other channels as the collection moves across the release trains.

Error reporting

The Glean SDKs record the number of errors that occur when metrics are passed invalid data or are otherwise used incorrectly. This information is reported back in special labeled counter metrics in the glean.error category. Error metrics are included in the same pings as the metric that caused the error. Additionally, error metrics are always sent in the metrics ping.

The following categories of errors are recorded:

  • invalid_value: The metric value was invalid.
  • invalid_label: The label on a labeled metric was invalid.
  • invalid_state: The metric caught an invalid state while recording.
  • invalid_overflow: The metric value to be recorded overflows the metric-specific upper range.
  • invalid_type: The metric value is not of the expected type. This error type is only recorded by the Glean JavaScript SDK. This error may only happen in dynamically typed languages.

For example, if you had a string metric and passed it a string that was too long:

MyMetrics.stringMetric.set("this_string_is_longer_than_the_limit_for_string_metrics")

The following error metric counter would be incremented:

Glean.error.invalidOverflow["my_metrics.string_metric"].add(1)

Resulting in the following keys in the ping:

{
  "metrics": {
    "labeled_counter": {
      "glean.error.invalid_overflow": {
        "my_metrics.string_metric": 1
      }
    }
  }
}
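A toy model of that mechanism (the limit and storage shown here are made up for illustration; they are not the SDK's real values): when a recorded value violates a constraint, increment a labeled counter under the corresponding glean.error category, keyed by the offending metric's identifier.

```python
MAX_STRING_LENGTH = 50  # hypothetical limit, for illustration only

errors = {}  # e.g. {"invalid_overflow": {"my_metrics.string_metric": 1}}
store = {}   # stand-in for the metrics database

def set_string(metric_id, value):
    if len(value) > MAX_STRING_LENGTH:
        # Record glean.error.invalid_overflow for this metric...
        labels = errors.setdefault("invalid_overflow", {})
        labels[metric_id] = labels.get(metric_id, 0) + 1
        # ...and truncate the value rather than dropping it entirely.
        value = value[:MAX_STRING_LENGTH]
    store[metric_id] = value


set_string("my_metrics.string_metric",
           "this_string_is_longer_than_the_limit_for_string_metrics")
print(errors)  # → {'invalid_overflow': {'my_metrics.string_metric': 1}}
```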

If you have a debug build of the Glean SDK, details about the errors being recorded are included in the logs. This detailed information is not included in Glean pings.

The Glean JavaScript SDK provides a slightly different set of metrics and pings

If you are looking for the metrics collected by Glean.js, refer to the documentation over on the @mozilla/glean.js repository.

Metrics

This document enumerates the metrics collected by this project using the Glean SDK. This project may depend on other projects which also collect metrics. This means you might have to go searching through the dependency tree to get a full picture of everything collected by this project.

Pings

all-pings

These metrics are sent in every ping.

All Glean pings contain built-in metrics in the ping_info and client_info sections.

In addition to those built-in metrics, the following metrics are added to the ping:

  • glean.client.annotation.experimentation_id (string): An experimentation identifier derived and provided by the application for the purpose of experimentation enrollment. Data review: Bug 1848201. Expiration: never. Data sensitivity: 1.
  • glean.error.invalid_label (labeled_counter): Counts the number of times a metric was set with an invalid label. The labels are the category.name identifier of the metric. Data review: Bug 1499761. Expiration: never. Data sensitivity: 1.
  • glean.error.invalid_overflow (labeled_counter): Counts the number of times a metric was set to a value that overflowed. The labels are the category.name identifier of the metric. Data review: Bug 1591912. Expiration: never. Data sensitivity: 1.
  • glean.error.invalid_state (labeled_counter): Counts the number of times a timing metric was used incorrectly. The labels are the category.name identifier of the metric. Data review: Bug 1499761. Expiration: never. Data sensitivity: 1.
  • glean.error.invalid_value (labeled_counter): Counts the number of times a metric was set to an invalid value. The labels are the category.name identifier of the metric. Data review: Bug 1499761. Expiration: never. Data sensitivity: 1.
  • glean.restarted (event): Recorded when the Glean SDK is restarted. Only included in custom pings that record events. For more information, please consult the Custom Ping documentation. Data review: Bug 1716725. Expiration: never. Data sensitivity: 1.

baseline

This is a built-in ping that is assembled out of the box by the Glean SDK.

See the Glean SDK documentation for the baseline ping.

This ping is sent if empty.

This ping includes the client id.

Data reviews for this ping:

Bugs related to this ping:

Reasons this ping may be sent:

  • active: The ping was submitted when the application became active again, which includes when the application starts. In earlier versions, this was called foreground.

    *Note*: this ping will not contain the `glean.baseline.duration` metric.
    
  • dirty_startup: The ping was submitted at startup, because the application process was killed before the Glean SDK had the chance to generate this ping, before becoming inactive, in the last session.

    *Note*: this ping will not contain the `glean.baseline.duration` metric.
    
  • inactive: The ping was submitted when becoming inactive. In earlier versions, this was called background.

All Glean pings contain built-in metrics in the ping_info and client_info sections.

In addition to those built-in metrics, the following metrics are added to the ping:

  • glean.baseline.duration (timespan): The duration of the last foreground session. Data review: Bug 1512938. Expiration: never. Data sensitivity: 1, 2.
  • glean.validation.pings_submitted (labeled_counter): A count of the pings submitted, by ping type. This metric appears in both the metrics and baseline pings. On the metrics ping, the counts include the number of pings sent since the last metrics ping (including the last metrics ping); on the baseline ping, the counts include the number of pings sent since the last baseline ping (including the last baseline ping). Data review: Bug 1586764. Expiration: never. Data sensitivity: 1.

deletion-request

This is a built-in ping that is assembled out of the box by the Glean SDK.

See the Glean SDK documentation for the deletion-request ping.

This ping is sent if empty.

This ping includes the client id.

Data reviews for this ping:

Bugs related to this ping:

Reasons this ping may be sent:

  • at_init: The ping was submitted at startup. Glean discovered that between the last time it was run and this time, upload of data has been disabled.

  • set_upload_enabled: The ping was submitted between Glean init and Glean shutdown. Glean was told after init but before shutdown that upload has changed from enabled to disabled.

All Glean pings contain built-in metrics in the ping_info and client_info sections.

This ping contains no metrics.

metrics

This is a built-in ping that is assembled out of the box by the Glean SDK.

See the Glean SDK documentation for the metrics ping.

This ping includes the client id.

Data reviews for this ping:

Bugs related to this ping:

Reasons this ping may be sent:

  • overdue: The last ping wasn't submitted on the current calendar day, but it's after 4am, so this ping is submitted immediately.

  • reschedule: A ping was just submitted. This ping was rescheduled for the next calendar day at 4am.

  • today: The last ping wasn't submitted on the current calendar day, but it is still before 4am, so schedule to send this ping on the current calendar day at 4am.

  • tomorrow: The last ping was already submitted on the current calendar day, so schedule this ping for the next calendar day at 4am.

  • upgrade: This ping was submitted at startup because the application was just upgraded.

All Glean pings contain built-in metrics in the ping_info and client_info sections.

In addition to those built-in metrics, the following metrics are added to the ping:

  • glean.database.size (memory_distribution): The size of the database file at startup. Data review: Bug 1656589. Expiration: never. Data sensitivity: 1.
  • glean.error.io (counter): The number of times we encountered an IO error when writing a pending ping to disk. Data review: Bug 1686233. Expiration: never. Data sensitivity: 1.
  • glean.error.preinit_tasks_overflow (counter): The number of tasks that overflowed the pre-initialization buffer. Only sent if the buffer ever overflows. In Version 0 this reported the total number of tasks enqueued. Data review: Bug 1609482. Expiration: never. Data sensitivity: 1.
  • glean.upload.deleted_pings_after_quota_hit (counter): The number of pings deleted after the quota for the size of the pending pings directory or number of files is hit. Since quota is only calculated for the pending pings directory, and deletion-request pings live in a different directory, deletion-request pings are never deleted. Data review: Bug 1601550. Expiration: never. Data sensitivity: 1.
  • glean.upload.discarded_exceeding_pings_size (memory_distribution): The size of pings that exceeded the maximum ping size allowed for upload. Data review: Bug 1597761. Expiration: never. Data sensitivity: 1.
  • glean.upload.in_flight_pings_dropped (counter): How many pings were dropped because we found them already in-flight. Data review: Bug 1816401. Expiration: never. Data sensitivity: 1.
  • glean.upload.missing_send_ids (counter): How many ping upload responses were not recorded as a success or failure (in glean.upload.send_success or glean.upload.send_failure, respectively) due to an inconsistency in our internal bookkeeping. Data review: Bug 1816400. Expiration: never. Data sensitivity: 1.
  • glean.upload.pending_pings (counter): The total number of pending pings at startup. This does not include deletion-request pings. Data review: Bug 1665041. Expiration: never. Data sensitivity: 1.
  • glean.upload.pending_pings_directory_size (memory_distribution): The size of the pending pings directory upon initialization of Glean. This does not include the size of the deletion-request pings directory. Data review: Bug 1601550. Expiration: never. Data sensitivity: 1.
  • glean.upload.ping_upload_failure (labeled_counter): Counts the number of ping upload failures, by type of failure. This includes failures for all ping types, though the counts appear in the next successfully sent metrics ping. Data review: Bug 1589124. Extras: status_code_4xx, status_code_5xx, status_code_unknown, unrecoverable, recoverable. Expiration: never. Data sensitivity: 1.
  • glean.upload.send_failure (timing_distribution): Time needed for a failed send of a ping to the servers and getting a reply back. Data review: Bug 1814592. Expiration: never. Data sensitivity: 1.
  • glean.upload.send_success (timing_distribution): Time needed for a successful send of a ping to the servers and getting a reply back. Data review: Bug 1814592. Expiration: never. Data sensitivity: 1.
  • glean.validation.foreground_count (counter): On mobile, the number of times the application went to foreground. Data review: Bug 1683707. Expiration: never. Data sensitivity: 1.
  • glean.validation.pings_submitted (labeled_counter): A count of the pings submitted, by ping type. This metric appears in both the metrics and baseline pings. On the metrics ping, the counts include the number of pings sent since the last metrics ping (including the last metrics ping); on the baseline ping, the counts include the number of pings sent since the last baseline ping (including the last baseline ping). Data review: Bug 1586764. Expiration: never. Data sensitivity: 1.
  • glean.validation.shutdown_dispatcher_wait (timing_distribution): Time waited for the dispatcher to unblock during shutdown. Most samples are expected to be below the 10s timeout used. Data review: Bug 1828066. Expiration: never. Data sensitivity: 1.
  • glean.validation.shutdown_wait (timing_distribution): Time waited for the uploader at shutdown. Data review: Bug 1814592. Expiration: never. Data sensitivity: 1.

Data categories are defined here.

Data Control Plane (a.k.a. Server Knobs)

Glean provides a Data Control Plane through which metrics can be enabled, disabled or throttled through a Nimbus rollout or experiment.

Products can use this capability to control "data-traffic", similar to how a network control plane controls "network-traffic".

This provides the ability to do the following:

  • Allow runtime changes to data collection without needing to land code and ride release trains.
  • Eliminate the need for manual creation and maintenance of feature flags specific to data collection.
  • Sampling of measurements from a subset of the population so that we do not collect or ingest more data than is necessary from high traffic areas of an application instrumented with Glean metrics.
  • Operational safety through being able to react to high-volume or unwanted data.
  • Visibility into sampling and sampling rates for remotely configured metrics.

For information on controlling pings with Server Knobs, see the metrics documentation for Server Knobs - Pings.

Contents

Example Scenarios

Scenario 1

Landing a metric that is disabled by default and then enabling it for some segment of the population

This scenario can be expected in cases such as when instrumenting high-traffic areas of the browser. These are instrumentations that would normally generate a lot of data because they are recorded frequently for every user.

In this case, the telemetry that has the potential to be high-volume would land with the metric's disabled property set to true. This ensures that it does not record data by default.

An example metric definition with this property set would look something like this:

urlbar:
  impression:
    disabled: true
    type: event
    description: Recorded when urlbar results are shown to the user.
    ...

Once the instrumentation is landed, it can now be enabled for a subset of the population through a Nimbus rollout or experiment without further code changes.

Through Nimbus, we have the ability to sample the population by setting the audience size to a certain percentage of the eligible population. Nimbus also provides the ability to target clients based on the available targeting parameters for a particular application (for instance, Firefox Desktop’s available targeting parameters).

This can be used to slowly roll out instrumentations to the population in order to validate the data we are collecting before measuring the entire population, potentially avoiding the cost and overhead of collecting data that isn't useful.

Note: if planning to use this feature for permanently keeping a data collection on for the whole population, please consider enabling the metrics by default by setting disabled: false in the metrics.yaml. Then you can "down-sample" if necessary (see Scenario 2 below).

Scenario 2

Landing a metric that is enabled by default and then disabling it for a segment of the population

This is effectively the inverse of Scenario 1, instead of landing the metrics disabled by default, they are landed as enabled so that they are normally collecting data from the entire population.

Similar to the first scenario, a Nimbus rollout or experiment can then be launched to configure the metrics as disabled for a subset of the population.

This provides a mechanism by which we can disable the sending of telemetry from an audience that we do not wish to collect telemetry data from.

For instance, this could be useful in tuning out telemetry data coming from automation sources or bad actors. In addition, it provides a way to disable broken, incorrect, or unexpectedly noisy instrumentations as an operational safety mechanism to directly control the volume of the data we collect and ingest.

Product Integration

In order to enable sharing of this functionality between multiple Nimbus Features, the implementation is not defined as part of the stand-alone Glean feature defined in the Nimbus Feature Manifest, but instead is intended to be added as a feature variable to other Nimbus Feature definitions for them to make use of.

Desktop Feature Integration

In order to make use of the remote metric configuration in a Firefox Desktop component, there are two available options.

Integration Option 1:

Glean provides a general Nimbus feature that can be used for configuration of metrics named glean. Simply select the glean feature along with your Nimbus feature from the Experimenter UI when configuring the experiment or rollout (see https://experimenter.info for more information on multi-feature experiments).

The glean Nimbus feature requires the gleanMetricConfiguration variable to provide the required metric configuration. The format of the configuration is a map from the fully qualified metric name (category.name) to a boolean value representing whether the metric is enabled. If a metric is omitted from this map, it defaults to the value found in the metrics.yaml. An example configuration for the glean feature can be found on the Experimenter Configuration page.
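The lookup this implies can be sketched in a few lines (a conceptual model, not SDK code): the remote configuration map takes precedence, with a fallback to the metric's metrics.yaml default, where metrics are enabled unless defined with disabled: true.

```python
# Decide whether a metric is enabled, given a remote configuration map
# and the defaults from metrics.yaml.
def is_metric_enabled(identifier, remote_config, yaml_defaults):
    if identifier in remote_config:
        return remote_config[identifier]  # remote configuration wins
    return yaml_defaults.get(identifier, True)  # enabled unless disabled


remote_config = {"urlbar.impression": True}   # e.g. from a Nimbus rollout
yaml_defaults = {"urlbar.impression": False}  # landed with disabled: true

print(is_metric_enabled("urlbar.impression", remote_config, yaml_defaults))
# → True: the rollout enabled the metric
print(is_metric_enabled("toolbar.click", remote_config, yaml_defaults))
# → True: omitted metrics fall back to their metrics.yaml default
```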

Integration Option 2 (Advanced use):

A second option, which can give you more control over the metric configuration (especially if more than one experiment or rollout is currently using the glean feature from Option 1 above), is to add a Feature Variable to represent the Glean metric configuration in your own feature. This can be accomplished by modifying the FeatureManifest.yaml file, adding a variable through which to pass metric configurations. Glean will handle merging this configuration with other metric configurations for you (see Advanced Topics for more info on this). An example feature manifest entry would look like the following:

variables:
  # ... definitions of other feature variables
  gleanMetricConfiguration:
    type: json
    description: >-
      Glean metric configuration

This definition allows for configuration to be set in a Nimbus rollout or experiment and fetched by the client to be applied based on the enrollment. Once the Feature Variable has been defined, the final step is to fetch the configuration from Nimbus and supply it to the Glean API. This can be done during initialization and again any time afterwards, such as in response to receiving an updated configuration from Nimbus. Glean will merge this configuration with any other active configurations and enable or disable the metrics accordingly. An example call to set a configuration through your Nimbus Feature could look like this:

// Fetch the Glean metric configuration from your feature's Nimbus variable
let cfg = lazy.NimbusFeatures.yourNimbusFeatureName.getVariable(
  "gleanMetricConfiguration"
);
// Apply the configuration through the Glean API
Services.fog.setMetricsFeatureConfig(JSON.stringify(cfg));

It is also recommended to register to listen for updates for the Nimbus Feature and apply new configurations as soon as possible. The following example illustrates how a Nimbus Feature might register and update the metric configuration whenever there is a change to the Nimbus configuration:

// Register to listen for the `onUpdate` event from Nimbus
lazy.NimbusFeatures.yourNimbusFeatureName.onUpdate(() => {
  // Fetch the Glean metric configuration from your feature's Nimbus variable
  let cfg = lazy.NimbusFeatures.yourNimbusFeatureName.getVariable(
    "gleanMetricConfiguration"
  );
  // Apply the configuration through the Glean API
  Services.fog.setMetricsFeatureConfig(JSON.stringify(cfg));
});

Mobile Feature Integration

Integration Option 1:

Glean provides a general Nimbus feature that can be used for configuration of metrics named glean. Simply select the glean feature along with your Nimbus feature from the Experimenter UI when configuring the experiment or rollout (see https://experimenter.info for more information on multi-feature experiments).

The glean Nimbus feature requires the gleanMetricConfiguration variable to provide the required metric configuration. The format of the configuration is a map from the fully qualified metric name (category.name) to a boolean value representing whether the metric is enabled. If a metric is omitted from this map, it defaults to the value found in the metrics.yaml. An example configuration for the glean feature can be found on the Experimenter Configuration page.

Integration Option 2 (Advanced use):

A second option, which gives you more control over the metric configuration, especially if more than one experiment or rollout is currently using the glean feature from Option 1 above, is to add a Feature Variable representing the Glean metric configuration to your own feature. This can be accomplished by modifying the Nimbus Feature Manifest file to add a variable through which to pass metric configurations. Glean handles merging this configuration with other metric configurations for you (see Advanced Topics for more information). An example feature manifest entry would look like the following:

features:
  homescreen:
    description: |
      The homescreen that the user goes to when they press home or new
      tab.
    variables:
      ... // Other homescreen variables
      gleanMetricConfiguration:
        description: Glean metric configuration
        type: String
        default: "{}"

Once the Feature Variable has been defined, the final step is to fetch the configuration from Nimbus and supply it to the Glean API. This can be done during initialization and again any time afterwards, such as in response to receiving an updated configuration from Nimbus. Only the latest configuration provided will be applied and any previously configured metrics that are omitted from the new configuration will not be changed. An example call to set a configuration from the “homescreen” Nimbus Feature could look like this:

Glean.applyServerKnobsConfig(FxNimbus.features.homescreen.value().gleanMetricConfiguration)

Since mobile experiments only update on initialization of the application, it isn't necessary to register to listen for notifications for experiment updates.

Experimenter Configuration

The structure of this configuration is a key-value collection with the full metric identification of the Glean metric serving as the key in the format <metric_category.metric_name>.

The values of the key-value pair are booleans which represent whether the metric is enabled (true) or not (false).

In the example below gleanMetricConfiguration is the name of the variable defined in the Nimbus feature.

This configuration would be what is entered into the branch configuration setup in Experimenter when defining an experiment or rollout.

Example Configuration:

{
  "gleanMetricConfiguration": {
    "metrics_enabled": {
      "urlbar.abandonment": true,
      "urlbar.engagement": true,
      "urlbar.impression": true
    }
  }
}

Advanced Topics

Merging of Configurations from Multiple Features

Since each feature defined as a Nimbus Feature can independently provide a Glean configuration, these must be merged together into a cohesive configuration for the entire set of metrics collected by Glean.

Configurations will be merged together along with the default values in the metrics.yaml file and applied to the appropriate metrics. Only the latest configuration provided for a given metric will be applied and any previously configured metrics that are omitted from the new configuration will not be changed.
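The merge semantics described above can be sketched in Python. This is an illustrative model, not the actual Glean implementation; the metric names, defaults, and helper functions (`apply_config`, `is_enabled`) are all hypothetical.

```python
# Illustrative model of Server Knobs merging (not the Glean
# implementation). Metric names and defaults here are hypothetical.

# Defaults come from each metric's definition in metrics.yaml.
defaults = {"A.event": False, "B.event": False, "C.event": False}

# The merged configuration, built up as features supply theirs.
merged = {}

def apply_config(metrics_enabled):
    # Only the metrics named in the new configuration change state;
    # previously configured metrics that are omitted stay untouched.
    merged.update(metrics_enabled)

def is_enabled(metric):
    # A supplied configuration wins over the metrics.yaml default.
    return merged.get(metric, defaults[metric])

apply_config({"A.event": True})   # e.g. from a rollout for Feature A
apply_config({"B.event": True})   # e.g. from a rollout for Feature B

assert is_enabled("A.event") is True
assert is_enabled("B.event") is True
assert is_enabled("C.event") is False  # no config, default used
```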

Example

Imagine a situation where we have 3 features (A, B, C). Each of these features has an event (A.event, B.event, C.event) and these events all default to disabled from their definition in the metrics.yaml file.

Let’s walk through an example of changing configurations for these features that illustrates how the merging will work:

Initial State

This is what the initial state of the events looks like with no configurations applied. All of the events are falling back to the defaults from the metrics.yaml file. This is the starting point for Scenario 1 in the Example Scenarios.

  • Feature A
    • No config, default used
    • A.event is disabled
  • Feature B
    • No config, default used
    • B.event is disabled
  • Feature C
    • No config, default used
    • C.event is disabled

Second State

In this state, let’s create two rollouts which provide configurations for features A and B that enable the events associated with each. The first rollout selects Feature A in Experimenter and provides the indicated configuration on the Branch setup page. The second rollout does the same thing, only for Feature B.

  • Feature A
    • Configuration:
      •   {
              // Other variable configs
        
              // Glean metric config
              "gleanMetricConfiguration": {
                "metrics_enabled": {
                  "A.event": true
                }
              }
          }
        
    • A.event is enabled
  • Feature B
    • Configuration:
      •   {
              // Other variable configs
        
              // Glean metric config
              "gleanMetricConfiguration": {
                "metrics_enabled": {
                  "B.event": true
                }
              }
          }
        
    • B.event is enabled
  • Feature C
    • No config, default used
    • C.event is disabled

As you can see, the A.event and B.event are enabled by the configurations while C.event remains disabled because there is no rollout for it.

Third State

In this state, let’s end the rollout for Feature B, start a rollout for Feature C, and launch an experiment for Feature A. Because experiments take precedence over rollouts, this should supersede our configuration from the rollout for Feature A.

  • Feature A
    • Configuration:
      •   {
              // Other variable configs
        
              // Glean metric config
              "gleanMetricConfiguration": {
                "metrics_enabled": {
                  "A.event": false
                }
              }
          }
        
    • A.event is disabled
  • Feature B
    • No config, default used
    • B.event is disabled
  • Feature C
    • Configuration:
      •   {
              // Other variable configs
        
              // Glean metric config
              "gleanMetricConfiguration": {
                "metrics_enabled": {
                  "C.event": true
                }
              }
          }
        
    • C.event is enabled

After the new changes to the currently running rollouts and experiments, this client is now enrolled in the experiment for Feature A and the configuration is suppressing the A.event. Feature B is no longer sending B.event because it is reverting back to the defaults from the metrics.yaml file. And finally, Feature C is sending C.event with the rollout configuration applied.

Fourth State

Finally, in this state, let’s end the rollout for Feature C along with the experiment for Feature A. This stops the sending of C.event and resumes the sending of A.event, since the rollout configuration for Feature A applies again once the experiment configuration is no longer available.

  • Feature A
    • Configuration
      •  {
             // Other variable configs
        
             // Glean metric config
             "gleanMetricConfiguration": {
               "metrics_enabled": {
                 "A.event": true
               }
             }
         }
        
    • A.event is enabled
  • Feature B
    • No config, default used
    • B.event is disabled
  • Feature C
    • No config, default used
    • C.event is disabled

After these changes, this client is no longer enrolled in the experiment for Feature A, so the rollout configuration applies again and A.event is enabled. Features B and C have no active configurations, so B.event and C.event revert to the disabled defaults from the metrics.yaml file.

In each case, Glean only updates the configuration associated with the feature that provided it. Nimbus’ feature exclusion would prevent a client from being enrolled in multiple rollouts or experiments for a given feature, so no more than one configuration would be applied per feature for a given client.

Merging Caveats

Because there is currently nothing that ties a particular Nimbus Feature to a set of metrics, care must be taken to avoid feature overlap over a particular metric. If two different features supply conflicting configurations for the same metric, then whether or not the metric is enabled will likely come down to a race condition of whoever set the configuration last.
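The caveat can be made concrete with a tiny hypothetical illustration: when two features supply conflicting values for the same metric, whichever configuration is applied last determines the metric's state.

```python
# Hypothetical illustration of the caveat (not Glean code): two
# features both configure "shared.metric" and the last write wins.
merged_config = {}

def apply_feature_config(metrics_enabled):
    # Each application overwrites earlier values for any metric
    # the new configuration mentions.
    merged_config.update(metrics_enabled)

apply_feature_config({"shared.metric": True})   # set by feature X
apply_feature_config({"shared.metric": False})  # set later by feature Y

# Feature Y's value wins purely because it arrived last.
assert merged_config["shared.metric"] is False
```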

Frequently Asked Questions

  • How can I tell if a given client id has the metric X on?

Once we have established the functionality behind the data control plane, a dashboard for monitoring this will be provided. Details are to be determined.

  • Why isn't some client id reporting the metric that should be enabled for all the clients for that channel? (e.g. Some fraction of population may get stuck on “default” config)

Nimbus must be able to both reach the audience and supply a valid configuration to it. For some outlier clients this doesn't work, so they may be "unreachable" at times.

Pings

A ping is a bundle of related metrics, gathered in a payload to be transmitted. The ping payload is encoded in JSON format and contains one or more of the common sections with shared information.

If data collection is enabled, the chosen Glean SDK may provide a set of built-in pings that are assembled out of the box without any developer intervention.

Table of contents

Payload structure

Every ping payload has the following keys at the top-level:

  • The ping_info section contains core metadata that is included in every ping that doesn't set the metadata.include_info_sections property to false.

  • The client_info section contains information that identifies the client. It is included in every ping that doesn't set the metadata.include_info_sections property to false. When included, it contains a persistent client identifier client_id, except when the include_client_id property is set to false.

The following keys are only present if any metrics or events were recorded for the given ping:

  • The metrics section contains the submitted values for all metric types except for events. It has keys for each of the metric types, under which is data for each metric.

  • The events section contains the events recorded in the ping.

See the payload documentation for more details on each metric type in the metrics and events sections.
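As a concrete illustration of that top-level shape, the skeleton of a ping payload can be assembled by hand. All field values below are placeholders for illustration, not real data.

```python
import json

# Hand-assembled sketch of the top-level ping payload shape; every
# value here is a placeholder.
payload = {
    "ping_info": {
        "seq": 0,
        "start_time": "2019-03-29T09:50-04:00",
        "end_time": "2019-03-29T09:53-04:00",
    },
    "client_info": {
        "telemetry_sdk_build": "0.49.0",
        "client_id": "35dab852-74db-43f4-8aa0-88884211e545",
    },
    # Present only if metrics were recorded for this ping; keyed by
    # metric type, then by metric identifier.
    "metrics": {
        "string": {"search.default.name": "DuckDuckGo"},
    },
    # Present only if events were recorded for this ping.
    "events": [
        {"timestamp": 0, "category": "examples", "name": "event_example"},
    ],
}

# The payload is encoded as JSON for transmission.
encoded = json.dumps(payload)
assert "ping_info" in json.loads(encoded)
```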

The ping_info section

Metadata about the ping itself. This section is included in every ping that doesn't set the metadata.include_info_sections property to false.

The following fields are included in the ping_info section. Optional fields are marked accordingly.

seq

Type: Counter, Lifetime: User

A running counter of the number of times pings of this type have been sent.

start_time

Type: Datetime, Lifetime: User

The time of the start of collection of the data in the ping, in local time and with millisecond precision (by default), including timezone information. (Note: Custom pings can opt-out of precise timestamps and use minute precision.)

end_time

Type: Datetime, Lifetime: Ping

The time of the end of collection of the data in the ping, in local time and with millisecond precision (by default), including timezone information. This is also the time this ping was generated and is likely well before ping transmission time. (Note: Custom pings can opt-out of precise timestamps and use minute precision.)

reason (optional)

The reason the ping was submitted. The specific set of values and their meanings are defined for each ping in the reasons field in the pings.yaml file.

experiments (optional)

A dictionary of active experiments.

This object contains experiment annotations keyed by the experiment id. Each annotation contains the experiment branch the client is enrolled in and may contain a string to string map with additional data in the extra key.

Both the id and branch are truncated to 30 characters. See Using the Experiments API on how to record experiments data.

{
  "<id>": {
    "branch": "branch-id",
    "extra": {
      "some-key": "a-value"
    }
  }
}
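The 30-character truncation of the experiment id and branch can be sketched as follows; `annotate_experiment` is a hypothetical helper for illustration, not a Glean API.

```python
# Sketch of the 30-character truncation rule for experiment
# annotations; not the actual SDK code.
MAX_LEN = 30

def annotate_experiment(experiments, exp_id, branch, extra=None):
    # Both the id and the branch are truncated to 30 characters.
    entry = {"branch": branch[:MAX_LEN]}
    if extra is not None:
        entry["extra"] = extra
    experiments[exp_id[:MAX_LEN]] = entry

exps = {}
annotate_experiment(exps, "a" * 40, "very-long-branch-name-that-keeps-going")

(only_id,) = exps.keys()
assert len(only_id) == 30
assert len(exps[only_id]["branch"]) == 30
```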

The client_info section

A limited amount of metrics that are generally useful across products. The data is provided by the embedding application or automatically fetched by the Glean SDK. It is collected at initialization time and sent in every ping afterwards. For historical reasons it contains metrics that are only useful on a certain platform.

This section is included in every ping that doesn't set the metadata.include_info_sections property to false.

Additional metrics require a proposal

Adding new metrics maintained by the Glean SDKs team will require a full proposal and details on why that value is useful across multiple platforms and products and needs Glean SDKs team ownership.

The Glean SDKs are not taking ownership of new metrics that are platform- or product-specific.

The following fields are included in the client_info section. Optional fields are marked accordingly.

app_build

Type: String, Lifetime: Application

The build identifier generated by the CI system (e.g. "1234/A"). If the value was not provided through configuration, this metric gets set to Unknown.

app_channel (optional)

Type: String, Lifetime: Application

The product-provided release channel (e.g. "beta").

app_display_version

Type: String, Lifetime: Application

The user-visible version string (e.g. "1.0.3"). The meaning of the string (e.g. whether semver or a git hash) is application-specific. If the value was not provided through configuration, this metric gets set to Unknown.

build_date (optional)

Type: Datetime, Lifetime: Application

architecture

Type: String, Lifetime: Application

The architecture of the device (e.g. "arm", "x86").

client_id (optional)

Type: UUID, Lifetime: User

A UUID identifying a profile and allowing user-oriented correlation of data.

device_manufacturer (optional)

Type: String, Lifetime: Application

The manufacturer of the device the application is running on. Not set if the device manufacturer can't be determined (e.g. on Desktop).

device_model (optional)

Type: String, Lifetime: Application

The model of the device the application is running on. On Android, this is Build.MODEL, the user-visible marketing name, like "Pixel 2 XL". Not set if the device model can't be determined (e.g. on Desktop).

first_run_date

Type: Datetime, Lifetime: User

The date of the first run of the application, in local time and with day precision, including timezone information.

os

Type: String, Lifetime: Application

The name of the operating system (e.g. "Linux", "Android", "iOS").

os_version

Type: String, Lifetime: Application

The user-visible version of the operating system (e.g. "1.2.3"). If the version detection fails, this metric gets set to Unknown.

android_sdk_version (optional)

Type: String, Lifetime: Application

The Android specific SDK version of the software running on this hardware device (e.g. "23").

windows_build_number (optional)

Type: Quantity, Lifetime: Application

The optional Windows build number, reported by Windows (e.g. 22000) and not set for other platforms.

telemetry_sdk_build

Type: String, Lifetime: Application

The version of the Glean SDK.

locale (optional)

Type: String, Lifetime: Application

The locale of the application during initialization (e.g. "es-ES"). If the locale can't be determined on the system, the value is "und", to indicate "undetermined".

Ping submission

The pings that the Glean SDKs generate are submitted to the Mozilla servers at specific paths, in order to provide additional metadata without the need to unpack the ping payload.

URL

A typical submission URL looks like

"<server-address>/submit/<application-id>/<doc-type>/<glean-schema-version>/<document-id>"

where:

  • <server-address>: the address of the server that receives the pings;
  • <application-id>: a unique application id, automatically detected by the Glean SDK; on Android this is the value returned by Context.getPackageName();
  • <doc-type>: the name of the ping; this can be one of the pings available out of the box with the Glean SDK, or a custom ping;
  • <glean-schema-version>: the version of the Glean ping schema;
  • <document-id>: a unique identifier for this ping.
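Putting the pieces together, a submission URL can be assembled from those components. The server address and application id below are placeholder values used only for illustration.

```python
# Assemble a submission URL from the components described above.
# Server address and application id are illustrative placeholders.
def submission_path(application_id, doc_type, schema_version, document_id):
    return f"/submit/{application_id}/{doc_type}/{schema_version}/{document_id}"

server_address = "https://incoming.example.org"  # placeholder
url = server_address + submission_path(
    "org-mozilla-myapp",  # placeholder application id
    "baseline",           # doc-type: name of the ping
    "1",                  # Glean ping schema version
    "35dab852-74db-43f4-8aa0-88884211e545",  # unique document id
)
assert url.endswith(
    "/submit/org-mozilla-myapp/baseline/1/"
    "35dab852-74db-43f4-8aa0-88884211e545"
)
```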

Limitations

To keep resource usage in check, the Glean SDK enforces some limitations on ping uploading and ping storage.

Rate limiting

Only up to 15 ping submissions every 60 seconds are allowed.

For the JavaScript SDK that limit is higher and up to 40 ping submissions every 60 seconds are allowed.
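The throttling rule can be modeled with a small sliding-window counter. This is a toy model of the stated behavior, shown in Python; the SDKs' internal bookkeeping differs.

```python
import collections

# Toy model of ping-rate limiting: at most `max_pings` submissions
# per `window` seconds; further submissions are held back.
class PingThrottler:
    def __init__(self, max_pings=15, window=60):
        self.max_pings = max_pings
        self.window = window
        self.sent = collections.deque()  # timestamps of recent submissions

    def try_submit(self, now):
        # Drop timestamps that fell out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.max_pings:
            self.sent.append(now)
            return True
        return False  # throttled; retry in a later window

throttler = PingThrottler()
results = [throttler.try_submit(now=0) for _ in range(20)]
assert results.count(True) == 15          # first 15 go through
assert throttler.try_submit(now=60) is True  # window elapsed
```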

Request body size limiting

The body of a ping request may be up to 1MB (after compression). Pings that exceed this size are discarded and don't get uploaded. Size and number of discarded pings are recorded on the internal Glean metric glean.upload.discarded_exceeding_pings_size.

Storage quota

Pending pings are stored on disk. Storage is scanned every time Glean is initialized and upon scanning Glean checks its size. If it exceeds a size of 10MB or 250 pending pings, pings are deleted to get the storage back to an accepted size. Pings are deleted oldest first, until the storage size is below the quota.

The number of deleted pings due to exceeding storage quota is recorded on the metric glean.upload.deleted_pings_after_quota_hit and the size of the pending pings directory is recorded (regardless on whether quota has been reached) on the metric glean.upload.pending_pings_directory_size.

Deletion request pings are not subject to this limitation and never get deleted.
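The pruning rule can be sketched as follows. This is a simplified model, not the Glean implementation, but it follows the stated policy: delete oldest first until both quotas are met, never touching deletion-request pings.

```python
# Toy sketch of pending-ping storage pruning (not Glean code).
MAX_SIZE = 10 * 1024 * 1024  # 10MB storage quota
MAX_COUNT = 250              # pending ping count quota

def enforce_quota(pending):
    """`pending` is a list of (doc_type, size_bytes), oldest first.
    Mutates the list in place and returns the number of deletions."""
    deleted = 0

    def over_quota():
        return (sum(size for _, size in pending) > MAX_SIZE
                or len(pending) > MAX_COUNT)

    while over_quota():
        # Find the oldest ping that is not a deletion-request.
        victim = next((i for i, (doc_type, _) in enumerate(pending)
                       if doc_type != "deletion-request"), None)
        if victim is None:
            break  # only deletion-request pings left; keep them all
        del pending[victim]
        deleted += 1
    return deleted

pings = [("deletion-request", 1_000)] + [("metrics", 100_000)] * 300
assert enforce_quota(pings) == 196           # oldest metrics pings dropped
assert pings[0] == ("deletion-request", 1_000)  # exempt ping survives
```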

Submitted headers

A pre-defined set of headers is additionally sent along with the submitted ping.

Content-Type

Describes the data sent to the server. Value is always application/json; charset=utf-8.

Date

Submission date/time in GMT/UTC+0 offset, e.g. Mon, 23 Jan 2019 10:10:10 GMT+00:00.

User-Agent (deprecated)

Up to Glean v44.0.0 and Glean.js v0.13.0 this contained the Glean SDK version and platform information. Newer Glean SDKs do not overwrite this header. See X-Telemetry-Agent for details.

Clients might still send it, for example, when sending pings from browsers it will contain the characteristic browser UA string.

This header is parsed by the Glean pipeline and can be queried at analysis time through the metadata.user_agent.* fields in the ping tables.

X-Telemetry-Agent

The Glean SDK version and platform this ping is sent from. Useful for debugging purposes when pings are sent to the error stream, as it describes the application and the Glean SDK used for sending the ping.

It looks like Glean/40.0.0 (Kotlin on Android), where 40.0.0 is the Glean Kotlin SDK version number and Kotlin on Android is the name of the language used by the SDK that sent the request plus the name of the platform it is running on.

X-Debug-Id (optional)

Debug header attached to Glean pings by using the debug APIs, e.g. test-tag.

When this header is present, the ping is redirected to the Glean Debug View.

X-Source-Tags (optional)

A list of tags to associate with the ping, useful for clustering pings at analysis time, for example to distinguish data generated in CI from other data (e.g. automation, perf).

This header is attached to Glean pings by using the debug APIs.

Custom pings

Applications can define metrics that are sent in custom pings. Unlike the built-in pings, custom pings are sent explicitly by the application.

This is useful when the scheduling of the built-in pings (metrics, baseline and events) is not appropriate for your data. Since the timing of the submission of custom pings is handled by the application, the measurement window is under the application's control.

This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times when a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, it is much harder to do that kind of reasoning with much certainty.

Defining a custom ping

Custom pings must be defined in a pings.yaml file, placed in the same directory alongside your app's metrics.yaml file.

For example, to define a custom ping called search specifically for search information:

$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

search:
  description: >
    A ping to record search data.
  metadata:
    tags:
      - Search
  include_client_id: false
  notification_emails:
    - CHANGE-ME@example.com
  bugs:
    - http://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review

Tags are an optional feature you can use to provide an additional layer of categorization to pings. Any tags specified in the metadata section of a ping must have a corresponding entry in a tags YAML registry for your project.

Refer to the pings YAML registry format for a full reference on the pings.yaml file structure.

Sending metrics in a custom ping

To send a metric on a custom ping, you add the custom ping's name to the send_in_pings parameter in the metrics.yaml file.

Ping metadata must be loaded before sending!

After defining a custom ping, before it can be used for sending data, its metadata must be loaded into your application or library.

For example, to define a new metric to record the default search engine, which is sent in a custom ping called search, put search in the send_in_pings parameter. Note that it is an error to specify a ping in send_in_pings that does not also have an entry in pings.yaml.

search.default:
  name:
    type: string
    description: >
      The name of the default search engine.
    send_in_pings:
      - search

If this metric should also be sent in the default ping for the given metric type, you can add the special value default to send_in_pings:

    send_in_pings:
      - search
      - default

The glean.restarted event

For custom pings that contain event metrics, the glean.restarted event is injected by Glean on every application restart that may happen during the ping's measurement window.

Note: All leading and trailing glean.restarted events are omitted from each ping.

Event timestamps throughout application restarts

Event timestamps are always calculated relative to the first event in a ping. The first event will always have timestamp 0 and subsequent events will have timestamps corresponding to the elapsed amount of milliseconds since that first event.

That is also the case for events recorded throughout restarts.

Example

In the example payload below, two events were recorded on the first application run. The first event has timestamp 0 and the second event happens one second after the first one, so it has timestamp 1000.

The application is restarted one hour after the first event and a glean.restarted event is recorded, timestamp 3600000. Finally, an event is recorded during the second application run two seconds after restart, timestamp 3602000.

{
  ...
  "events": [
    {
      "timestamp": 0,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 1000,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 3600000,
      "category": "glean",
      "name": "restarted"
    },
    {
      "timestamp": 3602000,
      "category": "examples",
      "name": "event_example"
    }
  ]
}

Caveat: Handling decreasing time offsets

For events recorded in a single application run, Glean relies on a monotonically increasing timer to calculate event timestamps, while for calculating the time elapsed between application runs Glean has to rely on the computer clock, which is not necessarily monotonically increasing.

In the case that timestamps in between application runs are not monotonically increasing, Glean will take the value of the previous timestamp and add one millisecond, thus guaranteeing that timestamps are always increasing.
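The adjustment can be modeled as follows. `merge_runs` is a hypothetical helper written for illustration: it takes the first run's event timestamps (relative to the ping's first event) plus, for each later run, the wall-clock restart offset and that run's events relative to the restart.

```python
# Sketch of the timestamp rule across restarts (not Glean code):
# timestamps must keep increasing; if the computer clock makes a
# restart offset go backwards, "previous timestamp + 1ms" is used.
def merge_runs(first_run, later_runs):
    """first_run: event timestamps (ms) relative to the ping's first
    event. later_runs: (restart_offset, events_relative_to_restart)
    pairs; restart_offset comes from the clock and may decrease."""
    timestamps = list(first_run)
    for offset, events in later_runs:
        if offset <= timestamps[-1]:
            offset = timestamps[-1] + 1  # clock went backwards
        timestamps.append(offset)  # the injected glean.restarted event
        timestamps.extend(offset + ts for ts in events)
    return timestamps

# Normal restart one hour later: offsets keep increasing.
assert merge_runs([0, 1000], [(3_600_000, [2000])]) == [0, 1000, 3_600_000, 3_602_000]

# Clock moved backwards across the restart: the restart lands at
# 1001 (previous timestamp + 1ms) and later events shift with it.
assert merge_runs([0, 1000], [(900, [2000])]) == [0, 1000, 1001, 3001]
```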

Checking for decreasing time offsets between restarts

When this edge case is hit, Glean records an InvalidValue error for the glean.restarted metric. This metric may be consulted at analysis time. It is sent in the same ping where the error happened.

In the below example payload, the first and second application runs go exactly like in the example above.

The only difference is that when the restart happens, the offset between the absolute time of the first event and the absolute time of the restart is not enough to keep the timestamps increasing. That may happen for many reasons, such as a change in timezones or simply a manual change in the clock by the user.

In this case, Glean will ignore the incorrect timestamp and add one millisecond to the last timestamp of the previous run, in order to keep the monotonically increasing nature of the timestamps.

{
  ...
  "events": [
    {
      "timestamp": 0,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 1000,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 1001,
      "category": "glean",
      "name": "restarted"
    },
    {
      "timestamp": 3001,
      "category": "examples",
      "name": "event_example"
    }
  ]
}

Testing custom pings

Applications defining custom pings can use the ping testing API to test these pings in unit tests.

General testing strategy

The schedule of custom pings depends on the specific application implementation, since it is up to the SDK user to define the ping semantics. This makes the testing strategy a bit more complex, but it usually boils down to:

  1. Triggering the code path that accumulates/records the data.
  2. Defining a callback validation function using the ping testing API.
  3. Triggering the code path that submits the custom ping, or submitting the ping directly using the submit API.
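The three steps can be illustrated with a small stand-in ping object. The real SDKs expose an equivalent callback-based ping testing API (for example, `test_before_next_submit` in the Python SDK); `FakePing` below is a mock written purely for illustration.

```python
# Stand-in sketch of the three-step testing strategy; FakePing is a
# mock, not a Glean API.
class FakePing:
    def __init__(self):
        self._validator = None
        self.metrics = {}

    def test_before_next_submit(self, callback):
        self._validator = callback  # step 2: register the validator

    def submit(self):
        if self._validator:
            self._validator(self.metrics)  # runs just before submission

search_ping = FakePing()

# Step 1: trigger the code path that records the data.
search_ping.metrics["search.default.name"] = "DuckDuckGo"

# Step 2: define the validation callback.
validated = []
search_ping.test_before_next_submit(
    lambda metrics: validated.append(metrics["search.default.name"])
)

# Step 3: trigger submission; the callback sees the recorded data.
search_ping.submit()
assert validated == ["DuckDuckGo"]
```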

Pings sent by Glean

If data collection is enabled, the Glean SDKs provide a set of built-in pings that are assembled out of the box without any developer intervention. The following is a list of these built-in pings:

  • baseline ping: A small ping sent every time the application goes to foreground and background. Going to foreground also includes when the application starts.
  • deletion-request ping: Sent when the user disables telemetry in order to request a deletion of their data.
  • events ping: The default ping for events. Sent every time the application goes to background or a certain number of events is reached. It is not sent when there are no events recorded, even if there are other metrics with values.
  • metrics ping: The default ping for metrics. Sent approximately daily.

Applications can also define and send their own custom pings when the schedule of these pings is not suitable.

There is also a high-level overview of how the metrics and baseline pings relate and the timings they record.

Available pings per platform

SDK             | baseline | deletion-request | events | metrics
----------------|----------|------------------|--------|--------
Kotlin          | ✓        | ✓                | ✓      | ✓
Swift           | ✓        | ✓                | ✓      | ✓
Python          | ✓ ¹      | ✓                | ✓ ²    | ✓
Rust            | ✓        | ✓                | ✓      | ✓
JavaScript      | ✓        | ✓                | ✓      | ✓
Firefox Desktop | ✓        | ✓                | ✓      | ✓

¹ Not sent automatically. Use the handle_client_active and handle_client_inactive API.

² Sent on startup when pending events are stored. Additionally sent when handle_client_inactive is called.

Defining foreground and background state

These docs refer to application 'foreground' and 'background' state in several places.

Foreground

For Android, this specifically means the activity becomes visible to the user, it has entered the Started state, and the system invokes the onStart() callback.

Background

This specifically means when the activity is no longer visible to the user, it has entered the Stopped state, and the system invokes the onStop() callback.

This may occur if the user uses the Overview button to change to another app, presses the Back button and navigates to a previous application or the home screen, or presses the Home button to return to the home screen. This can also occur if the user navigates away from the application through a notification or other means.

The system may also call onStop() when the activity has finished running, and is about to be terminated.

Foreground

For iOS, the Glean Swift SDK attaches to the willEnterForegroundNotification. This notification is posted by the OS shortly before an app leaves the background state on its way to becoming the active app.

Background

For iOS, this specifically means when the app is no longer visible to the user, or when the UIApplicationDelegate receives the applicationDidEnterBackground event.

This may occur if the user opens the task switcher to change to another app, or if the user presses the Home button to show the home screen. This can also occur if the user navigates away from the app through a notification or other means.

Note: Glean does not currently support Scene based lifecycle events that were introduced in iOS 13.

The baseline ping

Description

This ping is intended to provide metrics that are managed by the Glean SDKs themselves, and not explicitly set by the application or included in the application's metrics.yaml file.

Platform availability

SDK           | Kotlin | Swift | Python | Rust | JavaScript | Firefox Desktop
--------------|--------|-------|--------|------|------------|----------------
baseline ping | ✓      | ✓     | ✓      | ✓    | ✓          | ✓

Scheduling

The baseline ping is automatically submitted with a reason: active when the application becomes active (on mobile, this means coming to the foreground). These baseline pings do not contain duration.

The baseline ping is automatically submitted with a reason: inactive when the application becomes inactive (on mobile, this means going to the background). If no baseline ping was triggered when becoming inactive (e.g. because the process was abruptly killed), a baseline ping with reason dirty_startup is submitted on the next application startup. This only happens from the second application start onward.

See also the ping schedules and timing overview.

Contents

The baseline ping also includes the common ping sections found in all pings.

It also includes a number of metrics defined in the Glean SDKs themselves.

Querying ping contents

A quick note about querying ping contents (i.e. for sql.telemetry.mozilla.org): each metric in the baseline ping is organized by its metric type, and uses a namespace of glean.baseline. For instance, in order to select duration you would use metrics.timespan['glean.baseline.duration']. If you were trying to select a string-based metric such as os, you would use metrics.string['glean.baseline.os'].

Example baseline ping

{
  "ping_info": {
    "experiments": {
      "third_party_library": {
        "branch": "enabled"
      }
    },
    "seq": 0,
    "start_time": "2019-03-29T09:50-04:00",
    "end_time": "2019-03-29T09:53-04:00",
    "reason": "foreground"
  },
  "client_info": {
    "telemetry_sdk_build": "0.49.0",
    "first_run_date": "2019-03-29-04:00",
    "os": "Android",
    "android_sdk_version": "27",
    "os_version": "8.1.0",
    "device_manufacturer": "Google",
    "device_model": "Android SDK built for x86",
    "architecture": "x86",
    "app_build": "1",
    "app_display_version": "1.0",
    "client_id": "35dab852-74db-43f4-8aa0-88884211e545"
  },
  "metrics": {
    "timespan": {
      "glean.baseline.duration": {
        "value": 52,
        "time_unit": "second"
      }
    }
  }
}
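To make the namespacing concrete, here is a sketch of walking the example payload above after parsing it as JSON (plain Python, using a trimmed copy of the payload):

```python
import json

# A trimmed copy of the example baseline ping above.
payload = """
{
  "metrics": {
    "timespan": {
      "glean.baseline.duration": {"value": 52, "time_unit": "second"}
    }
  }
}
"""

ping = json.loads(payload)

# Metrics are grouped by metric type first, then addressed by their
# fully-qualified name in the "glean.baseline" namespace -- mirroring
# the metrics.timespan['glean.baseline.duration'] query shown above.
duration = ping["metrics"]["timespan"]["glean.baseline.duration"]
print(duration["value"], duration["time_unit"])  # 52 second
```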

The deletion-request ping

Description

This ping is submitted when a user opts out of sending technical and interaction data.

This ping contains the client id.

This ping is intended to communicate to the Data Pipeline that the user wishes to have their reported Telemetry data deleted. As such, it attempts to send itself at the moment the user opts out of data collection, and is retried on subsequent startups until it is successfully sent.

Adding secondary ids

It is possible to send secondary ids in the deletion-request ping. For instance, if the application is migrating from legacy telemetry to Glean, the legacy client ids can be added to the deletion-request ping by creating a metrics.yaml entry for the id to be added, with a send_in_pings value of deletion_request.

An example metrics.yaml entry might look like this:

legacy_client_id:
  type: uuid
  description:
    A UUID uniquely identifying the legacy client.
  send_in_pings:
    - deletion_request
  ...

Platform availability

SDK                     Kotlin   Swift   Python   Rust   JavaScript   Firefox Desktop
deletion-request ping

Scheduling

The deletion-request ping is automatically submitted when upload is disabled in Glean. If upload fails, it is retried after Glean is initialized.

Contents

The deletion-request ping does not contain additional metrics aside from any secondary ids that have been added.

Example deletion-request ping

{
  "ping_info": {
    "seq": 0,
    "start_time": "2019-12-06T09:50-04:00",
    "end_time": "2019-12-06T09:53-04:00"
  },
  "client_info": {
    "telemetry_sdk_build": "22.0.0",
    "first_run_date": "2019-03-29-04:00",
    "os": "Android",
    "android_sdk_version": "28",
    "os_version": "9",
    "device_manufacturer": "Google",
    "device_model": "Android SDK built for x86",
    "architecture": "x86",
    "app_build": "1",
    "app_display_version": "1.0",
    "client_id": "35dab852-74db-43f4-8aa0-88884211e545"
  },
  "metrics": {
    "uuid": {
      "legacy_client_id": "5faffa6d-6147-4d22-a93e-c1dbd6e06171"
    }
  }
}

The events ping

Description

The events ping's purpose is to transport event metric information.

If the application crashes, an events ping is generated next time the application starts with events that were not sent before the crash.

Platform availability

SDK           Kotlin   Swift   Python   Rust   JavaScript   Firefox Desktop
events ping

Scheduling

The events ping is automatically submitted under the following circumstances:

  1. If there are any recorded events to send when the application becomes inactive (on mobile, this means going to the background).

  2. When the queue of events exceeds Glean.configuration.maxEvents (default 1 for Glean.js, 500 for all other SDKs). This configuration option can be changed at initialization.

  3. If there are any unsent events found on disk when starting the application. (This results in this ping never containing the glean.restarted event.)

Python and JavaScript caveats

Since the Glean Python and JavaScript SDKs don't have a generic concept of "inactivity", case (1) above cannot be handled automatically.

On Python, users can call the handle_client_inactive API to let Glean know the app is inactive and that will trigger submission of the events ping.

On JavaScript there is no such API, so only cases (2) and (3) apply.
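The queue limit described in case (2) can be modeled with a toy queue. This is an illustrative sketch only, not the Glean SDK's actual implementation; the names max_events and submit are made up:

```python
class EventQueue:
    """Toy model of case (2): flush once the queue reaches max_events."""

    def __init__(self, max_events, submit):
        self.max_events = max_events
        self.submit = submit          # called with the batched events
        self.events = []

    def record(self, name):
        self.events.append(name)
        if len(self.events) >= self.max_events:
            self.submit(self.events)  # hand the batch over for submission
            self.events = []

pings = []
queue = EventQueue(max_events=3, submit=pings.append)
for i in range(7):
    queue.record(f"event_{i}")

print(len(pings))      # 2 full batches were submitted
print(queue.events)    # ['event_6'] is still pending
```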

Contents

At the top-level, this ping contains the following keys:

  • client_info: The information common to all pings.

  • ping_info: The information common to all pings.

  • events: An array of all of the events that have occurred since the last time the events ping was sent.

Each entry in the events array is an object with the following properties:

  • "timestamp": The time of the event in milliseconds, relative to the first event in the ping.

  • "category": The category of the event, as defined by its location in the metrics.yaml file.

  • "name": The name of the event, as defined in the metrics.yaml file.

  • "extra" (optional): A mapping of strings to strings providing additional data about the event. Keys are restricted to 40 UTF-8 bytes, while values in the extra object are limited to a maximum length of 500 UTF-8 bytes.
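The relative timestamps can be illustrated with a short sketch (plain Python; the absolute times are invented for the example):

```python
# Absolute monotonic times (in ms) at which three events were recorded.
absolute_ms = [1_652_000, 1_652_400, 1_653_000]

# In the ping, each timestamp is expressed relative to the first event,
# so the first entry is always 0.
first = absolute_ms[0]
entries = [
    {"timestamp": t - first, "category": "examples", "name": "event_example"}
    for t in absolute_ms
]

print([e["timestamp"] for e in entries])  # [0, 400, 1000]
```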

Example event JSON

{
  "ping_info": {
    "experiments": {
      "third_party_library": {
        "branch": "enabled"
      }
    },
    "seq": 0,
    "start_time": "2019-03-29T09:50-04:00",
    "end_time": "2019-03-29T10:02-04:00"
  },
  "client_info": {
    "telemetry_sdk_build": "0.49.0",
    "first_run_date": "2019-03-29-04:00",
    "os": "Android",
    "android_sdk_version": "27",
    "os_version": "8.1.0",
    "device_manufacturer": "Google",
    "device_model": "Android SDK built for x86",
    "architecture": "x86",
    "app_build": "1",
    "app_display_version": "1.0",
    "client_id": "35dab852-74db-43f4-8aa0-88884211e545"
  },
  "events": [
    {
      "timestamp": 0,
      "category": "examples",
      "name": "event_example",
      "extra": {
        "metadata1": "extra",
        "metadata2": "more_extra"
      }
    },
    {
      "timestamp": 1000,
      "category": "examples",
      "name": "event_example"
    }
  ]
}

The metrics ping

Description

The metrics ping is intended for all of the metrics that are explicitly set by the application or are included in the application's metrics.yaml file (except events). The reported data is tied to the ping's measurement window, which is the time between the collection of two metrics pings.

Ideally, this window is expected to be about 24 hours, given that collection is scheduled daily at 04:00. However, the metrics ping is only submitted while the application is actually running, so in practice it may not meet the 04:00 target very frequently. Data in the ping_info section of the ping can be used to infer the length of this window and the reason that triggered the ping to be submitted.

If the application crashes, unsent recorded metrics are sent along with the next metrics ping.

Additionally, it is undesirable to mix metric recording from different versions of the application. Therefore, if a version upgrade is detected, the metrics ping is collected immediately before further metrics from the new version are recorded.

Platform availability

SDK            Kotlin   Swift   Python   Rust   JavaScript   Firefox Desktop
metrics ping

Scheduling

The desired behavior is to collect the ping at the first available opportunity after 04:00 local time on a new calendar day, but given constraints of the platform, it can only be submitted while the application is running. This breaks down into four scenarios:

  1. the application was just installed;
  2. the application was just upgraded (the version of the app is different from the last time the app was run);
  3. the application was just started (after a crash or a long inactivity period);
  4. the application was running at 04:00.

In the first case, since the application was just installed, if the due time for the current calendar day has passed, a metrics ping is immediately generated and scheduled for sending (reason code overdue). Otherwise, if the due time for the current calendar day has not passed, a ping collection is scheduled for that time (reason code today).

In the second case, if a version change is detected at startup, the metrics ping is immediately submitted so that metrics from one version are not aggregated with metrics from another version (reason code upgrade).

In the third case, if the metrics ping was not already collected on the current calendar day, and it is before 04:00, a collection is scheduled for 04:00 on the current calendar day (reason code today). If it is after 04:00, a new collection is scheduled immediately (reason code overdue). Lastly, if a ping was already collected on the current calendar day, the next one is scheduled for collecting at 04:00 on the next calendar day (reason code tomorrow).

In the fourth and last case, the application is running during a scheduled ping collection time. The next ping is scheduled for 04:00 the next calendar day (reason code reschedule).
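The startup portion of this scheduling logic (cases 1 through 3) can be sketched as a small decision function. This is a simplification for illustration, not the SDK's actual scheduler; it ignores persistence and timezone handling:

```python
import datetime

DUE_HOUR = 4  # metrics pings are due at 04:00 local time

def startup_reason(now, last_sent_date, version_changed):
    """Pick a reason code on startup.

    `last_sent_date` is the calendar date a metrics ping was last
    collected on, or None for a fresh install.
    """
    if version_changed:
        return "upgrade"    # submit immediately, don't mix versions
    due_today = now.replace(hour=DUE_HOUR, minute=0, second=0, microsecond=0)
    if last_sent_date == now.date():
        return "tomorrow"   # already collected today, schedule next day
    if now >= due_today:
        return "overdue"    # past 04:00 and nothing collected yet today
    return "today"          # before 04:00, schedule for 04:00 today

now = datetime.datetime(2019, 2, 8, 17, 0)
print(startup_reason(now, None, False))                                  # overdue
print(startup_reason(now, now.date(), False))                            # tomorrow
print(startup_reason(datetime.datetime(2019, 2, 8, 3, 0), None, False))  # today
print(startup_reason(now, datetime.date(2019, 2, 7), True))              # upgrade
```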

More scheduling examples are included below.

See also the ping schedules and timing overview.

Contents

The metrics ping contains all of the metrics defined in metrics.yaml (except events) that don't specify a ping, or that specify default, in their send_in_pings property.

Additionally, error metrics in the glean.error category are included in the metrics ping.

The metrics ping also includes the common ping_info and client_info sections, as well as a number of metrics defined in the Glean SDKs themselves.

Querying ping contents

Information about querying ping contents is available in Accessing Glean data in the Firefox data docs.

Scheduling Examples

Crossing due time with the application closed

  1. The application is opened on Feb 7 at 15:00, closed at 15:05.

    • Glean records one metric A (say startup time in ms) during this measurement window MW1.
  2. The application is opened again on Feb 8 at 17:00.

  • Glean notes that we passed local 04:00 since MW1.

  • Glean closes MW1, with:

    • start_time=Feb7/15:00;
    • end_time=Feb8/17:00.
  • Glean records metric A again, into MW2, which has a start_time of Feb8/17:00.

Crossing due time and changing timezones

  1. The application is opened on Feb 7 at 15:00 in timezone UTC, closed at 15:05.

    • Glean records one metric A (say startup time in ms) during this measurement window MW1.
  2. The application is opened again on Feb 8 at 17:00 in timezone UTC+1.

    • Glean notes that we passed local 04:00 UTC+1 since MW1.

    • Glean closes MW1, with:

      • start_time=Feb7/15:00/UTC;
      • end_time=Feb8/17:00/UTC+1.
    • Glean records metric A again, into MW2.

The application doesn’t run in a week

  1. The application is opened on Feb 7 at 15:00 in timezone UTC, closed at 15:05.

    • Glean records one metric A (say startup time in ms) during this measurement window MW1.
  2. The application is opened again on Feb 16 at 17:00 in timezone UTC.

    • Glean notes that we passed local 04:00 UTC since MW1.

    • Glean closes MW1, with:

      • start_time=Feb7/15:00/UTC;
      • end_time=Feb16/17:00/UTC.
    • Glean records metric A again, into MW2.

The application doesn’t run for a week, and when it’s finally re-opened the timezone has changed

  1. The application is opened on Feb 7 at 15:00 in timezone UTC, closed at 15:05.

    • Glean records one metric A (say startup time in ms) during this measurement window MW1.
  2. The application is opened again on Feb 16 at 17:00 in timezone UTC+1.

    • Glean notes that we passed local 04:00 UTC+1 since MW1.

    • Glean closes MW1, with:

      • start_time=Feb7/15:00/UTC
      • end_time=Feb16/17:00/UTC+1.
    • Glean records metric A again, into MW2.

The user changes timezone in an extreme enough fashion that they cross 04:00 twice on the same date

  1. The application is opened on Feb 7 at 15:00 in timezone UTC+11, closed at 15:05.

    • Glean records one metric A (say startup time in ms) during this measurement window MW1.
  2. The application is opened again on Feb 8 at 04:30 in timezone UTC+11.

    • Glean notes that we passed local 04:00 UTC+11.

    • Glean closes MW1, with:

      • start_time=Feb7/15:00/UTC+11;
      • end_time=Feb8/04:30/UTC+11.
    • Glean records metric A again, into MW2.

  3. The user changes to timezone UTC-10 and opens the application on Feb 7 at 22:00 in timezone UTC-10.

    • Glean records metric A again, into MW2 (not MW1, which was already sent).
  4. The user opens the application on Feb 8 at 05:00 in timezone UTC-10.

    • Glean notes that we have not yet passed local 04:00 on Feb 9.
    • Measurement window MW2 remains the current measurement window.
  5. The user opens the application on Feb 9 at 07:00 in timezone UTC-10.

    • Glean notes that we have passed local 04:00 on Feb 9.

    • Glean closes MW2 with:

      • start_time=Feb8/04:30/UTC+11;
      • end_time=Feb9/07:00/UTC-10.
    • Glean records metric A again, into MW3.

Ping schedules and timings overview

Full reference details about the metrics and baseline ping schedules are documented on their respective reference pages.

The following diagram shows a typical timeline of a mobile application, when pings are sent and what timing-related information is included.

ping timeline diagram

There are two distinct runs of the application, where the OS shut down the application at the end of Run 1, and the user started it up again at the beginning of Run 2.

There are three distinct foreground sessions, where the application was visible on the screen and the user was able to interact with it.

The rectangles for the baseline and metrics pings represent the measurement windows of those pings, which always start exactly at the end of the preceding ping. The ping_info.start_time and ping_info.end_time metrics included in these pings correspond to the beginning and end of their measurement windows.

The baseline.duration metric (included only in baseline pings) corresponds to the amount of time the application spent in the foreground which, since measurement windows always extend to the next ping, is not always the same as the baseline ping's measurement window.

The submission_timestamp is the time the ping was received at the telemetry endpoint, added by the ingestion pipeline. It is not exactly the same as ping_info.end_time, since there may be various networking and system latencies both on the client and in the ingestion pipeline (represented by the dotted horizontal line, not to scale). Also of note is that start_time/end_time are measured using the client's real-time clock in its local timezone, which is not a fully reliable source of time.
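Because start_time and end_time carry the client's local UTC offset, they can still be placed on a common timeline. A small sketch using the timestamps from the example baseline ping above:

```python
from datetime import datetime

# start_time/end_time as they appear in ping_info: local wall-clock
# time plus the client's UTC offset at that moment.
start = datetime.fromisoformat("2019-03-29T09:50-04:00")
end = datetime.fromisoformat("2019-03-29T09:53-04:00")

# Offset-aware datetimes subtract correctly, even if the two
# timestamps were recorded in different timezones.
window = end - start
print(window.total_seconds() / 60)  # 3.0 (minutes)
```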

The "Baseline 4" ping illustrates an important corner case. When "Session 2" ended, the OS also shut down the entire process, and the Glean SDK did not have an opportunity to send a baseline ping immediately. In this case, it is sent at the next available opportunity when the application starts up again in "Run 2". This baseline ping is annotated with the reason code dirty_startup.

The "Metrics 2" ping likewise illustrates another important corner case. "Metrics 1" was able to be sent at the target time of 04:00 (local device time) because the application was currently running. However, the next time 04:00 came around, the application was not active, so the Glean SDK was unable to send a metrics ping. It is sent at the next available opportunity, when the application starts up again in "Run 2". This metrics ping is annotated with the reason code overdue.

NOTE: Ping scheduling and other application-lifecycle-dependent activities are not set up when Glean is used in a non-main process. See initializing.

Data Control Plane (a.k.a. Server Knobs)

Glean provides a Data Control Plane through which pings can be enabled or disabled through a Nimbus rollout or experiment.

Products can use this capability to control "data-traffic", similar to how a network control plane controls "network-traffic".

This provides the ability to do the following:

  • Allow runtime changes to data collection without needing to land code and ride release trains.
  • Eliminate the need for manual creation and maintenance of feature flags specific to data collection.
  • Sampling of measurements from a subset of the population so that we do not collect or ingest more data than is necessary from high traffic areas of an application instrumented with Glean metrics.
  • Operational safety through being able to react to high-volume or unwanted data.
  • Visibility into sampling and sampling rates for remotely configured metrics.

For information on controlling metrics with Server Knobs, see the metrics documentation for Server Knobs - Metrics.

Contents

Product Integration

Glean provides a general Nimbus feature named glean that can be used for configuration of pings. Simply select the glean feature along with your Nimbus feature from the Experimenter UI when configuring the experiment or rollout (see https://experimenter.info for more information on multi-feature experiments).

The glean Nimbus feature requires the gleanMetricConfiguration variable, which provides the required metric configuration. The format of the configuration is defined in the Experimenter Configuration section. If a ping is not included, it defaults to the value found in pings.yaml. Note that this can also serve as an override for Glean built-in pings disabled using the Configuration property enable_internal_pings=false during initialization.

Experimenter Configuration

The structure of this configuration is a key-value collection in which the keys are Glean ping names and the values are booleans indicating whether the ping is enabled (true) or disabled (false).

In the example below, gleanMetricConfiguration is the name of the variable defined in the Nimbus feature.

This configuration would be what is entered into the branch configuration setup in Experimenter when defining an experiment or rollout.

Example Configuration:

{
  "gleanMetricConfiguration": {
    "pings_enabled": {
      "baseline": false,
      "events": false,
      "metrics": false
    }
  }
}
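Resolving such a configuration on the client amounts to a lookup with a fallback. The sketch below is illustrative only; default_enabled stands in for the value from pings.yaml and is not the real SDK internal API:

```python
# The remote configuration, as delivered through the Nimbus feature.
config = {
    "gleanMetricConfiguration": {
        "pings_enabled": {
            "baseline": False,
            "events": False,
            "metrics": False,
        }
    }
}

def ping_enabled(ping_name, remote_config, default_enabled=True):
    """Remote value wins; pings absent from the config keep their default."""
    pings = remote_config.get("gleanMetricConfiguration", {}).get("pings_enabled", {})
    return pings.get(ping_name, default_enabled)

print(ping_enabled("baseline", config))  # False (disabled remotely)
print(ping_enabled("crash", config))     # True (not listed, keeps its default)
```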

Debugging products using the Glean SDK

Glean provides a few debugging features to assist with debugging a product using Glean.

Features

Log Pings

Print the ping payload upon sending a ping.

Debug View Tag

Tags all outgoing pings as debug pings to make them available for real-time validation, on the Glean Debug View.

Glean Debug View

The Glean Debug View enables you to easily see in real-time what data your application is sending.

This data is what actually arrives in our data pipeline, shown in a web interface that is automatically updated when new data arrives. Any data sent from a Glean-instrumented application usually shows up within 10 seconds, updating the pages automatically. Pings are retained for 3 weeks.

Troubleshooting

If nothing is showing up on the dashboard after you set a debugViewTag, and you see "Glean must be enabled before sending pings." in the logs, Glean is disabled. Check with the application author on how to re-enable it.

Source Tags

Tags outgoing pings with a maximum of 5 comma-separated tags.

Send Ping

Sends a ping on demand.

Debugging methods

Each Glean SDK may expose one or more of the following methods to interact with and enable these debugging functionalities.

  1. Enable debugging features through APIs exposed through the Glean singleton;
  2. Enable debugging features through environment variables set at runtime;
  3. Enable debugging features through platform specific tooling.

For methods 1. and 2., refer to the API reference section "Debugging" for detailed information on how to use them.

For method 3. please refer to the platform specific pages on how to debug products using Glean.

Platform Specific Information

  1. Debugging Android applications using the Glean SDK
  2. Debugging iOS applications using the Glean SDK
  3. Debugging Python applications using the Glean SDK
  4. Debugging JavaScript applications using Glean.js

Available debugging methods per platform

SDK                   Glean API   Environment Variables   Platform Specific Tooling
Kotlin [1]
Swift [2]
Python
Rust
JavaScript
Firefox Desktop [3]

[1]: The Glean Kotlin SDK exposes the GleanDebugActivity for interacting with debug features. Although it is technically possible to also use environment variables in Android, the Glean team is not aware of a proper way to set environment variables on Android devices or emulators.

[2]: The Glean Swift SDK exposes a custom URL format for interacting with debug features.

[3]: In Firefox Desktop, developers may use the interface exposed through about:glean to log, tag or send pings.

Debugging Android applications using the Glean SDK

The Glean Kotlin SDK exports the GleanDebugActivity that can be used to toggle debugging features on or off. Users can invoke this special activity at run time using the following adb command:

adb shell am start -n [applicationId]/mozilla.telemetry.glean.debug.GleanDebugActivity [extra keys]

In the above:

  • [applicationId] is the product's application id as defined in the manifest file and/or build script. For the Glean sample application, this is org.mozilla.samples.gleancore for a release build and org.mozilla.samples.gleancore.debug for a debug build.

  • [extra keys] is a list of extra keys to be passed to the debug activity. See the documentation for the command line switches used to pass the extra keys. These are the currently supported keys:

key           type                  description
logPings      boolean (--ez)        If set to true, pings are dumped to logcat; defaults to false
debugViewTag  string (--es)         Tags all outgoing pings as debug pings to make them available for real-time validation on the Glean Debug View. The value must match the pattern [a-zA-Z0-9-]{1,20}. Important: in older versions of the Glean SDK, this was named tagPings
sourceTags    string array (--esa)  Tags outgoing pings with a maximum of 5 comma-separated tags. The tags must match the pattern [a-zA-Z0-9-]{1,20}. The automation tag is meant for tagging pings generated on automation: such pings will be specially handled on the pipeline (i.e. discarded from non-live views). Tags starting with glean are reserved for future use. Subsequent calls overwrite any previously stored tags
sendPing      string (--es)         Sends the ping with the given name immediately
startNext     string (--es)         The name of an exported Android Activity, as defined in the product manifest file, to start right after the GleanDebugActivity completes. All the options provided are propagated to this next activity as well. When omitted, the default launcher activity for the product is started instead.
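The tag pattern required by debugViewTag and sourceTags can be checked locally before launching the activity (plain Python sketch):

```python
import re

# Pattern quoted in the table above: 1-20 ASCII letters, digits, or dashes.
TAG_PATTERN = re.compile(r"[a-zA-Z0-9-]{1,20}")

def is_valid_tag(tag):
    """True if `tag` matches the pattern Glean expects for debug/source tags."""
    return TAG_PATTERN.fullmatch(tag) is not None

print(is_valid_tag("test-metrics-ping"))  # True
print(is_valid_tag("bad tag!"))           # False (space and '!')
print(is_valid_tag("x" * 21))             # False (too long)
```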

All the options provided to start the activity are passed over to the main activity for the application to process. This is useful if SDK users want to debug telemetry while providing additional options to the product to enable specific behaviors.

Note: Due to limitations on Android logcat message size, pings larger than 4KB are broken into multiple log messages when using logPings.

For example, to direct a release build of the Glean sample application to (1) dump pings to logcat, (2) tag the ping with the test-metrics-ping tag, and (3) send the "metrics" ping immediately, the following command can be used:

adb shell am start -n org.mozilla.samples.gleancore/mozilla.telemetry.glean.debug.GleanDebugActivity \
  --ez logPings true \
  --es sendPing metrics \
  --es debugViewTag test-metrics-ping

The logPings command doesn't trigger ping submission and you won't see any output until a ping has been sent. You can use the sendPing command to force a ping to be sent, but it could be more desirable to trigger the pings submission on their normal schedule. For instance, the baseline and events pings can be triggered by moving the app out of the foreground and the metrics ping can be triggered normally if it is overdue for the current calendar day.

Note: The device or emulator must be connected to the internet for this to work. Otherwise the job that sends the pings won't be triggered.

If no metrics have been collected, no pings will be sent unless send_if_empty is set on your ping. See the ping documentation for more information on ping scheduling to learn when pings are sent.

Options that are set using the adb flags are not immediately reset and will persist until the application is closed or manually reset.

Glean Kotlin SDK Log messages

When running a Glean SDK-powered app in the Android emulator or on a device connected to your computer via cable, there are several ways to read the log output.

Android Studio

Android Studio can show the logs of a connected emulator or device. To display the log messages for an app:

  1. Run an app on your device.
  2. Click View > Tool Windows > Logcat (or click Logcat in the tool window bar).

The Logcat window shows all log messages and allows filtering them by application ID. Select the application ID of the product you're debugging. You can also filter for Glean messages only.

More information can be found in the View Logs with Logcat help article.

Command line

On the command line you can show all of the log output using:

adb logcat

This is the unfiltered output of all log messages. You can match for glean using grep:

adb logcat | grep -i glean

A simple way to filter for only the application that is being debugged is by using pidcat, a wrapper around adb, which adds colors and proper filtering by application ID and log level. Run it like this to filter for an application:

pidcat [applicationId]

In the above [applicationId] is the product's application id as defined in the manifest file and/or build script. For the Glean sample application, this is org.mozilla.samples.gleancore for a release build and org.mozilla.samples.gleancore.debug for a debug build.

Debugging iOS applications using the Glean SDK

Enabling debugging features in iOS through environment variables

Debugging features in iOS can be enabled using environment variables. For more information on the available features accessible through this method and how to enable them, see Debugging API reference.

These environment variables must be set on the device that is running the application.

Enabling debugging features in iOS through a custom URL scheme

For debugging and validation purposes on iOS, the Glean Swift SDK makes use of a custom URL scheme which is implemented within the application. The Glean Swift SDK provides some convenience functions to facilitate this, but it's up to the consuming application to enable this functionality. Applications that enable this feature will be able to launch the application from a URL with the Glean debug commands embedded in the URL itself.

Available commands and query format

All four Glean debugging features are available through the custom URL scheme tool.

  • logPings: This is either true or false and will cause pings that are submitted to also be echoed to the device's log.
  • debugViewTag: This command will tag outgoing pings with the provided value, in order to identify them in the Glean Debug View.
  • sourceTags: This command tags outgoing pings with a maximum of 5 comma-separated tags.
  • sendPing: This command expects a string name of a ping to force immediate collection and submission of.

The structure of the custom URL uses the following format:

<protocol>://glean?<command 1>=<parameter 1>&<command 2>=<parameter 2> ...

Where:

  • <protocol> is the "URL Scheme" that has been added for your app (see Instrumenting the application below), such as glean-sample-app.
  • This is followed by :// and then glean which is required for the Glean Swift SDK to recognize the command is meant for it to process.
  • Following standard URL query format, the next character after glean is the ? indicating the beginning of the query.
  • This is followed by one or more queries in the form of <command>=<parameter>, where the command is one of the commands listed above, followed by an = and then the value or parameter to be used with the command.

There are a few things to consider when creating the custom URL:

  • Invalid commands will log an error and cause the entire URL to be ignored.
  • Not all commands are required to be encoded in the URL; you can mix and match the commands that you need.
  • Multiple instances of commands are not allowed in the same URL and, if present, will cause the entire URL to be ignored.
  • The logPings command doesn't trigger ping submission and you won't see any output until a ping has been submitted. You can use the sendPing command to force a ping to be sent, but it could be more desirable to trigger the pings submission on their normal schedule. For instance, the baseline and events pings can be triggered by moving the app out of the foreground and the metrics ping can be triggered normally if it is overdue for the current calendar day. See the ping documentation for more information on ping scheduling to learn when pings are sent.
  • Enabling debugging features through custom URLs overrides any debugging features set through environment variables.
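These parsing rules can be sketched with the standard library. This is a simplified model of the validation, not the actual Swift implementation:

```python
from urllib.parse import urlparse, parse_qsl

KNOWN_COMMANDS = {"logPings", "debugViewTag", "sourceTags", "sendPing"}

def parse_debug_url(url):
    """Return the command map, or None if the whole URL must be ignored."""
    parts = urlparse(url)
    if parts.netloc != "glean":
        return None  # not addressed to Glean at all
    commands = {}
    for key, value in parse_qsl(parts.query):
        if key not in KNOWN_COMMANDS:
            return None  # invalid command: ignore the entire URL
        if key in commands:
            return None  # duplicated command: ignore the entire URL
        commands[key] = value
    return commands

print(parse_debug_url("glean-sample-app://glean?logPings=true&sendPing=events"))
print(parse_debug_url("glean-sample-app://glean?sendPing=a&sendPing=b"))  # None
```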

Instrumenting the application for Glean Swift SDK debug functionality

In order to enable the debugging features in an iOS application, it is necessary to add some information to the application's Info.plist, and add a line and possibly an override for a function in the AppDelegate.swift.

Register custom URL scheme in Info.plist

Note: If your application already has a custom URL scheme implemented, there is no need to implement a second scheme, you can simply use that and skip to the next section about adding the convenience method. If the app doesn't have a custom URL scheme implemented, then you will need to perform the following instructions to register your app to receive custom URLs.

Find and open the application's Info.plist and right click any blank area and select Add Row to create a new key.

You will be prompted to select a key from a drop-down menu, scroll down to and select URL types. This creates an array item, which can be expanded by clicking the triangle disclosure icon.

Select Item 0, click on it and click the disclosure icon to expand it and show the URL identifier line. Double-click the value field and fill in your identifier, typically the same as the bundle ID.

Right-click on Item 0 and select Add Row from the context menu. In the dropdown menu, select URL Schemes to add the item.

Click on the disclosure icon of URL Schemes to expand the item, double-click the value field of Item 0 and key in the value for your application's custom scheme. For instance, the Glean sample app uses glean-sample-app, which allows for custom URLs to be crafted using that as a protocol, for example: glean-sample-app://glean?logPings=true

Add the Glean.handleCustomUrl() convenience function and necessary overrides

In order to handle the incoming debug commands, it is necessary to implement an override of the application(_:open:options:) function in the application's AppDelegate.swift file. Within that function, you can make use of the convenience function provided by Glean, handleCustomUrl(url: URL).

An example of a simple implementation of this would look like this:

func application(_: UIApplication,
                 open url: URL,
                 options _: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    // ...

    // This does nothing if the url isn't meant for Glean.
    Glean.shared.handleCustomUrl(url: url)

    // ...

    return true
}

If you need additional help setting up a custom URL scheme in your application, please refer to Apple's documentation.

Invoking the Glean-iOS debug commands

Now that the app has the debug functionality enabled, there are a few ways in which we can invoke the debug commands.

Using a web browser

Perhaps the simplest way to invoke the debug functionality is to open a web browser and type/paste the custom URL into the address bar. This is especially useful on an actual device because there isn't a good way to launch from the command line and process the URL for an actual device.

Using the glean-sample-app as an example: to activate ping logging, tag the pings to go to the Glean Debug View, and force the events ping to be sent, enter the following URL in a web browser on the iOS device:

glean-sample-app://glean?logPings=true&debugViewTag=My-ping-tag&sendPing=events

This should cause iOS to prompt you with a dialog asking if you want to open the URL in the Glean Sample App, and if you select "Okay" then it will launch (or resume if it's already running) the application with the indicated commands and parameters and immediately force the collection and submission of the events ping.

Note: This method does not work if the browser you are using to input the command is the same application you are attempting to pass the Glean debug commands to. So, you couldn't use Firefox for iOS to trigger commands within Firefox for iOS.

It is also possible to encode the URL into a 2D barcode or QR code and launch the app via the camera app. After scanning the encoded URL, the dialog prompting to launch the app should appear as if the URL were entered into the browser address bar.

Using the command line

This method is useful for testing via the Simulator, which typically requires a Mac with Xcode installed, including the Xcode command line tools. To issue the same command as above from the command line instead of typing the URL into a browser, run the following in a terminal on the Mac:

xcrun simctl openurl booted "glean-sample-app://glean?logPings=true&debugViewTag=My-ping-tag&sendPing=events"

This will launch the simulator and again prompt the user with a dialog box asking if you want to open the URL in the Glean Sample App (or whichever app you are instrumenting and testing).

Glean log messages

The Glean Swift SDK integrates with the unified logging system available on iOS. There are various ways to retrieve log information; see the official documentation.

If debugging in the simulator, the logging messages can be seen in the console window within Xcode.

When running a Glean-powered app in the iOS Simulator or on a device connected to your computer via cable you can use Console.app to view the system log. You can filter the logs with category:glean to only see logs from the Glean SDK.

You can also use the command line utility log to stream the log output. Run the following in a shell:

log stream --predicate 'category contains "glean"'

See Diagnosing Issues Using Crash Reports and Device Logs for more information about debugging deployed iOS apps.

Debugging Python applications using the Glean SDK

Debugging features in Python can be enabled using environment variables. For more information on the available features and how to enable them, see the Debugging API reference.

Sending pings

Unlike other platforms, Python doesn't expose convenience methods to send pings on demand.

If that is necessary, calling the submit function for a given ping, such as pings.custom_ping.submit(), will send it.

Logging pings

Glean offers two options for logging from Python:

  • Simple logging API: A simple API that only allows for setting the logging level, but includes all Glean log messages, including those from its networking subprocess. This is also the only mode in which GLEAN_LOG_PINGS can be used to display ping contents in the log.
  • Flexible logging API: Full use of the Python logging module, including its features for redirecting to files and custom handling of messages, but does not include messages from the networking subprocess about HTTP requests.

Simple logging API

You can set the logging level for Glean log messages by passing logging.DEBUG to Glean.initialize as follows:

import logging
from glean import Glean

Glean.initialize(..., log_level=logging.DEBUG)

If you want to see ping contents as well, set the GLEAN_LOG_PINGS environment variable to true.
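Because Glean reads this variable at initialization time, it can also be set from within Python itself, as long as this happens before Glean.initialize is called. A minimal sketch using only the standard library:

```python
import os

# GLEAN_LOG_PINGS must be set before Glean.initialize is called,
# otherwise it has no effect for the current run.
os.environ["GLEAN_LOG_PINGS"] = "true"
```

Setting the variable in the shell that launches the application works just as well; this in-process form is merely convenient for test harnesses.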

Flexible logging API

You can set the logging level for the Python logging to DEBUG as follows:

import logging

logging.basicConfig(level=logging.DEBUG)

All log messages from the Glean Python SDK are on the glean logger, so if you need to control it independently, you can set a level for just the Glean Python SDK (but note that the global Python logging level also needs to be set as above):

logging.getLogger("glean").setLevel(logging.DEBUG)
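Because the flexible API is just the standard logging module, its usual handler machinery applies to the glean logger too. As a sketch, the following redirects Glean's messages to their own file while leaving the rest of the application's logging untouched (the file name is arbitrary):

```python
import logging

# Attach a dedicated handler to the "glean" logger so SDK messages
# go to their own file, independent of the root logger's handlers.
glean_logger = logging.getLogger("glean")
glean_logger.setLevel(logging.DEBUG)

handler = logging.FileHandler("glean-debug.log")
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
glean_logger.addHandler(handler)

# Optionally stop Glean messages from also reaching the root logger's handlers.
glean_logger.propagate = False
```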

The flexible logging API cannot display networking-related log messages, nor ping contents, even when GLEAN_LOG_PINGS is set to true.

See the Python logging documentation for more information.

Debugging JavaScript applications using Glean.js

Debugging features in JavaScript can be enabled through APIs exposed on the Glean object. For more information on the available features and how to enable them, see the Debugging API reference.

Debugging in the browser

Websites running Glean allow you to debug at runtime using the window.Glean object in the browser console. To start debugging:

  1. Opening the browser console
  2. Calling one of the window.Glean APIs: window.Glean.setLogPings, window.Glean.setDebugViewTag, window.Glean.setSourceTags.

These debugging options will persist for the length of the current page session. Once the tab is closed, you will need to make those API calls again.

Sending pings

Unlike other platforms, JavaScript doesn't expose convenience methods to send pings on demand.

If that is necessary, calling the submit function for a given ping, such as pings.customPing.submit(), will send it.

Note that this method is only effective for custom pings. Glean internal pings are not exposed to users.

Logging pings

By calling Glean.setLogPings(true), all subsequent pings sent will be logged to the console.

To access the logs for web extensions:

Firefox

  1. Go to about:debugging#/runtime/this-firefox;
  2. Find the extension you want to see the logs for;
  3. Click on Inspect.

Chromium-based browsers

  1. Go to chrome://extensions;
  2. Find the extension you want to see the logs for;
  3. Click on background page.

How-tos

This chapter contains various how-tos and walkthroughs to help aid you in using Glean.

Server Knobs Walkthrough

A step-by-step guide in setting up and launching a Server Knobs Experiment

Server Knobs: A Complete Walkthrough

Purpose

This documentation serves as a step-by-step guide on how to create a Server Knobs configuration and make use of it in a Nimbus experiment or rollout. The intent is to explain everything from selecting the metrics or pings you wish to control all the way through launching the experiment or rollout and validating that the data is being collected.

Audience

This documentation is aimed at the general users of Nimbus experimentation who wish to enable or disable specific metrics and/or pings as part of their deployment. This documentation assumes the reader has no special knowledge of the inner workings of either Glean or Nimbus, but it does assume that the audience has already undergone the prerequisite Nimbus training program and has access to Experimenter to create experiment and rollout definitions.

Let’s Get Started!

The first step in running a Server Knobs experiment or rollout is creating the definition for it in Experimenter. For the purposes of this walkthrough, a rollout will be used but the instructions are interchangeable if you are instead launching an experiment.

Experiment Setup

Create a new experiment

From the Experimenter landing page, we select the “Create New” button to begin defining the new rollout.

Experimenter landing page

Initial experiment definition

The initial setup requires a name, hypothesis, and the selection of a target application.

New experiment dialog

Here we enter a human readable name for the rollout and a brief synopsis of what we expect to learn in the “Hypothesis” section. In the final field we select our target application, Firefox Desktop. When that is complete, we click the “Next” button to proceed with the rollout definition.

The next screen we are presented with is the “Summary” page of the experiment/rollout, where we can add additional metadata like a longer description, and link to any briefs or other documentation related to the rollout. We complete the required information and then click on “Save and Continue”.

Experiment summary page

Initial branch configuration

The next screen we are presented with is the branch configuration page. On this page we can select the glean feature, and check the box to indicate that this is a rollout.

Branch configuration page

At this point, we now need to create a bit of JSON configuration to put in the box seen here:

Branch payload text box

Building the Server Knobs configuration for metrics

This JSON configuration is the set of instructions for Glean that lets it know which metrics to enable or disable. In order to do this, we will visit the Glean Dictionary to help identify the metrics we wish to work with and get the identifiers from them needed for the configuration in Experimenter.

Upon arriving at the Glean Dictionary, we must first find the right application.

Glean Dictionary landing page

We are going to select “Firefox for Desktop” to match the application we previously selected in Experimenter. This brings us to a screen where we can search and filter for metrics which are defined in the application.

Glean Dictionary page for Firefox Desktop

To help locate the metrics we are interested in, start typing in the search box at the top of the list. For instance, if we were interested in urlbar metrics:

Using the Glean Dictionary search bar

From here we can select a metric, such as “urlbar.engagement” to see more information about it:

Detail view of urlbar.engagement metric

Here we can see this metric currently has active sampling configurations in both release and nightly. Let’s say we wish to add one for beta also. Our next step is to click the copy to clipboard button next to “Sampling Configuration Snippet”:

Sampling configuration snippet button

With the snippet copied to the clipboard, we return to Experimenter and our rollout configuration. We can now paste this snippet into the text-box like below:

Pasting into the branch configuration text box

That’s all that needs to be done here, if this is the only metric we need to configure. But what if we want to configure more than one? Then it’s back to Glean Dictionary to find the rest of the metrics we are interested in. Let’s say we are also interested in the “exposure” metric.

Searching for the urlbar.exposure metric

As we select the exposure metric from the list, we can see it isn’t currently being sampled by any experiments or rollouts, and we again find the button to copy the configuration snippet to the clipboard.

Back to the sampling configuration snippet button

Now, we can paste this just below the other snippet inside of Experimenter.

Pasting the new metric into Experimenter

As you can see, the JSON validator isn't happy and there's a red squiggle indicating that there's a problem. We only need part of the newly pasted snippet, so we copy its "urlbar.exposure": true portion and add a comma after the "urlbar.engagement": true line in the snippet above.

The JSON validator isn't happy

We can then delete the remains of the snippet below, leaving us with a metric configuration with multiple metrics in it. This can be repeated for all the necessary metrics required by the rollout or experiment.

Cleaning up the branch payload

Adding pings to the configuration

This same procedure can be used to enable and disable pings, also. In order to copy a snippet for a ping, navigate to the “Pings” tab for the application on the Glean Dictionary.

Navigate to the Pings tab

From here, select a ping that is desired to be configured remotely. For instance, the “crash” ping:

Selecting the crash ping

Just like with the metric, select the button to copy the configuration snippet to your clipboard, then paste it into the Experimenter setup.

Adding the ping configuration to the Experimenter setup

This time we need everything in the "pings_enabled" section: copy it and paste it below the "metrics_enabled" section in the snippet above. We also need to add a comma after the metrics section's closing curly brace, like this:

Copying the ping config into the right spot

Then we can delete the remains of the pasted snippet at the bottom, leaving us with:

Cleaning up the ping configuration

Wrapping up

That should be everything needed to enable the two urlbar metrics, as well as the crash ping. Additional experiment branches can be configured in a similar fashion, or if this is a rollout, it should be ready to launch if the metric configuration is all that is needed.

YAML Registry Format

User defined Glean pings and metrics are declared in YAML files, which must be parsed by glean_parser to generate public APIs for said metrics and pings.

These files also serve the purpose of documenting metrics and pings. They are consumed by the probe-scraper tool, which generates a REST API to access metrics and pings information consumed by most other tools in the Glean ecosystem, such as GLAM and the Glean Dictionary.

Moreover, for products that do not wish to use the Glean Dictionary as their metrics and pings documentation source, glean_parser provides an option to generate Markdown documentation for metrics and pings based on these files. For more information on that, refer to the help output of the translate command by running in your terminal:

$ glean_parser translate --help

metrics.yaml file

For a full reference on the metrics.yaml format, refer to the Metrics YAML Registry Format page.

pings.yaml file

For a full reference on the pings.yaml format, refer to the Pings YAML Registry Format page.

tags.yaml file

For a full reference on the tags.yaml format, refer to the Tags YAML Registry Format page.

Metrics YAML Registry Format

Metrics sent by an application or library are defined in YAML files which follow the metrics.yaml JSON schema.

These files must be parsed by glean_parser at build time in order to generate code in the target language (e.g. Kotlin, Swift, ...). The generated code is what becomes the public API to access the project's metrics.

For more information on how to introduce the glean_parser build step for a specific language / environment, refer to the "Adding Glean to your project" section of this book.

Note on the naming of these files

Although we refer to metrics definitions YAML files as metrics.yaml throughout the Glean documentation, these files may be named whatever makes the most sense for each project and may even be broken down into multiple files, if necessary.

File structure

---
# Schema
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

$tags:
  - frontend

# Category
toolbar:
  # Name
  click:
    # Metric Parameters
    type: event
    description: |
      Event to record toolbar clicks.
    metadata:
      tags:
        - Interaction
    notification_emails:
      - CHANGE-ME@example.com
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    expires: 2019-06-01

  double_click:
    ...

Schema

Declaring the schema at the top of a metrics definitions file is required, as it is what indicates that the current file is a metrics definitions file.

$tags

You may optionally declare tags at the file level that apply to all metrics in that file.

Category

Categories are the top-level keys on metrics definition files. One single definition file may contain multiple categories grouping multiple metrics. They serve the purpose of grouping related metrics in a project.

Categories can contain alphanumeric lower case characters as well as the . and _ characters which can be used to provide extra structure, for example category.subcategory is a valid category. Category lengths may not exceed 40 characters.

Categories may not start with the string glean. That prefix is reserved for Glean internal metrics.

See the "Capitalization" note to understand how the category is formatted in generated code.

Name

Metric names are the second-level keys on metrics definition files.

Names may contain alphanumeric lower case characters as well as the _ character. Metric name lengths may not exceed 30 characters.

"Capitalization" rules also apply to metric names on generated code.
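The category and name rules above can be sketched as a small validation helper. This is illustrative only; the real checks are performed by glean_parser and may differ in detail:

```python
import re

# Lowercase alphanumerics; "." and "_" are allowed as separators in categories.
CATEGORY_RE = re.compile(r"^[a-z0-9]+(?:[._][a-z0-9]+)*$")
# Metric names allow "_" as a separator, but not ".".
NAME_RE = re.compile(r"^[a-z0-9]+(?:_[a-z0-9]+)*$")


def is_valid_category(category: str) -> bool:
    """Max 40 characters, may not start with the reserved 'glean' prefix."""
    return (
        len(category) <= 40
        and not category.startswith("glean")
        and CATEGORY_RE.match(category) is not None
    )


def is_valid_metric_name(name: str) -> bool:
    """Max 30 characters, lowercase alphanumerics and underscores."""
    return len(name) <= 30 and NAME_RE.match(name) is not None
```

For example, is_valid_category("category.subcategory") holds, while is_valid_category("glean.internal") does not.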

Metric parameters

Specific metric types may have special required parameters in their definition; these parameters are documented in each "Metric Type" reference page.

The following parameters are common to all metric types.

Required parameters

type

Specifies the type of a metric, like "counter" or "event". This defines which operations are valid for the metric, how it is stored and how data analysis tooling displays it. See the list of supported metric types.

Types should not be changed after release

Once a metric is defined in a product, its type should only be changed in rare circumstances. It's better to rename the metric with the new type instead. The ingestion pipeline will create a new column for a metric with a changed type. Any new analysis will need to use the new column going forward. The old column will still be populated with data from old clients.

description

A textual description of the metric for humans. It should describe what the metric does, what it means for analysts, and its edge cases or any other helpful information.

The description field may contain markdown syntax.

Imposed limits on line length

The Glean linter uses a line length limit of 80 characters. If your description is longer, e.g. because it includes longer links, you can disable yamllint using the following annotations (and make sure to enable yamllint again as well):

# yamllint disable
description: |
  Your extra long description, that's longer than 80 characters by far.
# yamllint enable

notification_emails

A list of email addresses to notify for important events with the metric or when people with context or ownership for the metric need to be contacted.

For example, when a metric's expiration is within 14 days, emails will be sent from telemetry-alerts@mozilla.com to the notification_emails addresses associated with the metric.

Consider adding both a group email address and an individual who is responsible for this metric.

bugs

A list of bugs (e.g. Bugzilla or GitHub) that are relevant to this metric. For example, bugs that track its original implementation or later changes to it.

Each entry should be the full URL to the bug in an issue tracker. The use of numbers alone is deprecated and will be an error in the future.

data_reviews

A list of URIs to any data collection review responses relevant to the metric.

expires

When the metric is set to expire.

After a metric expires, an application will no longer collect or send data related to it. The value may be one of the following:

  • <build date>: An ISO date yyyy-mm-dd in UTC on which the metric expires. For example, 2019-03-13. This date is checked at build time. Except in special cases, this form should be used so that the metric automatically "sunsets" after a period of time. Emails will be sent to the notification_emails addresses when the metric is about to expire. Generally, when a metric is no longer needed, it should simply be removed. This does not affect the availability of data already collected by the pipeline.
  • <major version>: An integer greater than 0 representing the major version the metric expires in. For example, 11. The version is checked at build time against the major version provided to glean_parser (see e.g. Build configuration for Android, Build configuration for iOS) and is only valid if a major version is provided at build time. If no major version is provided at build time and expiration by major version is used for a metric, an error is raised. Note that mixing expiration by date and by version is not allowed within a product.
  • never: This metric never expires.
  • expired: This metric is manually expired.
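The four forms above can be sketched as a build-time check. This is an illustrative sketch, not glean_parser's actual implementation:

```python
from datetime import date, datetime, timezone
from typing import Optional


def is_expired(expires: str, major_version: Optional[int] = None) -> bool:
    """Mirror the expiry rules: 'never', 'expired', a major version, or an ISO date."""
    if expires == "never":
        return False
    if expires == "expired":
        return True
    if expires.isdigit():
        # Expiration by major version requires a version provided at build time.
        if major_version is None:
            raise ValueError("expiration by version requires a build-time major version")
        return major_version >= int(expires)
    # Otherwise treat the value as an ISO yyyy-mm-dd date, checked in UTC.
    return datetime.now(timezone.utc).date() >= date.fromisoformat(expires)
```

For example, a metric with expires: 11 is expired in a build whose major version is 12, but not in one whose major version is 10.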

Optional parameters

tags

default: []

A list of tag names associated with this metric. Must correspond to an entry specified in a tags file.

lifetime

default: ping

Defines the lifetime of the metric. Different lifetimes affect when the metric's value is reset.

ping (default)

The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.

application

The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake is using the ping lifetime for these types of metrics, which means they will only be included in the first ping sent during a particular run of the application.

user

NOTE: Reach out to the Glean team before using this.

The metric is part of the user's profile and will live as long as the profile lives. This is often not the best choice unless the metric records a value that really needs to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id, the day the product was first executed. It is rare to use this lifetime outside of some metrics that are built in to the Glean SDK.

send_in_pings

default: events|metrics

Defines which pings the metric should be sent on. If not specified, the metric is sent on the default ping, which is the events ping for events and the metrics ping for everything else.

Most metrics don't need to specify this unless they are sent on custom pings.

The special value default may be used, in case it's required for a metric to be sent on the default ping as well as in a custom ping.

Adding metrics to every ping

For the small number of metrics that should be in every ping the Glean SDKs will eventually provide a solution. See bug 1695236 for details.

send_in_pings:
  - my-custom-ping
  - default

disabled

default: false

Data collection for this metric is disabled.

This is useful when you want to temporarily disable the collection for a specific metric without removing references to it in your source code.

Generally, when a metric is no longer needed, it should simply be removed. This does not affect the availability of data already collected by the pipeline.

version

default: 0

The version of the metric. A monotonically increasing integer value. This should be bumped if the metric changes in a backward-incompatible way.

data_sensitivity

default: []

A list of data sensitivity categories that the metric falls under. There are four data collection categories related to data sensitivity defined in Mozilla's data collection review process:

Category 1: Technical Data (technical)

Information about the machine or Firefox itself. Examples include OS, available memory, crashes and errors, outcome of automated processes like updates, safe browsing, activation, versions, and build id. This also includes compatibility information about features and APIs used by websites, add-ons, and other 3rd-party software that interact with Firefox during usage.

Category 2: Interaction Data (interaction)

Information about the user’s direct engagement with Firefox. Examples include how many tabs, add-ons, or windows a user has open; uses of specific Firefox features; session length, scrolls and clicks; and the status of discrete user preferences. It also includes information about the user's in-product journeys and product choices helpful to understand engagement (attitudes). For example, selections of add-ons or tiles to determine potential interest categories etc.

Category 3: Stored Content & Communications (stored_content)

(formerly Web activity data, web_activity)

Information about what people store, sync, communicate or connect to where the information is generally considered to be more sensitive and personal in nature. Examples include users' saved URLs or URL history, specific web browsing history, general information about their web browsing history (such as TLDs or categories of webpages visited over time) and potentially certain types of interaction data about specific web pages or stories visited (such as highlighted portions of a story). It also includes information such as content saved by users to an individual account like saved URLs, tags, notes, passwords and files as well as communications that users have with one another through a Mozilla service.

Category 4: Highly sensitive data or clearly identifiable personal data (highly_sensitive)

Information that directly identifies a person, or if combined with other data could identify a person. This data may be embedded within specific website content, such as memory contents, dumps, captures of screen data, or DOM data. Examples include account registration data like name, password, and email address associated with an account, payment data in connection with subscriptions or donations, contact information such as phone numbers or mailing addresses, email addresses associated with surveys, promotions and customer support contacts. It also includes any data from different categories that, when combined, can identify a person, device, household or account. For example Category 1 log data combined with Category 3 saved URLs. Additional examples are: voice audio commands (including a voice audio file), speech-to-text or text-to-speech (including transcripts), biometric data, demographic information, and precise location data associated with a persistent identifier, individual or small population cohorts. This is location inferred or determined from mechanisms other than IP such as wi-fi access points, Bluetooth beacons, cell phone towers or provided directly to us, such as in a survey or a profile.

Pings YAML Registry Format

Custom pings sent by an application or library are defined in YAML files which follow the pings.yaml JSON schema.

These files must be parsed by glean_parser at build time in order to generate code in the target language (e.g. Kotlin, Swift, ...). The generated code is what becomes the public API to access the project's custom pings.

For more information on how to introduce the glean_parser build step for a specific language / environment, refer to the "Adding Glean to your project" section of this book.

Note on the naming of these files

Although we refer to pings definitions YAML files as pings.yaml throughout the Glean documentation, these files may be named whatever makes the most sense for each project and may even be broken down into multiple files, if necessary.

File structure

---
# Schema
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

# Name
search:
  # Ping parameters
  description: >
    A ping to record search data.
  include_client_id: false
  notification_emails:
    - CHANGE-ME@example.com
  bugs:
    - http://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review

Schema

Declaring the schema at the top of a pings definitions file is required, as it is what indicates that the current file is a pings definitions file.

Name

Ping names are the top-level keys on pings definitions files. One single definition file may contain multiple ping declarations.

Ping names are limited to lowercase letters from the ISO basic Latin alphabet and hyphens, with a maximum length of 30 characters.

Pings may not contain the words custom or ping in their names. These are considered redundant words and will trigger a REDUNDANT_PING lint failure on glean_parser.

"Capitalization" rules apply to ping names on generated code.

Reserved ping names

The names baseline, metrics, events, deletion-request, default and all-pings are reserved and may not be used as the name of a custom ping.
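Taken together, the ping-name rules can be sketched as a small helper. This is illustrative only; glean_parser performs the real linting:

```python
import re

RESERVED_NAMES = {
    "baseline", "metrics", "events", "deletion-request", "default", "all-pings",
}
# Lowercase basic Latin letters, hyphen-separated.
PING_NAME_RE = re.compile(r"^[a-z]+(?:-[a-z]+)*$")


def is_valid_ping_name(name: str) -> bool:
    """Max 30 characters, not reserved, no redundant 'custom'/'ping' words."""
    return (
        len(name) <= 30
        and name not in RESERVED_NAMES
        and "custom" not in name
        and "ping" not in name
        and PING_NAME_RE.match(name) is not None
    )
```

For example, is_valid_ping_name("search") holds, while "my-custom-ping" fails on both the redundant words and "baseline" fails because it is reserved.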

Ping parameters

Required parameters

description

A textual description of the purpose of the ping. It may contain markdown syntax.

metadata

default: {}

A dictionary of extra metadata associated with this ping.

tags

default: []

A list of tag names associated with this ping. Must correspond to an entry specified in a tags file.

ping_schedule

default: []

A list of ping names. When one of those pings is sent, then this ping is also sent, with the same reason. This is useful if you want a ping to be scheduled and sent at the same frequency as another ping, like baseline.

Pings cannot list themselves under ping_schedule. However, it is possible to accidentally create cycles of pings where Ping A schedules Ping B, which schedules Ping C, which in turn schedules Ping A. This can result in a constant stream of pings being sent. Please use caution with ping_schedule, and ensure that you have not accidentally created any cycles among the ping references.
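The cycle hazard described above can be checked with a simple depth-first walk over the schedule graph. This is an illustrative sketch, not Glean code:

```python
def has_schedule_cycle(schedules: dict) -> bool:
    """schedules maps a ping name to the list of pings in its ping_schedule."""
    visiting, done = set(), set()

    def visit(ping: str) -> bool:
        if ping in done:
            return False
        if ping in visiting:
            return True  # back edge: we found a cycle
        visiting.add(ping)
        for other in schedules.get(ping, []):
            if visit(other):
                return True
        visiting.remove(ping)
        done.add(ping)
        return False

    return any(visit(ping) for ping in schedules)
```

For example, {"a": ["b"], "b": ["a"]} contains a cycle, while {"my-ping": ["baseline"]} does not.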

include_client_id

A boolean indicating whether to include the client_id in the client_info section of the ping.

notification_emails

A list of email addresses to notify for important events with the ping or when people with context or ownership for the ping need to be contacted.

Consider adding both a group email address and an individual who is responsible for this ping.

bugs

A list of bugs (e.g. Bugzilla or GitHub) that are relevant to this ping. For example, bugs that track its original implementation or later changes to it.

Each entry should be the full URL to the bug in an issue tracker. The use of numbers alone is deprecated and will be an error in the future.

data_reviews

A list of URIs to any data collection review responses relevant to the metric.

Optional parameters

send_if_empty

default: false

A boolean indicating if the ping is sent if it contains no metric data.

reasons

default: {}

The reasons that this ping may be sent. The keys are the reason codes, and the values are a textual description of each reason. The ping payload will (optionally) contain one of these reasons in the ping_info.reason field.

Tags YAML Registry Format

Any number of custom "tags" can be added to any metric or ping. This can be useful in data discovery tools like the Glean Dictionary. The tags for an application are defined in YAML files which follow the tags.yaml JSON schema.

These files must be parsed by glean_parser at build time in order to generate the metadata.

For more information on how to introduce the glean_parser build step for a specific language / environment, refer to the "Adding Glean to your project" section of this book.

Note on the naming of these files

Although we refer to tag definitions YAML files as tags.yaml throughout the Glean documentation, these files may be named whatever makes the most sense for each project and may even be broken down into multiple files, if necessary.

File structure

---
# Schema
$schema: moz://mozilla.org/schemas/glean/tags/1-0-0

Search:
  description: Metrics or pings in the "search" domain

Schema

Declaring the schema at the top of a tags definitions file is required, as it is what indicates that the current file is a tag definitions file.

Name

Tag names are the top-level keys on tag definitions files. One single definition file may contain multiple tag declarations.

There is no restriction on the name of a tag, aside from a maximum length of 80 characters.

Tag parameters

Required parameters

description

A textual description of the tag. It may contain markdown syntax.

The General API

The Glean SDKs have a minimal API available on their top-level Glean object, called the General API. Among other things, this API allows you to enable and disable upload, register custom pings and set experiment data.

Only initialize in the main application!

Glean should only be initialized from the main application, not individual libraries. If you are adding Glean support to a library, you can safely skip this section.

The API

The Glean SDKs provide a general API that supports the following operations. See API reference pages for SDK-specific details.

  • initialize: Configure and initialize the Glean SDK. See Initializing the Glean SDK.
  • setUploadEnabled: Enable or disable Glean collection and upload. See Toggling upload status.
  • registerPings: Register custom pings generated from pings.yaml. See Custom pings.
  • setExperimentActive: Indicate that an experiment is running. See Using the Experiments API.
  • setExperimentInactive: Indicate that an experiment is no longer running. See Using the Experiments API.
  • registerEventListener: Register a callback by which a consumer can be notified of all event metrics being recorded. See Glean Event Listener.

Initializing

Glean needs to be initialized in order to be able to send pings, record metrics and perform maintenance tasks. Thus it is advised that Glean be initialized as soon as possible in an application's lifetime and, importantly, before any other libraries in the application start using Glean.

Libraries are not required to initialize Glean

Libraries rely on the same Glean singleton as the application in which they are embedded. Hence, they are not expected to initialize Glean as the application should already do that.

Behavior when uninitialized

Any API call made before Glean is initialized is queued and applied at initialization. To avoid unbounded memory growth the queue is bounded (currently to a maximum of 100 tasks), and further calls are dropped.

The number of calls dropped, if any, is recorded in the glean.error.preinit_tasks_overflow metric.
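The queueing behavior described above can be sketched as follows. This is an illustration of the concept, not Glean's actual implementation; the limit of 100 matches the documented default.

```python
import threading

# Illustrative sketch of the pre-initialization task queue described above.
class PreInitQueue:
    MAX_TASKS = 100

    def __init__(self):
        self._lock = threading.Lock()
        self._tasks = []
        # Reported via the glean.error.preinit_tasks_overflow metric.
        self.overflow_count = 0

    def enqueue(self, task):
        """Queue a task; drop (and count) it once the queue is full."""
        with self._lock:
            if len(self._tasks) >= self.MAX_TASKS:
                self.overflow_count += 1
                return
            self._tasks.append(task)

    def flush(self):
        """Run all queued tasks in order, as Glean does at initialization."""
        with self._lock:
            tasks, self._tasks = self._tasks, []
        for task in tasks:
            task()
```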

Behavior once initialized

When upload is enabled

Once initialized, if upload is enabled, Glean applies all metric recordings and ping submissions, for both user-defined and built-in metrics and pings.

This always happens asynchronously.

When upload is disabled

If upload is disabled, any persisted metrics, events and pings (other than first_run_date) are cleared. Pending deletion-request pings are sent. Subsequent calls to record metrics and submit pings will be no-ops. Because Glean does that as part of its initialization, users are required to always initialize Glean.

Glean must be initialized even if upload is disabled.

This does not apply to special builds where telemetry is disabled at build time. In that case, it is acceptable to not call initialize at all.

API

Glean.initialize(configuration)

Initializes Glean.

May only be called once. Subsequent calls to initialize are no-ops.

Configuration

The available initialize configuration options may vary depending on the SDK. Below are listed the configuration options available on most SDKs. Note that on some SDKs some of the options are taken as a configuration object. Check the respective SDK documentation for details.

| Configuration Option | Default value | Description |
|----------------------|---------------|-------------|
| applicationId | On Android/iOS: determined automatically. Otherwise: required. | Application identifier. For Android and iOS applications, this is the id used on the platform's respective app store and is extracted automatically from the application context. |
| uploadEnabled | Required | The user preference on whether or not data upload is enabled. |
| channel | - | The application's release channel. When present, the app_channel will be reported in all pings' client_info sections. |
| appBuild | On Android/iOS: determined automatically. Otherwise: - | A build identifier, e.g. the build identifier generated by a CI system (e.g. "1234/A"). If not present, app_build will be reported as "Unknown" in all pings' client_info sections. |
| appDisplayVersion | - | The user visible version string for the application running Glean. If not present, app_display_version will be reported as "Unknown" in all pings' client_info sections. |
| serverEndpoint | https://incoming.telemetry.mozilla.org | The server pings are sent to. |
| maxEvents | Glean.js: 1. Other SDKs: 500. | The maximum number of events the Glean storage will hold on to before submitting the 'events' ping. Refer to the events ping documentation for more information on its scheduling. |
| httpUploader | - | A custom HTTP uploader instance that will override Glean's provided uploader. Useful for users that wish to use specific uploader implementations. See Custom Uploaders for more information on how and when to use this feature. |
| logLevel | - | How verbose the internal logging is. The level filter options, in order from least to most verbose, are: Off, Error, Warn, Info, Debug, Trace. See the log crate docs for more information. |
| enableEventTimestamps | true | Whether to add a wall clock timestamp to all events. |
| rateLimit | 15 pings per 60s interval | The maximum number of pings that can be uploaded per interval of a specified number of seconds. |
| experimentationId | - | Optional. An identifier derived by the application to be sent in all pings for the purpose of experimentation. See the experiments API documentation for more information. |
| enableInternalPings | true | Whether to enable the internal "baseline", "events", and "metrics" pings. |
| delayPingLifetimeIo | false | Whether Glean should delay persistence of data from metrics with ping lifetime. On Android, data is automatically persisted every 1000 writes and on backgrounding when enabled. |

To learn about SDK specific configuration options available, refer to the Reference section.
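As an illustration of the rateLimit option's semantics, a fixed-window limiter allowing 15 pings per 60-second interval might be sketched like this. This is an assumption for clarity, not Glean's actual uploader code:

```python
import time

# Illustrative sketch of "at most `max_pings` uploads per
# `interval_seconds` window" (15 per 60s by default).
class PingRateLimiter:
    def __init__(self, max_pings=15, interval_seconds=60, clock=time.monotonic):
        self.max_pings = max_pings
        self.interval_seconds = interval_seconds
        self._clock = clock
        self._window_start = clock()
        self._count = 0

    def allow_upload(self) -> bool:
        now = self._clock()
        # Start a fresh window once the interval has elapsed.
        if now - self._window_start >= self.interval_seconds:
            self._window_start = now
            self._count = 0
        if self._count < self.max_pings:
            self._count += 1
            return True
        # Over the limit: the ping stays queued for a later window.
        return False
```

The injectable `clock` parameter is a testing convenience, not something the Glean configuration exposes.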

Always initialize Glean with the correct upload preference

Glean must always be initialized with real values.

Always pass the user preference, e.g. Glean.initialize(uploadEnabled=userSettings.telemetryEnabled) or the equivalent for your application.

Calling Glean.setUploadEnabled(false) at a later point will trigger deletion-request pings and regenerate client IDs. This should only be done if the user preference actually changes.

An excellent place to initialize Glean is within the onCreate method of the class that extends Android's Application class.

import org.mozilla.yourApplication.GleanMetrics.GleanBuildInfo
import org.mozilla.yourApplication.GleanMetrics.Pings

class SampleApplication : Application() {

    override fun onCreate() {
        super.onCreate()

        // If you have custom pings in your application, you must register them
        // using the following command. This command should be omitted for
        // applications not using custom pings.
        Glean.registerPings(Pings)

        // Initialize the Glean library.
        Glean.initialize(
            applicationContext,
            // Here, `settings()` is a method to get user preferences, specific to
            // your application and not part of the Glean API.
            uploadEnabled = settings().isTelemetryEnabled,
            configuration = Configuration(),
            buildInfo = GleanBuildInfo.buildInfo
        )
    }
}

The Glean Kotlin SDK supports use across multiple processes. This is enabled by setting a dataPath value in the Glean.Configuration object passed to Glean.initialize. You do not need to set a dataPath for your main process. This configuration should only be used by a non-main process.

Requirements for a non-main process:

  • Glean.initialize must be called with the dataPath value set in the Glean.Configuration.
  • The default dataPath for Glean is {context.applicationInfo.dataDir}/glean_data. If you try to use this path, Glean.initialize will fail and throw an error.
  • Set the default process name as your main process. If this is not set up correctly, pings from the non-main process will not be sent. Configuration.Builder().setDefaultProcessName(<main_process_name>)

Note: When initializing from a non-main process with a specified dataPath, the lifecycle observers will not be set up. This means you will not receive otherwise scheduled baseline or metrics pings.

Consuming Glean through Android Components

When the Glean Kotlin SDK is consumed through Android Components, it is required to configure an HTTP client to be used for upload.

For example:

// Requires `org.mozilla.components:concept-fetch`
import mozilla.components.concept.fetch.Client

// Requires `org.mozilla.components:lib-fetch-httpurlconnection`.
// This can be replaced by other implementations, e.g. `lib-fetch-okhttp`
// or an implementation from `browser-engine-gecko`.
import mozilla.components.lib.fetch.httpurlconnection.HttpURLConnectionClient
import mozilla.components.service.glean.config.Configuration
import mozilla.components.service.glean.net.ConceptFetchHttpUploader

val httpClient = ConceptFetchHttpUploader(lazy { HttpURLConnectionClient() as Client })
val config = Configuration(httpClient = httpClient)
Glean.initialize(
   context,
   uploadEnabled = true,
   configuration = config,
   buildInfo = GleanBuildInfo.buildInfo
)

An excellent place to initialize Glean is within the application(_:) method of the class that extends the UIApplicationDelegate class.

import Glean
import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_: UIApplication, didFinishLaunchingWithOptions _: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // If you have custom pings in your application, you must register them
        // using the following command. This command should be omitted for
        // applications not using custom pings.
        Glean.shared.registerPings(GleanMetrics.Pings)

        // Initialize the Glean library.
        Glean.shared.initialize(
            // Here, `Settings` is a method to get user preferences specific to
            // your application, and not part of the Glean API.
            uploadEnabled: Settings.isTelemetryEnabled,
            configuration: Configuration(),
            buildInfo: GleanMetrics.GleanBuild.info
        )

        return true
    }
}

The Glean Swift SDK supports use across multiple processes. This is enabled by setting a dataPath value in the Glean.Configuration object passed to Glean.initialize. You do not need to set a dataPath for your main process. This configuration should only be used by a non-main process.

Requirements for a non-main process:

  • Glean.initialize must be called with the dataPath value set in the Glean.Configuration.
  • On iOS devices, Glean stores data in the Application Support directory. The default dataPath Glean uses is {FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask)[0]}/glean_data. If you try to use this path, Glean.initialize will fail and throw an error.

Note: When initializing from a non-main process with a specified dataPath, the lifecycle observers will not be set up. This means you will not receive otherwise scheduled baseline or metrics pings.

The main control for the Glean Python SDK is on the glean.Glean singleton.

from glean import Glean, Configuration

Glean.initialize(
    application_id="my-app-id",
    application_version="0.1.0",
    # Here, `is_telemetry_enabled` is a method to get user preferences specific to
    # your application, and not part of the Glean API.
    upload_enabled=is_telemetry_enabled(),
    configuration=Configuration(),
)

Unlike in other implementations, the Python SDK does not automatically send any pings. See the custom pings documentation about adding custom pings and sending them.

The Glean Rust SDK should be initialized as soon as possible.

use glean::{ClientInfoMetrics, Configuration};

// `data_path` is a PathBuf pointing at the directory where your
// application wants Glean to store its data.
let cfg = Configuration {
    data_path,
    application_id: "my-app-id".into(),
    // Here, `is_telemetry_enabled` is a method to get user preferences specific to
    // your application, and not part of the Glean API.
    upload_enabled: is_telemetry_enabled(),
    max_events: None,
    delay_ping_lifetime_io: false,
    server_endpoint: Some("https://incoming.telemetry.mozilla.org".into()),
    uploader: None,
    use_core_mps: true,
};

let client_info = ClientInfoMetrics {
    app_build: env!("CARGO_PKG_VERSION").to_string(),
    app_display_version: env!("CARGO_PKG_VERSION").to_string(),
    channel: None,
    locale: None,
};

glean::initialize(cfg, client_info);

The Glean Rust SDK does not support use across multiple processes, and must only be initialized on the application's main process.

Unlike in other implementations, the Rust SDK does not provide a default uploader. See PingUploader for details.

import Glean from "@mozilla/glean/<platform>";

Glean.initialize(
    "my-app-id",
    // Here, `isTelemetryEnabled` is a method to get user preferences specific to
    // your application, and not part of the Glean API.
    isTelemetryEnabled(),
    {
      appDisplayVersion: "0.1.0"
    }
);

Custom Uploaders

A custom HTTP uploader may be provided at initialization time in order to override Glean's native ping uploader implementation. Each SDK exposes a base class for Glean users to extend into their own custom uploaders.

See BaseUploader for details on how to implement a custom upload on Kotlin.

See HttpPingUploader for details on how to implement a custom upload on Swift.

See BaseUploader for details on how to implement a custom upload on Python.

See PingUploader for details on how to implement a custom upload on Rust.

import { Uploader, UploadResult, UploadResultStatus } from "@mozilla/glean/uploader";
import Glean from "@mozilla/glean/<platform>";

/**
 * My custom uploader implementation
 */
export class MyCustomUploader extends Uploader {
  async post(url: string, body: string, headers: Record<string, string>): Promise<UploadResult> {
    // My custom POST request code
  }
}

Glean.initialize(
    "my-app-id",
    // Here, `isTelemetryEnabled` is a method to get user preferences specific to
    // your application, and not part of the Glean API.
    isTelemetryEnabled(),
    {
      httpClient: new MyCustomUploader()
    }
);

Testing API

When unit testing metrics and pings, Glean needs to be put in testing mode. Initializing Glean for tests is referred to as "resetting". It is advised that Glean is reset before each unit test to prevent side effects of one unit test impacting others.

How to do that and the definition of "testing mode" varies per Glean SDK. Refer to the information below for SDK specific information.

Using the Glean Kotlin SDK's unit testing API requires adding Robolectric 4.0 or later as a testing dependency. In Gradle, this can be done by declaring a testImplementation dependency:

dependencies {
    testImplementation "org.robolectric:robolectric:4.3.1"
}

In order to put the Glean Kotlin SDK into testing mode apply the JUnit GleanTestRule to your test class. Testing mode will prevent issues with async calls when unit testing the Glean SDK on Kotlin. It also enables uploading and clears the recorded metrics at the beginning of each test run.

The rule can be used as shown:

@RunWith(AndroidJUnit4::class)
class ActivityCollectingDataTest {
    // Apply the GleanTestRule to set up a disposable Glean instance.
    // Please note that this clears the Glean data across tests.
    @get:Rule
    val gleanRule = GleanTestRule(ApplicationProvider.getApplicationContext())

    @Test
    fun checkCollectedData() {
      // The Glean Kotlin SDK testing APIs can be called here.
    }
}

This ensures that metrics have finished recording before the other test functions are used.

Note: There's no automatic test rule for Glean tests implemented in Swift.

In order to prevent issues with async calls when unit testing the Glean SDK, it is important to put the Glean Swift SDK into testing mode. When the Glean Swift SDK is in testing mode, it enables uploading and clears the recorded metrics at the beginning of each test run.

Activate it by resetting Glean in your test's setup:

// All pings and metrics testing APIs are marked as `internal`
// so you need to import `Glean` explicitly in test mode.
import XCTest

class GleanUsageTests: XCTestCase {
    override func setUp() {
        Glean.shared.resetGlean(clearStores: true)
    }

    // ...
}

This ensures that metrics have finished recording before the other test functions are used.

The Glean Python SDK contains a helper function glean.testing.reset_glean() for resetting Glean for tests. It has two required arguments: the application ID, and the application version.

Each reset of the Glean Python SDK will create a new temporary directory for Glean to store its data in. This temporary directory is automatically cleaned up the next time the Glean Python SDK is reset or when the testing framework finishes.

The instructions below assume you are using pytest as the test runner. Other test-running libraries have similar features, but are different in the details.

Create a file conftest.py at the root of your test directory, and add the following to reset Glean at the start of every test in your suite:

import pytest
from glean import testing

@pytest.fixture(name="reset_glean", scope="function", autouse=True)
def fixture_reset_glean():
    testing.reset_glean(application_id="my-app-id", application_version="0.1.0")
Note

Glean uses a global singleton object. Tests need to run single-threaded or need to ensure exclusivity using a lock.

The Glean Rust SDK contains a helper function test_reset_glean() for resetting Glean for tests. It has three required arguments:

  1. the configuration to use
  2. the client info to use
  3. whether to clear stores before initialization

You can call it like below in every test:

#![allow(unused)]
fn main() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().to_path_buf();

let cfg = glean::Configuration {
  data_path: tmpname,
  application_id: "app-id".into(),
  upload_enabled: true,
  max_events: None,
  delay_ping_lifetime_io: false,
  server_endpoint: Some("invalid-test-host".into()),
  uploader: None,
  use_core_mps: false,
};
let client_info = glean::ClientInfoMetrics::unknown();
glean::test_reset_glean(cfg, client_info, false);
}

The Glean JavaScript SDK contains a helper function testResetGlean() for resetting Glean for tests. It expects the same list of arguments as Glean.initialize.

Each reset of the Glean JavaScript SDK will clear stores. Calling testResetGlean will also make metrics and pings testing APIs available and replace ping uploading with a mock implementation that does not make real HTTP requests.

import { testResetGlean } from "@mozilla/glean/testing"

describe("myTestSuite", () => {
    beforeEach(async () => {
        await testResetGlean("my-test-id");
    });
});

Reference

Toggling upload status

The Glean SDKs provide an API for toggling Glean's upload status after initialization.

Applications instrumented with Glean are expected to provide some form of user interface to allow for toggling the upload status.

Disabling upload

When upload is disabled, the Glean SDK will perform the following tasks:

  1. Submit a deletion-request ping.
  2. Cancel scheduled ping uploads.
  3. Clear metrics and pings data from the client, except for the first_run_date metric.

While upload is disabled, metrics aren't recorded and no data is uploaded.

Enabling upload

When upload is enabled, the Glean SDK will re-initialize its core metrics. The only core metric that is not re-initialized is the first_run_date metric.

While upload is enabled all metrics are recorded as expected and pings are sent to the telemetry servers.

API

Glean.setUploadEnabled(boolean)

Enables or disables upload.

If called prior to initialize this function is a no-op.

If the upload state is not actually changed in between calls to this function, it is also a no-op.

import mozilla.telemetry.glean.Glean

open class MainActivity : AppCompatActivity() {
    override fun onCreate() {
        // ...

        uploadSwitch.setOnCheckedChangeListener { _, isChecked ->
            if (isChecked) {
                Glean.setUploadEnabled(true)
            } else {
                Glean.setUploadEnabled(false)
            }
        }
    }
}
import mozilla.telemetry.glean.Glean

Glean.INSTANCE.setUploadEnabled(false);
import Glean
import UIKit

class ViewController: UIViewController {
    @IBOutlet var enableSwitch: UISwitch!

    // ...

    @IBAction func enableToggled(_: Any) {
        Glean.shared.setUploadEnabled(enableSwitch.isOn)
    }
}
from glean import Glean

Glean.set_upload_enabled(false)
use glean;

glean::set_upload_enabled(false);
import Glean from "@mozilla/glean/web";

const uploadSwitch = document.querySelector("input[type=checkbox].upload-switch");
uploadSwitch.addEventListener("change", event => {
    if (event.target.checked) {
        Glean.setUploadEnabled(true);
    } else {
        Glean.setUploadEnabled(false);
    }
});

Reference

Using the experiments API

The Glean SDKs support tagging all their pings with experiments annotations. The annotations are useful to report that experiments were active at the time the measurements were collected. The annotations are reported in the optional experiments entry in the ping_info section of all pings.

Experiment annotations are not persisted

The experiment annotations set through this API are not persisted by the Glean SDKs. The application or consuming library is responsible for setting the relevant experiment annotations at each run.

It's not required to define experiment IDs and branches

Experiment IDs and branches don't need to be pre-defined in the Glean SDK registry files. Please also note that the extra map is an arbitrary, non-nested String to String map. It also has limits on the size of the keys and values, defined below.

Recording API

setExperimentActive

Annotates Glean pings with experiment data.

// Annotate Glean pings with experiments data.
Glean.setExperimentActive(
  experimentId = "blue-button-effective",
  branch = "branch-with-blue-button",
  extra = mapOf(
    "buttonLabel" to "test"
  )
)
// Annotate Glean pings with experiments data.
Glean.shared.setExperimentActive(
  experimentId: "blue-button-effective",
  branch: "branch-with-blue-button",
  extra: ["buttonLabel": "test"]
)
from glean import Glean

Glean.set_experiment_active(
  experiment_id="blue-button-effective",
  branch="branch-with-blue-button",
  extra={
    "buttonLabel": "test"
  }
)
use std::collections::HashMap;

let mut extra = HashMap::new();
extra.insert("buttonLabel".to_string(), "test".to_string());
glean::set_experiment_active(
    "blue-button-effective".to_string(),
    "branch-with-blue-button".to_string(),
    Some(extra),
);

C++

At present there is no dedicated C++ Experiments API for Firefox Desktop

If you require one, please file a bug.

JavaScript

let FOG = Cc["@mozilla.org/toolkit/glean;1"].createInstance(Ci.nsIFOG);
FOG.setExperimentActive(
  "blue-button-effective",
  "branch-with-blue-button",
  {"buttonLabel": "test"}
);

Limits

  • experimentId, branch, and the keys and values of the extra field are fixed at a maximum length of 100 bytes. Longer strings are truncated. (Specifically, length is measured in the number of bytes when the string is encoded in UTF-8.)
  • extra map is limited to 20 entries. If passed a map which contains more elements than this, it is truncated to 20 elements. WARNING Which items are truncated is nondeterministic due to the unordered nature of maps. What's left may not necessarily be the first elements added.

Recorded errors

  • invalid_value: If the values of experimentId or branch are truncated for length, if the keys or values in the extra map are truncated for length, or if the extra map is truncated for the number of elements.
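The limits and errors above can be illustrated with a small sketch. This is not the SDKs' actual code; it only mirrors the documented behavior (UTF-8 byte lengths, the 20-entry cap, and invalid_value errors):

```python
# Illustrative sketch of the experiments API limits described above.
MAX_LENGTH_BYTES = 100
MAX_EXTRA_ENTRIES = 20

def truncate_utf8(value: str, errors: list) -> str:
    """Truncate a string to at most MAX_LENGTH_BYTES of UTF-8."""
    encoded = value.encode("utf-8")
    if len(encoded) <= MAX_LENGTH_BYTES:
        return value
    errors.append("invalid_value")
    # Drop bytes, then discard any partial code point at the boundary.
    return encoded[:MAX_LENGTH_BYTES].decode("utf-8", errors="ignore")

def normalize_experiment(experiment_id, branch, extra):
    errors = []
    experiment_id = truncate_utf8(experiment_id, errors)
    branch = truncate_utf8(branch, errors)
    if len(extra) > MAX_EXTRA_ENTRIES:
        errors.append("invalid_value")
        # Which entries survive is effectively arbitrary for unordered maps.
        extra = dict(list(extra.items())[:MAX_EXTRA_ENTRIES])
    extra = {truncate_utf8(k, errors): truncate_utf8(v, errors)
             for k, v in extra.items()}
    return experiment_id, branch, extra, errors
```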

setExperimentInactive

Removes the experiment annotation. Should be called when the experiment ends.

Glean.setExperimentInactive("blue-button-effective")
Glean.shared.setExperimentInactive(experimentId: "blue-button-effective")
from glean import Glean

Glean.set_experiment_inactive("blue-button-effective")
glean::set_experiment_inactive("blue-button-effective".to_string());

C++

At present there is no dedicated C++ Experiments API for Firefox Desktop

If you require one, please file a bug.

JavaScript

let FOG = Cc["@mozilla.org/toolkit/glean;1"].createInstance(Ci.nsIFOG);
FOG.setExperimentInactive("blue-button-effective");

Set an experimentation identifier

An experimentation enrollment identifier that is derived and provided by the application can be set through the configuration object passed into the initialize function. See the section on Initializing Glean for more information on how to set this within the Configuration object.

This identifier will be set during initialization and sent along with all pings sent by Glean, unless that ping has opted out of sending the client_id. This identifier is not persisted by Glean and must be persisted by the application if necessary for it to remain consistent between runs.

Limits

The experimentation ID is subject to the same limitations as a string metric type.

Recorded errors

The experimentation ID will produce the same errors as a string metric type.

Testing API

testIsExperimentActive

Reveals whether the experiment is annotated in Glean pings.

assertTrue(Glean.testIsExperimentActive("blue-button-effective"))
XCTAssertTrue(Glean.shared.testIsExperimentActive(experimentId: "blue-button-effective"))
from glean import Glean

assert Glean.test_is_experiment_active("blue-button-effective")
assert!(glean::test_is_experiment_active("blue-button-effective".to_string()));

testGetExperimentData

Returns the recorded experiment data including branch and extras.

assertEquals(
  "branch-with-blue-button", Glean.testGetExperimentData("blue-button-effective")?.branch
)
XCTAssertEqual(
  "branch-with-blue-button",
  Glean.testGetExperimentData(experimentId: "blue-button-effective")?.branch
)
from glean import Glean

assert (
    "branch-with-blue-button" ==
    Glean.test_get_experiment_data("blue-button-effective").branch
)
assert_eq!(
    "branch-with-blue-button",
    glean::test_get_experiment_data("blue-button-effective".to_string()).branch,
);

C++

At present there is no dedicated C++ Experiments API for Firefox Desktop

If you require one, please file a bug.

JavaScript

let FOG = Cc["@mozilla.org/toolkit/glean;1"].createInstance(Ci.nsIFOG);
Assert.equals(
  "branch-with-blue-button",
  FOG.testGetExperimentData("blue-button-effective").branch
);

testGetExperimentationId

Returns the current Experimentation ID, if any.

assertEquals("alpha-beta-gamma-delta", Glean.testGetExperimentationId())
XCTAssertEqual(
  "alpha-beta-gamma-delta",
  Glean.shared.testGetExperimentationId()!,
  "Experimentation ids must match"
)
from glean import Glean

assert "alpha-beta-gamma-delta" == Glean.test_get_experimentation_id()
assert_eq!(
  "alpha-beta-gamma-delta".to_string(),
  glean::test_get_experimentation_id(),
  "Experimentation id must match"
);

Reference

Registering custom pings

After defining custom pings, glean_parser is able to generate code from pings.yaml files into a Pings object, which must be registered so Glean can send those pings by name.

API

registerPings

Loads custom ping metadata into your application or library.

In Kotlin, this object must be registered from your startup code before calling Glean.initialize (such as in your application's onCreate method or a function called from that method).

import org.mozilla.yourApplication.GleanMetrics.Pings

override fun onCreate() {
    Glean.registerPings(Pings)

    Glean.initialize(applicationContext, uploadEnabled = true)
}

In Swift, this object must be registered from your startup code before calling Glean.shared.initialize (such as in your application's UIApplicationDelegate application(_:didFinishLaunchingWithOptions:) method or a function called from that method).

import Glean

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_: UIApplication, didFinishLaunchingWithOptions _: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        Glean.shared.registerPings(GleanMetrics.Pings)

        Glean.shared.initialize(uploadEnabled: true)

        return true
    }
}

For Python, the pings.yaml file must be available and loaded at runtime.

While the Python SDK does provide a Glean.register_ping_type function, if your project is a script (i.e. just Python files in a directory), you can load the pings.yaml before calling Glean.initialize using:

from glean import load_pings

pings = load_pings("pings.yaml")

Glean.initialize(
    application_id="my-app-id",
    application_version="0.1.0",
    upload_enabled=True,
)

If your project is a distributable Python package, you need to include the pings.yaml file using one of the myriad ways to include data in a Python package and then use pkg_resources.resource_filename() to get the filename at runtime.

from glean import load_pings
from pkg_resources import resource_filename

pings = load_pings(resource_filename(__name__, "pings.yaml"))

In Rust, custom pings need to be registered individually. This should be done before calling glean::initialize.

use your_glean_metrics::pings;

glean::register_ping_type(&pings::custom_ping);
glean::register_ping_type(&pings::search);
glean::initialize(cfg, client_info);

Shut down

Provides a way for users to gracefully shut down Glean by blocking until it is finished performing pending tasks such as recording metrics and uploading pings.

How the Glean SDKs execute tasks

Most calls to Glean APIs are dispatched¹. This strategy is adopted because most tasks performed by the Glean SDKs involve file system read or write operations, HTTP requests and other time-consuming actions.

Each Glean SDK has an internal structure called "Dispatcher" which makes sure API calls get executed in the order they were called, while not requiring the caller to block on the completion of each of these tasks.

¹ Here, this term indicates the tasks are run asynchronously in JavaScript, or on a different thread for all other SDKs.
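The dispatcher concept can be sketched as a worker thread draining an ordered queue, with a blocking shutdown. This is a minimal illustration, not Glean's actual implementation:

```python
import queue
import threading

# Minimal sketch of the "Dispatcher" concept: tasks run in submission
# order on a worker thread, and shutdown() blocks until every pending
# task has completed.
class Dispatcher:
    def __init__(self):
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            task = self._queue.get()
            if task is None:  # shutdown sentinel
                break
            task()

    def launch(self, task):
        """Queue a task without blocking the caller."""
        self._queue.put(task)

    def shutdown(self):
        """Block until all previously launched tasks have run."""
        self._queue.put(None)
        self._worker.join()
```

Using a single worker thread is what guarantees tasks execute in the order they were launched while callers never block on individual tasks.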

API

shutdown

fn main() {
    let cfg = Configuration {
        // ...
    };
    let client_info = /* ... */;
    glean::initialize(cfg, client_info);

    // Ensure the dispatcher thread winds down
    glean::shutdown();
}
import Glean from "@mozilla/glean/webext";

async function onUninstall() {
  // Flips Glean upload status to `false`,
  // which triggers sending of a `deletion-request` ping.
  Glean.setUploadEnabled(false);

  // Block on shut down to guarantee all pending pings
  // (including the `deletion-request` sent above)
  // are sent before the extension is uninstalled.
  await Glean.shutdown();

  // Uninstall browser extension without asking for user approval before doing so.
  await browser.management.uninstallSelf({ showConfirmDialog: false });
}

The shutdown API is available for all JavaScript targets, even though the above example is explicitly using the webext target.

Reference

Glean Event Listener

Note: This API is currently experimental and subject to change or elimination. Please reach out to the Glean Team if you are planning on using this API in its experimental state.

Summary

Glean provides an API to register a callback by which a consumer can be notified of all event metrics being recorded.

Usage

Consumers can register a callback through this API which will be called with the base identifier of each event metric when it is recorded. The base identifier of the event consists of the category name and the event name with a dot separator:

<event_category>.<event_name>

Glean will execute the registered callbacks on a background thread independent of the thread in which the event is being recorded in order to not interfere with the collection of the event.

Glean will ensure that event recordings are reported to listeners in the same order that they are recorded by using the same dispatching mechanisms used to ensure events are recorded in the order they are received.
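A minimal sketch of the listener mechanism described above (synchronous here for clarity; the real SDKs call listeners on a background thread):

```python
# Illustrative sketch: listeners are registered under a tag and notified
# with the event's base identifier, "<event_category>.<event_name>".
class EventListenerRegistry:
    def __init__(self):
        self._listeners = {}

    def register(self, tag, callback):
        self._listeners[tag] = callback

    def unregister(self, tag):
        self._listeners.pop(tag, None)

    def on_event_recorded(self, category, name):
        base_identifier = f"{category}.{name}"
        for callback in self._listeners.values():
            callback(base_identifier)
```

The class and method names here are made up for illustration; the SDK-specific APIs are shown in the examples below.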

Examples

class TestEventListener : GleanEventListener {
    val listenerTag: String = "TestEventListener"
    var lastSeenId: String = ""
    var count: Int = 0

    override fun onEventRecorded(id: String) {
        this.lastSeenId = id
        this.count += 1
    }
}

val listener = TestEventListener()
Glean.registerEventListener(listener.listenerTag, listener)

// If necessary to unregister the listener:
Glean.unregisterEventListener(listener.listenerTag)
class TestEventListener: GleanEventListener {
    let listenerTag: String = "TestEventListener"
    var lastSeenId: String = ""
    var count: Int64 = 0

    func onEventRecorded(_ id: String) {
        self.lastSeenId = id
        self.count += 1
    }
}

let listener = TestEventListener()
Glean.shared.registerEventListener(tag: listener.listenerTag, listener: listener)

// If necessary to unregister the listener:
Glean.shared.unregisterEventListener(listener.listenerTag)

Debugging

Different platforms have different ways to enable each debug functionality. They may be enabled

  • through APIs exposed on the Glean singleton,
  • through environment variables set at run time,
  • or through platform specific debug tools.

Platform Specific Information

Check out the platform specific guides on how to use Glean's debug functionalities.

  1. Debugging applications using the Glean Android SDK
  2. Debugging applications using the Glean iOS SDK
  3. Debugging applications using the Glean Python SDK
  4. Debugging applications using the Glean JavaScript SDK

Features

The Glean SDKs provide four debugging features.

Log Pings

This is either true or false and will cause all subsequent pings that are submitted to also be echoed to the device's log.

Debug View Tag

This will tag all subsequent outgoing pings with the provided value, in order to identify them in the Glean Debug View.

Source Tags

This will tag all subsequent outgoing pings with a maximum of 5 comma-separated tags.

Send Pings

This feature is only available for the Kotlin and Swift SDKs and in Firefox Desktop via about:glean.

This expects the name of a ping and forces its immediate submission.

Log pings

This flag causes all subsequent pings that are submitted to also be echoed to the product's log.

Once enabled, the only way to disable this feature is to restart or manually reset the application.

On how to access logs

The Glean SDKs log warnings and errors through platform-specific logging frameworks. See the platform-specific instructions for information on how to view the logs on the platform you are on.

Limits

  • The accepted values are true or false. Any other value will be ignored.

API

setLogPings

Enables or disables ping logging.

This API can safely be called before Glean.initialize. The setting will be applied upon initialization in this case.

use glean;

glean::set_log_pings(true);
import Glean from "@mozilla/glean/<platform>";

Glean.setLogPings(true);

Environment variable

GLEAN_LOG_PINGS

It is also possible to enable ping logging through the GLEAN_LOG_PINGS environment variable.

This variable must be set at runtime, not at compile time. It will be checked upon Glean initialization.

Xcode IDE scheme editor popup screenshot

To set environment variables for the process running your app on an iOS device or emulator, you need to edit the scheme for your app. In the Xcode IDE, use the shortcut Cmd + < to open the scheme editor popup. The environment variables editor is under the Arguments tab of this popup.

$ GLEAN_LOG_PINGS=true python my_application.py
$ GLEAN_LOG_PINGS=true cargo run
$ GLEAN_LOG_PINGS=true ./mach run

Debug View Tag

Tag all subsequent outgoing pings with a given value, in order to redirect them to the Glean Debug View.

"To tag" a ping with the Debug View Tag means that the ping request will contain the X-Debug-Id header with the given tag.

Once enabled, the only way to disable this feature is to restart or manually reset the application.

Limits

  • Any valid HTTP header value is a valid debug view tag (e.g. any value that matches the regex [a-zA-Z0-9-]{1,20}). Invalid values will be ignored.
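The accepted tag format can be illustrated with a short sketch in plain Python, based on the `[a-zA-Z0-9-]{1,20}` rule above (a hypothetical helper, not the SDK's actual validation code):

```python
import re

# 1 to 20 characters drawn from [a-zA-Z0-9-], per the limit stated above.
DEBUG_VIEW_TAG_RE = re.compile(r"^[a-zA-Z0-9-]{1,20}$")

def is_valid_debug_view_tag(tag: str) -> bool:
    """Return True if the tag would be accepted; invalid values are ignored."""
    return bool(DEBUG_VIEW_TAG_RE.match(tag))
```

For example, `is_valid_debug_view_tag("my-tag")` is true, while an empty string, a tag over 20 characters, or one containing spaces would be ignored.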

API

setDebugViewTag

Sets the Debug View Tag to a given value.

This API can safely be called before Glean.initialize. The tag will be applied upon initialization in this case.

import Glean

Glean.shared.setDebugViewTag("my-tag")
use glean;

glean::set_debug_view_tag("my-tag");
import Glean from "@mozilla/glean/<platform>";

Glean.setDebugViewTag("my-tag");

Environment variable

GLEAN_DEBUG_VIEW_TAG

It is also possible to set the debug view tag through the GLEAN_DEBUG_VIEW_TAG environment variable.

This variable must be set at runtime, not at compile time. It will be checked upon Glean initialization.

Xcode IDE scheme editor popup screenshot

To set environment variables for the process running your app on an iOS device or emulator, you need to edit the scheme for your app. In the Xcode IDE, use the shortcut Cmd + < to open the scheme editor popup. The environment variables editor is under the Arguments tab of this popup.

$ GLEAN_DEBUG_VIEW_TAG="my-tag" python my_application.py
$ GLEAN_DEBUG_VIEW_TAG="my-tag" cargo run
$ GLEAN_DEBUG_VIEW_TAG="my-tag" ./mach run

Source Tags

Tag all subsequent outgoing pings with a maximum of 5 comma-separated tags.

"To tag" a ping with Source Tags means that the ping request will contain the X-Source-Tags header with a comma separated list of the given tags.

Once enabled, the only way to disable this feature is to restart or manually reset the application.

Limits

  • Any valid HTTP header value is a valid source tag (e.g. any value that matches the regex [a-zA-Z0-9-]{1,20}) and values starting with the substring glean are reserved for internal Glean usage and thus are also considered invalid. If any value in the list of source tags is invalid, the whole list will be ignored.
  • If the list of tags has more than five members, the whole list will be ignored.
  • The special value automation is meant for tagging pings generated on automation: such pings will be specially handled on the pipeline (i.e. discarded from non-live views).
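The validation rules above can be sketched in plain Python (a hypothetical helper, not the SDK's actual implementation): an oversized list, an invalid value, or a reserved `glean`-prefixed value causes the whole list to be ignored.

```python
import re

# Same value format as the debug view tag: [a-zA-Z0-9-]{1,20}.
SOURCE_TAG_RE = re.compile(r"^[a-zA-Z0-9-]{1,20}$")

def validate_source_tags(tags):
    """Return the tags if every rule passes, otherwise None (whole list ignored)."""
    # More than five tags: the whole list is ignored.
    if len(tags) > 5:
        return None
    for tag in tags:
        # Any invalid value invalidates the whole list.
        if not SOURCE_TAG_RE.match(tag):
            return None
        # Values starting with "glean" are reserved for internal Glean usage.
        if tag.startswith("glean"):
            return None
    return tags
```

Note that `automation` passes validation like any other tag; its special handling happens in the pipeline, not in the client.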

API

setSourceTags

Sets the Source Tags to the given set of values.

This API can safely be called before Glean.initialize. The tags will be applied upon initialization in this case.

import mozilla.telemetry.glean.Glean

Glean.setSourceTags(setOf("my-tag", "your-tag", "our-tag"))
Glean.INSTANCE.setSourceTags(setOf("my-tag", "your-tag", "our-tag"))
import Glean

Glean.shared.setSourceTags(["my-tag", "your-tag", "our-tag"])
use glean;

glean::set_source_tags(vec!["my-tag".into(), "your-tag".into(), "our-tag".into()]);
import Glean from "@mozilla/glean/<platform>";

Glean.setSourceTags(["my-tag", "your-tag", "our-tag"]);

Environment variable

GLEAN_SOURCE_TAGS

It is also possible to set the source tags through the GLEAN_SOURCE_TAGS environment variable.

This variable must be set at runtime, not at compile time. It will be checked upon Glean initialization.

Xcode IDE scheme editor popup screenshot

To set environment variables for the process running your app on an iOS device or emulator, you need to edit the scheme for your app. In the Xcode IDE, use the shortcut Cmd + < to open the scheme editor popup. The environment variables editor is under the Arguments tab of this popup.

$ GLEAN_SOURCE_TAGS=my-tag,your-tag,our-tag python my_application.py
$ GLEAN_SOURCE_TAGS=my-tag,your-tag,our-tag cargo run
$ GLEAN_SOURCE_TAGS=my-tag,your-tag,our-tag ./mach run

Metrics

Not sure which metric type to use? These docs contain a series of questions that can help. Reference information about each metric type is linked below.

The parameters available that apply to any metric type are in the metric parameters page.

There are different metrics to choose from, depending on what you want to achieve:

  • Boolean: Records a single truth value, for example "is a11y enabled?"

  • Labeled boolean: Records truth values for a set of labels, for example "which a11y features are enabled?"

  • Counter: Used to count how often something happens, for example, how often a certain button was pressed.

  • Labeled counter: Used to count how often something happens, for example which kind of crash occurred ("uncaught_exception" or "native_code_crash").

  • String: Records a single Unicode string value, for example the name of the OS.

  • Labeled strings: Records multiple Unicode string values, for example to record which kind of error occurred in different stages of a login process.

  • String List: Records a list of Unicode string values, for example the list of enabled search engines.

  • Timespan: Used to measure how much time is spent in a single task.

  • Timing Distribution: Used to record the distribution of multiple time measurements.

  • Memory Distribution: Used to record the distribution of memory sizes.

  • UUID: Used to record universally unique identifiers (UUIDs), such as a client ID.

  • URL: Used to record URL-like strings.

  • Datetime: Used to record an absolute date and time, such as the time the user first ran the application.

  • Events: Records events, e.g. individual occurrences of user actions, such as every time a view was opened and from where.

  • Custom Distribution: Used to record the distribution of a value that needs fine-grained control of how the histogram buckets are computed. Custom distributions are only available for values that come from Gecko.

  • Quantity: Used to record a single non-negative integer value. For example, the width of the display in pixels.

  • Rate: Used to record the rate something happens relative to some other thing. For example, the number of HTTP connections that experienced an error relative to the number of total HTTP connections made.

  • Text: Records a single long Unicode text, used when the limits on String are too low.

  • Object: Record structured data.

Labeled metrics

There are two types of metrics listed above - labeled and unlabeled metrics. If a metric is labeled, it means that for a single metric entry you define in metrics.yaml, you can record into multiple metrics under the same name, each of the same type and identified by a different string label.

This is useful when you need to break down metrics by a label known at build time or run time. For example:

  • When you want to count a different set of sub-views that users interact with, you could use viewCount["view1"].add() and viewCount["view2"].add().

  • When you want to count errors that might occur for a feature, you could use errorCount[errorName].add().

Labeled metrics come in two forms:

  • Static labels: The labels are specified at build time in the metrics.yaml file, in the labels parameter. If a label that isn't part of this set is used at run time, it is converted to the special label __other__. The number of static labels is limited to 4096 per metric.

  • Dynamic labels: The labels aren't known at build time, so are set at run time. Only the first 16 labels seen by Glean will be tracked. After that, any additional labels are converted to the special label __other__.

Note: Be careful with using arbitrary strings as labels and make sure they can't accidentally contain identifying data (like directory paths or user input).

Label format

Labels must not exceed 71 characters in length, and may comprise any printable ASCII characters.
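The label rules above can be sketched in plain Python (a hypothetical helper, not the SDK's implementation): labels that fail the format check, or dynamic labels beyond the first 16 distinct values, fall back to the special label __other__.

```python
import re

MAX_DYNAMIC_LABELS = 16
OTHER_LABEL = "__other__"
# Printable ASCII (space through tilde), at most 71 characters.
LABEL_RE = re.compile(r"^[ -~]{1,71}$")

def resolve_dynamic_label(label, seen):
    """Map a dynamic label to the label it is stored under, tracking distinct labels in `seen`."""
    if not LABEL_RE.match(label):
        return OTHER_LABEL          # invalid format
    if label in seen:
        return label                # already tracked
    if len(seen) < MAX_DYNAMIC_LABELS:
        seen.add(label)
        return label                # a new label, still under the limit
    return OTHER_LABEL              # 17th and later distinct labels

seen = set()
first_batch = [resolve_dynamic_label(f"label{i}", seen) for i in range(16)]
overflow = resolve_dynamic_label("label16", seen)
```

Here the first 16 distinct labels are kept as-is, while `label16` (the 17th) resolves to `__other__`; previously seen labels keep resolving to themselves.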

Adding or changing metric types

Glean has a well-defined process for requesting changes to existing metric types or suggesting the implementation of new metric types:

  1. Glean consumers need to file a bug in the Data platforms & tools::Glean Metric Types component, filling in the provided form;
  2. The triage owner of the Bugzilla component prioritizes this within 6 business days and kicks off the decision making process.
  3. Once the decision process is completed, the bug is closed with a comment outlining the decision that was made.

Deprecated metrics

Boolean

Boolean metrics are used for reporting simple flags.

Recording API

set

Sets a boolean metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.Flags

Flags.a11yEnabled.set(System.isAccessibilityEnabled())
import org.mozilla.yourApplication.GleanMetrics.Flags;

Flags.INSTANCE.a11yEnabled().set(System.isAccessibilityEnabled());
Flags.a11yEnabled.set(self.isAccessibilityEnabled)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.flags.a11y_enabled.set(is_accessibility_enabled())
use glean_metrics::flags;

flags::a11y_enabled.set(system.is_accessibility_enabled());
import * as flags from "./path/to/generated/files/flags.js";

flags.a11yEnabled.set(this.isAccessibilityEnabled());

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::flags::a11y_enabled.Set(false);

JavaScript

Glean.flags.a11yEnabled.set(false);

Recorded errors

  • invalid_type: if a non-boolean value is given (JavaScript only).

Testing API

testGetValue

Gets the recorded value for a given boolean metric.
Returns true or false if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Flags

assertTrue(Flags.a11yEnabled.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Flags;

assertTrue(Flags.INSTANCE.a11yEnabled().testGetValue());
XCTAssertTrue(Flags.a11yEnabled.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert True is metrics.flags.a11y_enabled.test_get_value()
use glean_metrics::flags;

assert!(flags::a11y_enabled.test_get_value(None).unwrap());
import * as flags from "./path/to/generated/files/flags.js";

assert(await flags.a11yEnabled.testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_EQ(false, mozilla::glean::flags::a11y_enabled.TestGetValue().value());

JavaScript

Assert.equal(false, Glean.flags.a11yEnabled.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given boolean metric.

import org.mozilla.yourApplication.GleanMetrics.Flags

assertEquals(
    0, Flags.a11yEnabled.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Flags;

assertEquals(
    0, Flags.INSTANCE.a11yEnabled().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
XCTAssertEqual(0, Flags.a11yEnabled.testGetNumRecordedErrors(.invalidValue))
from glean.testing import ErrorType

assert 0 == metrics.flags.a11y_enabled.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;

use glean_metrics::flags;

assert_eq!(
  0,
  flags::a11y_enabled.test_get_num_recorded_errors(
    ErrorType::InvalidValue
  )
);

Metric parameters

Example boolean metric definition:

flags:
  a11y_enabled:
    type: boolean
    description: >
      Records whether a11y is enabled on the device.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

N/A

Data questions

  • Is accessibility enabled?

Reference

Labeled Booleans

Labeled booleans are used to record different related boolean flags.

Recording API

set

Sets one of the labels in a labeled boolean metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.Accessibility

Accessibility.features["screen_reader"].set(isScreenReaderEnabled())
Accessibility.features["high_contrast"].set(isHighContrastEnabled())
import org.mozilla.yourApplication.GleanMetrics.Accessibility;

Accessibility.INSTANCE.features()["screen_reader"].set(isScreenReaderEnabled());
Accessibility.INSTANCE.features()["high_contrast"].set(isHighContrastEnabled());
Accessibility.features["screen_reader"].set(isScreenReaderEnabled())
Accessibility.features["high_contrast"].set(isHighContrastEnabled())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.accessibility.features["screen_reader"].set(is_screen_reader_enabled())
metrics.accessibility.features["high_contrast"].set(is_high_contrast_enabled())
use glean_metrics::accessibility;

accessibility::features.get("screen_reader").set(is_screen_reader_enabled());
accessibility::features.get("high_contrast").set(is_high_contrast_enabled());
import * as accessibility from "./path/to/generated/files/accessibility.js";

accessibility.features["screen_reader"].set(this.isScreenReaderEnabled());
accessibility.features["high_contrast"].set(this.isHighContrastEnabled());

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::accessibility::features.Get("screen_reader"_ns).Set(true);
mozilla::glean::accessibility::features.Get("high_contrast"_ns).Set(false);

JavaScript

Glean.accessibility.features.screen_reader.set(true);
Glean.accessibility.features["high_contrast"].set(false);

Recorded Errors

  • invalid_type: if a non-boolean value is given.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

Limits

  • Labels must conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • The list of labels is limited to:
    • 16 different dynamic labels if no static labels are defined. Additional labels will all record to the special label __other__.
    • 100 labels if specified as static labels in metrics.yaml, see Labels. Unknown labels will be recorded under the special label __other__.

Testing API

testGetValue

Gets the recorded value for a given label in a labeled boolean metric.
Returns true or false if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Accessibility

// Do the booleans have the expected values?
assertEquals(true, Accessibility.features["screen_reader"].testGetValue())
assertEquals(false, Accessibility.features["high_contrast"].testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Accessibility;

// Do the booleans have the expected values?
assertEquals(true, Accessibility.INSTANCE.features()["screen_reader"].testGetValue());
assertEquals(false, Accessibility.INSTANCE.features()["high_contrast"].testGetValue());
// Do the booleans have the expected values?
XCTAssertEqual(true, Accessibility.features["screen_reader"].testGetValue())
XCTAssertEqual(false, Accessibility.features["high_contrast"].testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Do the booleans have the expected values?
assert metrics.accessibility.features["screen_reader"].test_get_value()
assert not metrics.accessibility.features["high_contrast"].test_get_value()
use glean_metrics::accessibility;

// Do the booleans have the expected values?
assert!(accessibility::features.get("screen_reader").test_get_value(None).unwrap());
assert!(!accessibility::features.get("high_contrast").test_get_value(None).unwrap());
import * as accessibility from "./path/to/generated/files/accessibility.js";

assert(await accessibility.features["screen_reader"].testGetValue());
assert(!(await accessibility.features["high_contrast"].testGetValue()));

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_EQ(
    true,
    mozilla::glean::accessibility::features.Get("screen_reader"_ns).TestGetValue().unwrap().ref());
ASSERT_EQ(
    false,
    mozilla::glean::accessibility::features.Get("high_contrast"_ns).TestGetValue().unwrap().ref());

JavaScript

Assert.equal(true, Glean.accessibility.features["screen_reader"].testGetValue());
Assert.equal(false, Glean.accessibility.features.high_contrast.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given labeled boolean metric in total.

import org.mozilla.yourApplication.GleanMetrics.Accessibility

// Did we record any invalid labels?
assertEquals(
    0,
    Accessibility.features.testGetNumRecordedErrors(ErrorType.INVALID_LABEL)
)
import org.mozilla.yourApplication.GleanMetrics.Accessibility;

// Did we record any invalid labels?
assertEquals(
    0,
    Accessibility.INSTANCE.features().testGetNumRecordedErrors(ErrorType.INVALID_LABEL)
);
// Were there any invalid labels?
XCTAssertEqual(0, Accessibility.features.testGetNumRecordedErrors(.invalidLabel))
from glean import load_metrics
from glean.testing import ErrorType
metrics = load_metrics("metrics.yaml")

# Did we record any invalid labels?
assert 0 == metrics.accessibility.features.test_get_num_recorded_errors(
    ErrorType.INVALID_LABEL
)
use glean::ErrorType;
use glean_metrics::accessibility;

// Did we record any invalid labels?
assert_eq!(
  0,
  accessibility::features.test_get_num_recorded_errors(
    ErrorType::InvalidLabel
  )
);
import * as accessibility from "./path/to/generated/files/accessibility.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await accessibility.features.testGetNumRecordedErrors(ErrorType.InvalidLabel)
);

Metric parameters

Example labeled boolean metric definition:

accessibility:
  features:
    type: labeled_boolean
    description: >
      a11y features enabled on the device.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01
    labels:
      - screen_reader
      - high_contrast
      ...

Extra metric parameters

labels

Labeled metrics may have an optional labels parameter, containing a list of known labels. The labels in this list must match the following requirements:

  • Conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • This list itself is limited to 100 labels.
Important

If the labels are specified in the metrics.yaml, any label used at run time that is not listed in that file will be replaced with the special value __other__.

If the labels are not specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

Removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to document removed labels in the description field, so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data.

Data questions

  • Which accessibility features are enabled?

Reference

Counter

Used to count how often something happens, say how often a certain button was pressed. A counter always starts from 0. Each time you record to a counter, its value is incremented. A counter that was never incremented by a positive value is not reported in pings; that is, the value 0 is never sent in a ping.

If you find that you need to control the actual value sent in the ping, you may be measuring something, not just counting something, and a Quantity metric may be a better choice.

Let the Glean metric do the counting

When using a counter metric, it is important to let the Glean metric do the counting. Using your own variable for counting and setting the counter yourself could be problematic because it will be difficult to reset the value at the exact moment that the value is sent in a ping. Instead, just use counter.add to increment the value and let Glean handle resetting the counter.

Recording API

add

Increases the counter by a certain amount. If no amount is passed it defaults to 1.

import org.mozilla.yourApplication.GleanMetrics.Controls

Controls.refreshPressed.add() // Adds 1 to the counter.
Controls.refreshPressed.add(5) // Adds 5 to the counter.
import org.mozilla.yourApplication.GleanMetrics.Controls;

Controls.INSTANCE.refreshPressed().add(); // Adds 1 to the counter.
Controls.INSTANCE.refreshPressed().add(5); // Adds 5 to the counter.
Controls.refreshPressed.add() // Adds 1 to the counter.
Controls.refreshPressed.add(5) // Adds 5 to the counter.
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.controls.refresh_pressed.add()  # Adds 1 to the counter.
metrics.controls.refresh_pressed.add(5) # Adds 5 to the counter.
use glean_metrics::controls;

controls::refresh_pressed.add(1); // Adds 1 to the counter.
controls::refresh_pressed.add(5); // Adds 5 to the counter.
import * as controls from "./path/to/generated/files/controls.js";

controls.refreshPressed.add(); // Adds 1 to the counter.
controls.refreshPressed.add(5); // Adds 5 to the counter.

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::controls::refresh_pressed.Add(1);
mozilla::glean::controls::refresh_pressed.Add(5);

JavaScript

Glean.controls.refreshPressed.add(1);
Glean.controls.refreshPressed.add(5);

Recorded errors

  • invalid_value: If the counter is incremented by a negative value (or, in versions up to and including 54.0.0, 0).
  • invalid_type: If a floating point or non-number value is given.

Limits

  • Only increments;
  • Saturates at the largest value that can be represented as a 32-bit signed integer (2147483647).
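These limits can be sketched with a small, hypothetical counter in plain Python (not the SDK's implementation): non-positive amounts record an error and leave the value untouched, and the value saturates at the 32-bit signed integer maximum.

```python
I32_MAX = 2147483647  # largest value representable as a 32-bit signed integer

class CounterSketch:
    """Illustrative counter semantics (not SDK code)."""
    def __init__(self):
        self._value = 0
        self.errors = 0

    def add(self, amount=1):
        # Non-positive amounts record an invalid_value error and do not
        # change the counter. (Adding 0 was only an error in versions up to
        # 54.0.0; it is treated uniformly here for simplicity.)
        if amount <= 0:
            self.errors += 1
            return
        # Only increments, saturating at I32_MAX.
        self._value = min(self._value + amount, I32_MAX)

    def test_get_value(self):
        # A counter that was never incremented reports no value at all,
        # mirroring "the value 0 is never sent in a ping".
        return self._value if self._value > 0 else None

c = CounterSketch()
c.add()     # adds 1
c.add(5)    # adds 5
```

At this point `c.test_get_value()` returns 6; adding a negative amount leaves it at 6 and bumps the error count, and a huge addition pins the value at `I32_MAX`.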

Testing API

testGetValue

Gets the recorded value for a given counter metric.
Returns the count if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Controls

assertEquals(6, Controls.refreshPressed.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Controls;

assertEquals(6, Controls.INSTANCE.refreshPressed().testGetValue());
XCTAssertEqual(6, Controls.refreshPressed.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert 6 == metrics.controls.refresh_pressed.test_get_value()
use glean_metrics::controls;

assert_eq!(6, controls::refresh_pressed.test_get_value(None).unwrap());
import * as controls from "./path/to/generated/files/controls.js";

assert.strictEqual(6, await controls.refreshPressed.testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_TRUE(mozilla::glean::controls::refresh_pressed.TestGetValue().isOk());
ASSERT_EQ(6, mozilla::glean::controls::refresh_pressed.TestGetValue().unwrap().value());

JavaScript

// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.equal(6, Glean.controls.refreshPressed.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given counter metric.

import org.mozilla.yourApplication.GleanMetrics.Controls

assertEquals(
    0,
    Controls.refreshPressed.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Controls;

assertEquals(
    0,
    Controls.INSTANCE.refreshPressed().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
XCTAssertEqual(0, Controls.refreshPressed.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

from glean.testing import ErrorType

assert 0 == metrics.controls.refresh_pressed.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::controls;

assert_eq!(
  0,
  controls::refresh_pressed.test_get_num_recorded_errors(
    ErrorType::InvalidValue
  )
);
import * as controls from "./path/to/generated/files/controls.js";
import { ErrorType } from "@mozilla/glean/error";
assert.strictEqual(
  0,
  await controls.refreshPressed.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric parameters

Example counter metric definition:

controls:
  refresh_pressed:
    type: counter
    description: >
      Counts how often the refresh button is pressed.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

N/A

Data questions

  • How often was a certain button pressed?

Reference

Labeled Counters

Labeled counters are used to record different related counts that should sum up to a total. Each counter always starts from 0. Each time you record to a labeled counter, its value is incremented. A counter that was never incremented by a positive value is not reported in pings; that is, the value 0 is never sent in a ping.

Recording API

add

Increases one of the labels in a labeled counter metric by a certain amount. If no amount is passed it defaults to 1.

import org.mozilla.yourApplication.GleanMetrics.Stability

Stability.crashCount["uncaught_exception"].add() // Adds 1 to the "uncaught_exception" counter.
Stability.crashCount["native_code_crash"].add(3) // Adds 3 to the "native_code_crash" counter.
import org.mozilla.yourApplication.GleanMetrics.Stability;

Stability.INSTANCE.crashCount()["uncaught_exception"].add(); // Adds 1 to the "uncaught_exception" counter.
Stability.INSTANCE.crashCount()["native_code_crash"].add(3); // Adds 3 to the "native_code_crash" counter.
Stability.crashCount["uncaught_exception"].add() // Adds 1 to the "uncaught_exception" counter.
Stability.crashCount["native_code_crash"].add(3) // Adds 3 to the "native_code_crash" counter.
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Adds 1 to the "uncaught_exception" counter.
metrics.stability.crash_count["uncaught_exception"].add()
# Adds 3 to the "native_code_crash" counter.
metrics.stability.crash_count["native_code_crash"].add(3)
use glean_metrics::stability;

stability::crash_count.get("uncaught_exception").add(1); // Adds 1 to the "uncaught_exception" counter.
stability::crash_count.get("native_code_crash").add(3); // Adds 3 to the "native_code_crash" counter.
import * as stability from "./path/to/generated/files/stability.js";

// Adds 1 to the "uncaught_exception" counter.
stability.crashCount["uncaught_exception"].add();
// Adds 3 to the "native_code_crash" counter.
stability.crashCount["native_code_crash"].add(3);

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::stability::crash_count.Get("uncaught_exception"_ns).Add(1);
mozilla::glean::stability::crash_count.Get("native_code_crash"_ns).Add(3);

JavaScript

Glean.stability.crashCount.uncaught_exception.add(1);
Glean.stability.crashCount["native_code_crash"].add(3);

Recorded Errors

  • invalid_value: if the counter is incremented by a negative value (or, in versions up to and including 54.0.0, 0).
  • invalid_type: if a floating point or non-number value is given.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

Limits

  • Only increments
  • Saturates at the largest value that can be represented as a 32-bit signed integer.
  • Labels must conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • The list of labels is limited to:
    • 16 different dynamic labels if no static labels are defined. Additional labels will all record to the special label __other__.
    • 100 labels if specified as static labels in metrics.yaml, see Labels. Unknown labels will be recorded under the special label __other__.

Testing API

testGetValue

Gets the recorded value for a given label in a labeled counter metric.
Returns the count if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Stability

// Do the counters have the expected values?
assertEquals(1, Stability.crashCount["uncaught_exception"].testGetValue())
assertEquals(3, Stability.crashCount["native_code_crash"].testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Stability;

// Do the counters have the expected values?
assertEquals(1, Stability.INSTANCE.crashCount()["uncaught_exception"].testGetValue());
assertEquals(3, Stability.INSTANCE.crashCount()["native_code_crash"].testGetValue());
// Do the counters have the expected values?
XCTAssertEqual(1, Stability.crashCount["uncaught_exception"].testGetValue())
XCTAssertEqual(3, Stability.crashCount["native_code_crash"].testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Do the counters have the expected values?
assert 1 == metrics.stability.crash_count["uncaught_exception"].test_get_value()
assert 3 == metrics.stability.crash_count["native_code_crash"].test_get_value()
use glean_metrics::stability;

// Do the counters have the expected values?
assert_eq!(1, stability::crash_count.get("uncaught_exception").test_get_value().unwrap());
assert_eq!(3, stability::crash_count.get("native_code_crash").test_get_value().unwrap());
import * as stability from "./path/to/generated/files/stability.js";

// Do the counters have the expected values?
assert.strictEqual(1, await stability.crashCount["uncaught_exception"].testGetValue());
assert.strictEqual(3, await stability.crashCount["native_code_crash"].testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_EQ(
    1,
    mozilla::glean::stability::crash_count.Get("uncaught_exception"_ns).TestGetValue().unwrap().ref());
ASSERT_EQ(
    3,
    mozilla::glean::stability::crash_count.Get("native_code_crash"_ns).TestGetValue().unwrap().ref());

JavaScript

Assert.equal(1, Glean.stability.crashCount["uncaught_exception"].testGetValue());
Assert.equal(3, Glean.stability.crashCount["native_code_crash"].testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given labeled counter metric in total.

import org.mozilla.yourApplication.GleanMetrics.Stability

// Were there any invalid labels?
assertEquals(
    0,
    Stability.crashCount.testGetNumRecordedErrors(ErrorType.INVALID_LABEL)
)
import org.mozilla.yourApplication.GleanMetrics.Stability;

// Were there any invalid labels?
assertEquals(
    0,
    Stability.INSTANCE.crashCount().testGetNumRecordedErrors(ErrorType.INVALID_LABEL)
);
// Were there any invalid labels?
XCTAssertEqual(0, Stability.crashCount.testGetNumRecordedErrors(.invalidLabel))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Were there any invalid labels?
assert 0 == metrics.stability.crash_count.test_get_num_recorded_errors(
    ErrorType.INVALID_LABEL
)
use glean::ErrorType;

use glean_metrics::stability;

// Were there any invalid labels?
assert_eq!(
  0,
  stability::crash_count.test_get_num_recorded_errors(
    ErrorType::InvalidLabel
  )
);
import * as stability from "./path/to/generated/files/stability.js";
import { ErrorType } from "@mozilla/glean/error";

// Were there any invalid labels?
assert.strictEqual(
  0,
  await stability.crashCount.testGetNumRecordedErrors(ErrorType.InvalidLabel)
);

Metric parameters

Example labeled counter metric definition:

stability:
  crash_count:
    type: labeled_counter
    description: >
      Counts the number of crashes that occur in the application.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01
    labels:
      - uncaught_exception
      - native_code_crash
      ...

Extra metric parameters

labels

Labeled metrics may have an optional labels parameter, containing a list of known labels. The labels in this list must match the following requirements:

  • Conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • This list itself is limited to 100 labels.
Important

If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

If the labels are not specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

Removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.

Data questions

  • How many times did different types of crashes occur?

Reference

Strings

String metrics allow recording a Unicode string value with arbitrary content.

This metric type does not support recording JSON blobs. If you're missing a metric type, please get in contact with the Glean team.

Important

Be careful using arbitrary strings and make sure they can't accidentally contain identifying data (like directory paths or user input).

Recording API

set

Set a string metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.SearchDefault

// Record a value into the metric.
SearchDefault.name.set("duck duck go")
// If it changed later, you can record the new value:
SearchDefault.name.set("wikipedia")
import org.mozilla.yourApplication.GleanMetrics.SearchDefault;

// Record a value into the metric.
SearchDefault.INSTANCE.name().set("duck duck go");
// If it changed later, you can record the new value:
SearchDefault.INSTANCE.name().set("wikipedia");
// Record a value into the metric.
SearchDefault.name.set("duck duck go")
// If it changed later, you can record the new value:
SearchDefault.name.set("wikipedia")
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Record a value into the metric.
metrics.search_default.name.set("duck duck go")
# If it changed later, you can record the new value:
metrics.search_default.name.set("wikipedia")
use glean_metrics::search_default;

// Record a value into the metric.
search_default::name.set("duck duck go");
// If it changed later, you can record the new value:
search_default::name.set("wikipedia");
import * as searchDefault from "./path/to/generated/files/searchDefault.js";

// Record a value into the metric.
searchDefault.name.set("duck duck go");
// If it changed later, you can record the new value:
searchDefault.name.set("wikipedia");

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::search_default::name.Set("wikipedia"_ns);

JavaScript

Glean.searchDefault.name.set("wikipedia");

Recorded errors

  • invalid_overflow: if the string is too long. (Prior to Glean 31.5.0, this recorded an invalid_value).
  • invalid_type: if a non-string value is given.

Limits

  • Fixed maximum string length: 255. Longer strings are truncated. This is measured in the number of bytes when the string is encoded in UTF-8.
    • Prior to Glean v60.4.0 the limit was 100 bytes.
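
The byte-based truncation can be illustrated in Python (a sketch of the rule, not Glean's implementation; note the limit counts UTF-8 bytes, not characters):

```python
MAX_STRING_BYTES = 255  # Glean v60.4.0 and later; 100 before that

def truncate_utf8(value: str, max_bytes: int = MAX_STRING_BYTES) -> str:
    """Truncate a string to at most max_bytes of UTF-8."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # errors="ignore" drops a multi-byte character cut in half at the boundary.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

assert truncate_utf8("wikipedia") == "wikipedia"  # short strings pass through
assert truncate_utf8("a" * 300) == "a" * 255      # long strings are cut at 255 bytes
assert len(truncate_utf8("é" * 200).encode("utf-8")) <= 255  # "é" is 2 UTF-8 bytes
```

Because the limit is in bytes, non-ASCII strings may be truncated to fewer than 255 characters.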

Testing API

testGetValue

Get the recorded value for a given string metric.
Returns the string if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

The recorded value may have been truncated. See "Limits" section above.

import org.mozilla.yourApplication.GleanMetrics.SearchDefault

// Does the string metric have the expected value?
assertEquals("wikipedia", SearchDefault.name.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.SearchDefault;

// Does the string metric have the expected value?
assertEquals("wikipedia", SearchDefault.INSTANCE.name().testGetValue());
// Does the string metric have the expected value?
XCTAssertEqual("wikipedia", SearchDefault.name.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Does the string metric have the expected value?
assert "wikipedia" == metrics.search_default.name.test_get_value()
use glean_metrics::search_default;

// Does the string metric have the expected value?
assert_eq!("wikipedia", search_default::name.test_get_value(None).unwrap());
import * as searchDefault from "./path/to/generated/files/searchDefault.js";

assert.strictEqual("wikipedia", await searchDefault.name.testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

// Is it clear of errors?
ASSERT_TRUE(mozilla::glean::search_default::name.TestGetValue().isOk());
// Does it have the expected value?
ASSERT_STREQ(
  "wikipedia",
  mozilla::glean::search_default::name.TestGetValue().unwrap().value().get()
);

JavaScript

// Does it have the expected value?
// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.equal("wikipedia", Glean.searchDefault.name.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given string metric.

import org.mozilla.yourApplication.GleanMetrics.SearchDefault

// Was the string truncated, and an error reported?
assertEquals(
    0,
    SearchDefault.name.testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
)
import org.mozilla.yourApplication.GleanMetrics.SearchDefault;

// Was the string truncated, and an error reported?
assertEquals(
    0,
    SearchDefault.INSTANCE.name().testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
);
// Was the string truncated, and an error reported?
XCTAssertEqual(0, SearchDefault.name.testGetNumRecordedErrors(.invalidOverflow))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Was the string truncated, and an error reported?
assert 0 == metrics.search_default.name.test_get_num_recorded_errors(
    ErrorType.INVALID_OVERFLOW
)
use glean::ErrorType;
use glean_metrics::search_default;

// Was the string truncated, and an error reported?
assert_eq!(
  0,
  search_default::name.test_get_num_recorded_errors(
    ErrorType::InvalidOverflow
  )
);
import * as searchDefault from "./path/to/generated/files/searchDefault.js";
import { ErrorType } from "@mozilla/glean/error";

// Was the string truncated, and an error reported?
assert.strictEqual(
  0,
  await searchDefault.name.testGetNumRecordedErrors(ErrorType.InvalidOverflow)
);

Metric parameters

Example string metric definition:

search.default:
  name:
    type: string
    description: >
      The name of the default search engine.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

N/A

Data questions

  • Record the operating system name with a value of "android".

  • Record the device model with a value of "SAMSUNG-SGH-I997".

Reference

Labeled Strings

Labeled strings record multiple Unicode string values, each under a different label.

Recording API

set

Sets one of the labels in a labeled string metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.Login

Login.errorsByStage["server_auth"].set("Invalid password")
import org.mozilla.yourApplication.GleanMetrics.Login;

Login.INSTANCE.errorsByStage()["server_auth"].set("Invalid password");
Login.errorsByStage["server_auth"].set("Invalid password")
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.login.errors_by_stage["server_auth"].set("Invalid password")
use glean_metrics::login;

login::errors_by_stage.get("server_auth").set("Invalid password");
import * as login from "./path/to/generated/files/login.js";

login.errorsByStage["server_auth"].set("Invalid password");

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::login::errors_by_stage.Get("server_auth"_ns).Set("Invalid password"_ns);

JavaScript

Glean.login.errorsByStage["server_auth"].set("Invalid password");

Recorded Errors

  • invalid_overflow: if the string is too long, see limits below.
  • invalid_type: if a non-string value is given.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

Limits

  • Fixed maximum string length: 100. Longer strings are truncated. This is measured in the number of bytes when the string is encoded in UTF-8.
  • Labels must conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • The list of labels is limited to:
    • 16 different dynamic labels if no static labels are defined. Additional labels will all record to the special label __other__.
    • 100 labels if specified as static labels in metrics.yaml, see Labels. Unknown labels will be recorded under the special label __other__.

Testing API

testGetValue

Gets the recorded value for a given label in a labeled string metric.
Returns the string if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Login

// Does the metric have the expected value?
assertEquals("Invalid password", Login.errorsByStage["server_auth"].testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Login;

// Does the metric have the expected value?
assertEquals("Invalid password", Login.INSTANCE.errorsByStage()["server_auth"].testGetValue());
// Does the metric have the expected value?
XCTAssertEqual("Invalid password", Login.errorsByStage["server_auth"].testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Does the metric have the expected value?
assert "Invalid password" == metrics.login.errors_by_stage["server_auth"].test_get_value()
use glean_metrics::login;

// Does the metric have the expected value?
assert_eq!("Invalid password", login::errors_by_stage.get("server_auth").test_get_value().unwrap());
import * as login from "./path/to/generated/files/login.js";

// Does the metric have the expected value?
assert.strictEqual("Invalid password", await login.errorsByStage["server_auth"].testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_STREQ("Invalid password",
             mozilla::glean::login::errors_by_stage.Get("server_auth"_ns)
                .TestGetValue()
                .unwrap()
                .ref()
                .get());

JavaScript

Assert.equal("Invalid password", Glean.login.errorsByStage["server_auth"].testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given labeled string metric in total.

import org.mozilla.yourApplication.GleanMetrics.Login

// Were there any invalid labels?
assertEquals(
    0,
    Login.errorsByStage.testGetNumRecordedErrors(ErrorType.INVALID_LABEL)
)
import org.mozilla.yourApplication.GleanMetrics.Login;

// Were there any invalid labels?
assertEquals(
    0,
    Login.INSTANCE.errorsByStage().testGetNumRecordedErrors(ErrorType.INVALID_LABEL)
);
// Were there any invalid labels?
XCTAssertEqual(0, Login.errorsByStage.testGetNumRecordedErrors(.invalidLabel))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Were there any invalid labels?
assert 0 == metrics.login.errors_by_stage.test_get_num_recorded_errors(
    ErrorType.INVALID_LABEL
)
use glean::ErrorType;
use glean_metrics::login;

// Were there any invalid labels?
assert_eq!(
  0,
  login::errors_by_stage.test_get_num_recorded_errors(
    ErrorType::InvalidLabel
  )
);
import * as login from "./path/to/generated/files/login.js";
import { ErrorType } from "@mozilla/glean/error";

// Were there any invalid labels?
assert.strictEqual(
  0,
  await login.errorsByStage.testGetNumRecordedErrors(ErrorType.InvalidLabel)
);

Metric parameters

Example labeled string metric definition:

login:
  errors_by_stage:
    type: labeled_string
    description: Records the error type, if any, that occurs in different stages of the login process.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01
    labels:
      - server_auth
      - enter_email
      ...

Extra metric parameters

labels

Labeled metrics may have an optional labels parameter, containing a list of known labels. The labels in this list must match the following requirements:

  • Conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • This list itself is limited to 100 labels.
Important

If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

If the labels are not specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

Removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.

Data questions

  • What kinds of errors occurred at each step in the login process?

Reference

String List

String lists record a list of Unicode string values, such as the names of the enabled search engines.

Important

Be careful using arbitrary strings and make sure they can't accidentally contain identifying data (like directory paths or user input).

Recording API

add

Add a new string to the list.

import org.mozilla.yourApplication.GleanMetrics.Search

Search.engines.add("wikipedia")
Search.engines.add("duck duck go")
import org.mozilla.yourApplication.GleanMetrics.Search;

Search.INSTANCE.engines().add("wikipedia");
Search.INSTANCE.engines().add("duck duck go");
Search.engines.add("wikipedia")
Search.engines.add("duck duck go")
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.search.engines.add("wikipedia")
metrics.search.engines.add("duck duck go")
use glean_metrics::search;

search::engines.add("wikipedia".to_string());
search::engines.add("duck duck go".to_string());
import * as search from "./path/to/generated/files/search.js";

search.engines.add("wikipedia");
search.engines.add("duck duck go");

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::search::engines.Add("wikipedia"_ns);
mozilla::glean::search::engines.Add("duck duck go"_ns);

JavaScript

Glean.search.engines.add("wikipedia");
Glean.search.engines.add("duck duck go");

Recorded errors

  • invalid_overflow: if the string is too long, see limits below.
  • invalid_type: if a non-string value is given.

Limits

  • Fixed maximum string length: 100. Longer strings are truncated. This is measured in the number of bytes when the string is encoded in UTF-8.
  • Fixed maximum list length: 100 items. Additional strings are dropped.
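
Both limits can be modeled together (an illustrative sketch of the rules above, not the SDK code):

```python
MAX_LIST_ITEMS = 100
MAX_STRING_BYTES = 100

def add_to_string_list(items: list, value: str) -> None:
    """Append value to items, applying the string-list limits above."""
    if len(items) >= MAX_LIST_ITEMS:
        return  # additional strings are dropped
    encoded = value.encode("utf-8")
    if len(encoded) > MAX_STRING_BYTES:
        # longer strings are truncated to the byte limit
        value = encoded[:MAX_STRING_BYTES].decode("utf-8", errors="ignore")
    items.append(value)

engines = []
add_to_string_list(engines, "wikipedia")
add_to_string_list(engines, "duck duck go")
assert engines == ["wikipedia", "duck duck go"]
```

Note that the two limits behave differently: over-long strings are kept in truncated form, while items beyond the 100th are discarded entirely.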

set

Set the metric to a specific list of strings. An empty list is accepted.

import org.mozilla.yourApplication.GleanMetrics.Search

Search.engines.set(listOf("wikipedia", "duck duck go"))
import org.mozilla.yourApplication.GleanMetrics.Search;

Search.INSTANCE.engines().set(listOf("wikipedia", "duck duck go"));
Search.engines.set(["wikipedia", "duck duck go"])
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.search.engines.set(["wikipedia", "duck duck go"])
use glean_metrics::search;

search::engines.set(vec!["wikipedia".to_string(), "duck duck go".to_string()]);
import * as search from "./path/to/generated/files/search.js";

search.engines.set(["wikipedia", "duck duck go"]);

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::search::engines.Set({"wikipedia"_ns, "duck duck go"_ns});

JavaScript

Glean.search.engines.set(["wikipedia", "duck duck go"]);

Recorded errors

  • invalid_overflow: if any string in the list is too long, see limits below.
  • invalid_type: if a non-string value is given.

Limits

  • Fixed maximum string length: 100. Longer strings are truncated. This is measured in the number of bytes when the string is encoded in UTF-8.
  • Fixed maximum list length: 100 items. Additional strings are dropped.

Testing API

testGetValue

Gets the recorded value for a given string list metric.
Returns the list of strings if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Search

assertEquals(listOf("Google", "DuckDuckGo"), Search.engines.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Search;

assertEquals(
    Arrays.asList("Google", "DuckDuckGo"),
    Search.INSTANCE.engines().testGetValue()
);
XCTAssertEqual(["Google", "DuckDuckGo"], Search.engines.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert ["Google", "DuckDuckGo"] == metrics.search.engines.test_get_value()
use glean_metrics::search;

assert_eq!(
    vec!["Google".to_string(), "DuckDuckGo".to_string()],
    search::engines.test_get_value(None).unwrap()
);
import * as search from "./path/to/generated/files/search.js";

assert.deepStrictEqual(
    ["Google", "DuckDuckGo"],
    await search.engines.testGetValue()
);

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_TRUE(mozilla::glean::search::engines.TestGetValue().isOk());
nsTArray<nsCString> list = mozilla::glean::search::engines.TestGetValue().unwrap();
ASSERT_TRUE(list.Contains("wikipedia"_ns));
ASSERT_TRUE(list.Contains("duck duck go"_ns));

JavaScript

// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
const engines = Glean.search.engines.testGetValue();
Assert.ok(engines.includes("wikipedia"));
Assert.ok(engines.includes("duck duck go"));

testGetNumRecordedErrors

Gets the number of errors recorded for a given string list metric.

import org.mozilla.yourApplication.GleanMetrics.Search

assertEquals(
    0,
    Search.engines.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Search;

assertEquals(
    0,
    Search.INSTANCE.engines().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
// Were any of the values too long, and thus an error was recorded?
XCTAssertEqual(0, Search.engines.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert 0 == metrics.search.engines.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::search;

assert_eq!(
    0,
    search::engines.test_get_num_recorded_errors(ErrorType::InvalidValue)
);

Metric parameters

Example string list metric definition:

search:
  engines:
    type: string_list
    description: >
      Records the names of the enabled search engines.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

N/A

Data questions

  • Which search engines are enabled?

Reference

Timespan

Timespans are used to make a measurement of how much time is spent in a particular task. Irrespective of the timespan's lifetime, both start and stop must occur within the same application session.

To measure the distribution of multiple timespans, see Timing Distributions. To record absolute times, see Datetimes.

It is not recommended to use timespans in multiple threads, since calling start or stop out of order will be recorded as an invalid_state error.
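
The start/stop/cancel lifecycle, including the invalid_state errors described below, can be sketched as a small state machine (a simplified single-threaded model, not the SDK implementation):

```python
import time

class TimespanSketch:
    """Toy model of a timespan metric: start/stop/cancel plus error counting."""

    def __init__(self):
        self._start_ns = None
        self.value_ns = None
        self.errors = {"invalid_state": 0}

    def start(self):
        if self._start_ns is not None:
            self.errors["invalid_state"] += 1  # already tracking time
            return
        self._start_ns = time.monotonic_ns()  # monotonic, like the SDK's timers

    def stop(self):
        if self._start_ns is None:
            self.errors["invalid_state"] += 1  # stop without a matching start
            return
        self.value_ns = time.monotonic_ns() - self._start_ns
        self._start_ns = None

    def cancel(self):
        self._start_ns = None  # no error if there was no previous start

ts = TimespanSketch()
ts.start()
ts.stop()
assert ts.value_ns is not None and ts.value_ns >= 0
ts.stop()  # stop without start records invalid_state
assert ts.errors["invalid_state"] == 1
```

Calling these methods from multiple threads can interleave start and stop in exactly the ways this model counts as invalid_state, which is why single-threaded use is recommended.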

Recording API

start

Starts tracking time. Uses an internal monotonic timer.

import org.mozilla.yourApplication.GleanMetrics.Auth

fun onShowLogin() {
    Auth.loginTime.start()
    // ...
}
import org.mozilla.yourApplication.GleanMetrics.Auth;

void onShowLogin() {
    Auth.INSTANCE.loginTime().start();
    // ...
}
func onShowLogin() {
    Auth.loginTime.start()
    // ...
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

def on_show_login():
    metrics.auth.login_time.start()
    # ...
use glean_metrics::auth;

fn show_login() {
    auth::login_time.start();
    // ...
}
import * as auth from "./path/to/generated/files/auth.js";

function onShowLogin() {
    auth.loginTime.start();
    // ...
}

C++

#include "mozilla/glean/GleanMetrics.h"
void OnShowLogin() {
  mozilla::glean::auth::login_time.Start();
  // ...
}

JavaScript

function onShowLogin() {
  Glean.auth.loginTime.start();
  // ...
}

Recorded errors

  • invalid_state: If the metric is already tracking time (start has already been called and not canceled).

Limits

  • The maximum resolution of the elapsed duration is limited by the clock used on each platform.
  • This also determines the behavior of a timespan over sleep:
    • On Android, the SystemClock.elapsedRealtimeNanos() function is used, so it is limited by the accuracy and performance of that timer. The time measurement includes time spent in sleep.
    • On iOS, the mach_absolute_time function is used, so it is limited by the accuracy and performance of that timer. The time measurement does not include time spent in sleep.
    • On Python 3.7 and later, time.monotonic_ns() is used. On earlier versions of Python, time.monotonic() is used, which is not guaranteed to have nanosecond resolution.
    • On other platforms time::precise_time_ns is used, which uses a high-resolution performance counter in nanoseconds provided by the underlying platform.

stop

Stops tracking time. The metric value is set to the elapsed time.

import org.mozilla.yourApplication.GleanMetrics.Auth

fun onLogin() {
    Auth.loginTime.stop()
    // ...
}
import org.mozilla.yourApplication.GleanMetrics.Auth;

void onLogin() {
    Auth.INSTANCE.loginTime().stop();
    // ...
}
func onLogin() {
    Auth.loginTime.stop()
    // ...
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

def on_login():
    metrics.auth.login_time.stop()
    # ...
use glean_metrics::auth;

fn login() {
    auth::login_time.stop();
    // ...
}
import * as auth from "./path/to/generated/files/auth.js";

function onLogin() {
    auth.loginTime.stop();
    // ...
}

C++

#include "mozilla/glean/GleanMetrics.h"
void OnLogin() {
  mozilla::glean::auth::login_time.Stop();
  // ...
}

JavaScript

function onLogin() {
  Glean.auth.loginTime.stop();
  // ...
}

Recorded errors

  • invalid_state: Calling stop without calling start first, e.g. if the start happened on a previous application run.

cancel

Cancels a previous start. No error is recorded if there was no previous start.

import org.mozilla.yourApplication.GleanMetrics.Auth

fun onLoginCancel() {
    Auth.loginTime.cancel()
    // ...
}
import org.mozilla.yourApplication.GleanMetrics.Auth;

void onLoginCancel() {
    Auth.INSTANCE.loginTime().cancel();
    // ...
}
func onLoginCancel() {
    Auth.loginTime.cancel()
    // ...
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

def on_login_cancel():
    metrics.auth.login_time.cancel()
    # ...
use glean_metrics::auth;

fn login_cancel() {
    auth::login_time.cancel();
    // ...
}
import * as auth from "./path/to/generated/files/auth.js";

function onLoginCancel() {
    auth.login_time.cancel();
    // ...
}

C++

#include "mozilla/glean/GleanMetrics.h"
void OnLoginCancel() {
  mozilla::glean::auth::login_time.Cancel();
  // ...
}

JavaScript

function onLoginCancel() {
  Glean.auth.loginTime.cancel();
  // ...
}

measure

Some languages support convenient auto timing of blocks of code. measure is treated as a start and stop pair for the purposes of error recording. Exceptions (if present in the language) are treated as a cancel.

import org.mozilla.yourApplication.GleanMetrics.Auth

Auth.loginTime.measure {
    // Process login flow
}
import org.mozilla.yourApplication.GleanMetrics.Auth;

Auth.INSTANCE.loginTime().measure(() -> {
    // Process login flow
    return null;
});
Auth.loginTime.measure {
    // Process login flow
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

with metrics.auth.login_time.measure():
    # ... Do the login ...

setRawNanos

Explicitly sets the timespan's value.

Regardless of the time unit chosen for the metric, this API expects the raw value to be in nanoseconds.

Only use this if you have to

This API should only be used if the code being instrumented cannot make use of start, stop, and cancel or measure. Time is hard, and this API can't help you with it.

import org.mozilla.yourApplication.GleanMetrics.Auth

fun afterLogin(loginElapsedNs: Long) {
    Auth.loginTime.setRawNanos(loginElapsedNs)
    // ...
}
import org.mozilla.yourApplication.GleanMetrics.Auth;

void afterLogin(long loginElapsedNs) {
    Auth.INSTANCE.loginTime().setRawNanos(loginElapsedNs);
    // ...
}
func afterLogin(_ loginElapsedNs: UInt64) {
    Auth.loginTime.setRawNanos(loginElapsedNs)
    // ...
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

def after_login(login_elapsed_ns):
    metrics.auth.login_time.set_raw_nanos(login_elapsed_ns)
    # ...
use std::time::Duration;
use glean_metrics::auth;

fn after_login(login_elapsed: Duration) {
    auth::login_time.set_raw(login_elapsed);
    // ...
}
import * as auth from "./path/to/generated/files/auth.js";

function onAfterLogin(loginElapsedNs) {
    auth.loginTime.setRawNanos(loginElapsedNs);
    // ...
}

These are different

Firefox Desktop's setRaw uses the units specified in the metric definition. For example, if the timespan's time_unit is millisecond, then the duration parameter is a count of milliseconds.

C++

#include "mozilla/glean/GleanMetrics.h"

void AfterLogin(uint32_t aDuration) {
  mozilla::glean::auth::login_time.SetRaw(aDuration);
  // ...
}

JavaScript

function afterLogin(aDuration) {
  Glean.auth.loginTime.setRaw(aDuration);
  // ...
}
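
Since the SDK API always expects nanoseconds, a duration measured in another unit must be converted first. A small illustrative helper (the name to_nanos is not part of any Glean API):

```python
def to_nanos(value: int, unit: str) -> int:
    """Convert a duration in the given unit to whole nanoseconds."""
    factors = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}
    return value * factors[unit]

# e.g. a login that took 42 milliseconds, measured externally:
login_elapsed_ns = to_nanos(42, "ms")
assert login_elapsed_ns == 42_000_000
```

The metric's own time_unit only affects how the value is reported, not what this API accepts.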

Recorded errors

  • invalid_value: if attempting to record a negative elapsed duration.
  • invalid_state: if this method is called after calling start or this method is called multiple times.
  • invalid_type: if a negative, floating point or non-number value is given.

Testing API

testGetValue

Get the currently-stored value.
Returns the timespan as an integer in the metric's time unit if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Auth

assertTrue(Auth.loginTime.testGetValue() > 0)
import org.mozilla.yourApplication.GleanMetrics.Auth;

assertTrue(Auth.INSTANCE.loginTime().testGetValue() > 0);
XCTAssert(Auth.loginTime.testGetValue() > 0)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert metrics.auth.login_time.test_get_value() > 0
use glean_metrics::auth;

assert!(auth::login_time.test_get_value(None).unwrap() > 0);
import * as auth from "./path/to/generated/files/auth.js";

assert(await auth.loginTime.testGetValue() > 0);

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_TRUE(mozilla::glean::auth::login_time.TestGetValue().isOk());
ASSERT_GE(mozilla::glean::auth::login_time.TestGetValue().unwrap().value(), 0);

JavaScript

// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.ok(Glean.auth.loginTime.testGetValue() > 0);

testGetNumRecordedErrors

Gets the number of errors recorded during operations on this metric.

import org.mozilla.yourApplication.GleanMetrics.Auth

assertEquals(
    0,
    Auth.loginTime.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Auth;

assertEquals(
    0,
    Auth.INSTANCE.loginTime().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
XCTAssertEqual(0, Auth.loginTime.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert 0 == metrics.auth.login_time.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean_metrics::auth;

assert_eq!(1, auth::login_time.test_get_num_recorded_errors(ErrorType::InvalidValue));
import * as auth from "./path/to/generated/files/auth.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  1,
  await auth.loginTime.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric parameters

Example timespan metric definition:

auth:
  login_time:
    type: timespan
    description: >
      Measures the time spent logging in.
    time_unit: millisecond
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-01-01
    data_sensitivity:
      - interaction

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

time_unit

Timespans have an optional time_unit parameter to specify the smallest unit of resolution that the timespan will record. The allowed values for time_unit are:

  • nanosecond
  • microsecond
  • millisecond (default)
  • second
  • minute
  • hour
  • day

Consider the resolution that is required by your metric, and use the largest possible value that will provide useful information so as to not leak too much fine-grained information from the client.

Values are truncated

It is important to note that the value sent in the ping is truncated down to the nearest unit. Therefore, a measurement of 500 nanoseconds will be truncated to 0 microseconds.
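As a plain-Python sketch of this rule (an illustration, not the Glean API), truncation is simply floor division by the unit's conversion factor:

```python
# Truncating a duration to a coarser unit drops the sub-unit remainder.
# With time_unit: microsecond, a 500 ns measurement is stored as 0 µs.
NS_PER_MICROSECOND = 1_000

def truncate_ns_to_us(duration_ns: int) -> int:
    """Floor a nanosecond duration down to whole microseconds."""
    return duration_ns // NS_PER_MICROSECOND

assert truncate_ns_to_us(500) == 0     # truncated to 0 microseconds
assert truncate_ns_to_us(1_999) == 1   # anything below 2 µs floors to 1 µs
```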

Data questions

  • How long did it take for the user to log in?

Reference

Timing Distribution

Timing distributions are used to accumulate and store time measurements, for analyzing distributions of the timing data.

To measure the distribution of single timespans, see Timespans. To record absolute times, see Datetimes.

Timing distributions are recorded in a histogram where the buckets have an exponential distribution, specifically with 8 buckets for every power of 2. That is, the function from a value \( x \) to a bucket index is:

\[ \lfloor 8 \log_2(x) \rfloor \]

This makes them suitable for measuring timings on a number of time scales without any configuration.

Note Check out how this bucketing algorithm would behave on the Simulator.
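The bucketing function above can be sketched in plain Python (an illustration of the formula, not SDK code):

```python
import math

def timing_bucket_index(x: int) -> int:
    """Bucket index for a sample x: 8 buckets per power of 2."""
    return math.floor(8 * math.log2(x))

# Consecutive powers of two land exactly 8 buckets apart:
assert timing_bucket_index(1024) == 80   # 8 * log2(2^10)
assert timing_bucket_index(2048) == 88   # 8 * log2(2^11)
```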

Timings always span the full length between start and stopAndAccumulate. If the Glean upload is disabled when calling start, the timer is still started. If the Glean upload is disabled at the time stopAndAccumulate is called, nothing is recorded.

Multiple concurrent timings in different threads may be measured at the same time.

Timings are always stored and sent in the payload as nanoseconds. However, the time_unit parameter controls the minimum and maximum values that will be recorded:

  • nanosecond: 1ns <= x <= 10 minutes
  • microsecond: 1μs <= x <= ~6.94 days
  • millisecond: 1ms <= x <= ~19 years

Overflowing this range is considered an error and is reported through the error reporting mechanism. Underflowing this range is not an error and the value is silently truncated to the minimum value.
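Note that the three maxima above all correspond to the same count of units, roughly 6 × 10^11 (an inference from the documented bounds, not a constant taken from the SDK):

```python
# Hypothetical shared per-unit cap, inferred from the documented maxima above.
MAX_SAMPLE_UNITS = 600_000_000_000

minutes = MAX_SAMPLE_UNITS // 10**9 // 60             # as nanoseconds -> minutes
days = MAX_SAMPLE_UNITS / 10**6 / 86_400              # as microseconds -> days
years = MAX_SAMPLE_UNITS / 10**3 / (365.25 * 86_400)  # as milliseconds -> years

assert minutes == 10          # 10 minutes
assert abs(days - 6.94) < 0.01   # ~6.94 days
assert abs(years - 19.0) < 0.1   # ~19 years
```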

Additionally, when a metric comes from GeckoView (the geckoview_datapoint parameter is present), the time_unit parameter specifies the unit that the samples are in when passed to Glean. Glean will convert all of the incoming samples to nanoseconds internally.

Recording API

start

Start tracking time for the provided metric. Multiple timers can run simultaneously. Returns a unique TimerId for the new timer.

import mozilla.components.service.glean.GleanTimerId
import org.mozilla.yourApplication.GleanMetrics.Pages

var timerId : GleanTimerId

fun onPageStart(e: Event) {
    timerId = Pages.pageLoad.start()
}
import mozilla.components.service.glean.GleanTimerId;
import org.mozilla.yourApplication.GleanMetrics.Pages;

GleanTimerId timerId;

void onPageStart(Event e) {
    timerId = Pages.INSTANCE.pageLoad().start();
}
import Glean

var timerId : GleanTimerId

func onPageStart() {
    timerId = Pages.pageLoad.start()
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

class PageHandler:
    def __init__(self):
        self.timer_id = None

    def on_page_start(self, event):
        self.timer_id = metrics.pages.page_load.start()
use glean_metrics::pages;

fn on_page_start(&mut self) {
    self.timer_id = pages::page_load.start();
}
import * as pages from "./path/to/generated/files/pages.js";

function onPageStart() {
    // store this ID, you will need it later to stop or cancel your timer
    const timerId = pages.pageLoad.start();
}

C++

#include "mozilla/glean/GleanMetrics.h"

auto timerId = mozilla::glean::pages::page_load.Start();

JavaScript

let timerId = Glean.pages.pageLoad.start();

stopAndAccumulate

Stops tracking time for the provided metric and associated timer id.

Adds a count to the corresponding bucket in the timing distribution. This will record an error if start was not called.

import org.mozilla.yourApplication.GleanMetrics.Pages

fun onPageLoaded(e: Event) {
    Pages.pageLoad.stopAndAccumulate(timerId)
}
import org.mozilla.yourApplication.GleanMetrics.Pages;

void onPageLoaded(Event e) {
    Pages.INSTANCE.pageLoad().stopAndAccumulate(timerId);
}
import Glean

func onPageLoaded() {
    Pages.pageLoad.stopAndAccumulate(timerId)
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

class PageHandler:
    def on_page_loaded(self, event):
        metrics.pages.page_load.stop_and_accumulate(self.timer_id)
use glean_metrics::pages;

fn on_page_loaded(&self) {
    pages::page_load.stop_and_accumulate(self.timer_id);
}
import * as pages from "./path/to/generated/files/pages.js";

function onPageLoaded() {
    pages.pageLoad.stopAndAccumulate(timerId);
}

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::pages::page_load.StopAndAccumulate(std::move(timerId));

JavaScript

Glean.pages.pageLoad.stopAndAccumulate(timerId);

accumulateSamples

Accumulates the provided signed samples in the metric. This is required so that the platform-specific code can provide us with 64 bit signed integers if no u64 comparable type is available. This will take care of filtering and reporting errors for any provided negative sample.

Please note that this assumes that the provided samples are already in the "unit" declared by the instance of the metric type (e.g. if the instance this method was called on is using TimeUnit::Second, then samples are assumed to be in that unit).

import org.mozilla.yourApplication.GleanMetrics.Pages

fun onPageLoaded(e: Event) {
    Pages.pageLoad.accumulateSamples(samples)
}
import org.mozilla.yourApplication.GleanMetrics.Pages;

void onPageLoaded(Event e) {
    Pages.INSTANCE.pageLoad().accumulateSamples(samples);
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

class PageHandler:
    def on_page_loaded(self, event):
        metrics.pages.page_load.accumulate_samples(samples)
use glean_metrics::pages;

fn on_page_loaded() {
    pages::page_load.accumulate_samples(samples);
}
import * as pages from "./path/to/generated/files/pages.js";

function onPageLoaded() {
    pages.pageLoad.accumulateSamples(samples);
}

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::pages::page_load.AccumulateRawSamples(sample);

JavaScript

This operation is not currently supported in JavaScript.

accumulateSingleSample

Accumulates a single signed sample and appends it to the metric. Prefer this for the common use case of having a single value to avoid having to pass a collection over a foreign language interface.

A signed value is required so that the platform-specific code can provide us with a 64 bit signed integer if no u64 comparable type is available. This will take care of filtering and reporting errors for a negative sample.

Please note that this assumes that the provided sample is already in the "unit" declared by the instance of the metric type (e.g. if the instance this method was called on is using TimeUnit::Second, then sample is assumed to be in that unit).

import org.mozilla.yourApplication.GleanMetrics.Pages

fun onPageLoaded(e: Event) {
    Pages.pageLoad.accumulateSingleSample(sample)
}
import org.mozilla.yourApplication.GleanMetrics.Pages;

void onPageLoaded(Event e) {
    Pages.INSTANCE.pageLoad().accumulateSingleSample(sample);
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

class PageHandler:
    def on_page_loaded(self, event):
        metrics.pages.page_load.accumulate_single_sample(sample)
use glean_metrics::pages;

fn on_page_loaded() {
    pages::page_load.accumulate_single_sample(sample);
}
import * as pages from "./path/to/generated/files/pages.js";

function onPageLoaded() {
    pages.pageLoad.accumulateSingleSample(sample);
}

This API is not currently exposed in Firefox Desktop, see Bug 1884183.

Limits

  • Samples are limited to the maximum value for the given time unit.
  • Only non-negative values may be recorded (>= 0).
  • Negative values are discarded and an ErrorType::InvalidValue is generated for each instance.
  • Samples that are longer than maximum sample time for the given unit generate an ErrorType::InvalidOverflow error for each instance.

Recorded errors

  • invalid_value: if recording a negative sample.
  • invalid_overflow: if recording a sample longer than the maximum for the given time_unit.
  • invalid_state: if a non-existing, cancelled, or already-stopped timer is stopped again.

measure

For convenience one can measure the time of a function or block of code.

import org.mozilla.yourApplication.GleanMetrics.Pages

Pages.pageLoad.measure {
    // Load a page
}
import Glean

Pages.pageLoad.measure {
    // Load a page
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

with metrics.pages.page_load.measure():
    # Load a page

cancel

Aborts a previous start call. No error is recorded if start was not called.

import org.mozilla.yourApplication.GleanMetrics.Pages

fun onPageError(e: Event) {
    Pages.pageLoad.cancel(timerId)
}
import org.mozilla.yourApplication.GleanMetrics.Pages;

void onPageError(Event e) {
    Pages.INSTANCE.pageLoad().cancel(timerId);
}
import Glean

func onPageError() {
    Pages.pageLoad.cancel(timerId)
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

class PageHandler:
    def on_page_error(self, event):
        metrics.pages.page_load.cancel(self.timer_id)
use glean_metrics::pages;

fn on_page_error(&self) {
    pages::page_load.cancel(self.timer_id);
}
import * as pages from "./path/to/generated/files/pages.js";

function onPageError() {
    pages.pageLoad.cancel(timerId);
}

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::pages::page_load.Cancel(std::move(timerId));

JavaScript

Glean.pages.pageLoad.cancel(timerId);

Testing API

testGetValue

Gets the recorded value for a given timing distribution metric.
Returns a struct with counts per buckets and total sum if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Pages

// Get snapshot.
val snapshot = Pages.pageLoad.testGetValue()

// Does the sum have the expected value?
assertEquals(11, snapshot.sum)

// Usually you don't know the exact timing values,
// but how many should have been recorded.
assertEquals(2L, snapshot.count)
import org.mozilla.yourApplication.GleanMetrics.Pages;

// Get snapshot.
DistributionData snapshot = Pages.INSTANCE.pageLoad().testGetValue();

// Does the sum have the expected value?
assertEquals(11, snapshot.sum);

// Usually you don't know the exact timing values,
// but how many should have been recorded.
assertEquals(2L, snapshot.getCount());
// Get snapshot.
let snapshot = Pages.pageLoad.testGetValue()

// Does the sum have the expected value?
XCTAssertEqual(11, snapshot.sum)

// Usually you don't know the exact timing values,
// but how many should have been recorded.
XCTAssertEqual(2, snapshot.count)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Get snapshot.
snapshot = metrics.pages.page_load.test_get_value()

# Does the sum have the expected value?
assert 11 == snapshot.sum

# Usually you don't know the exact timing values,
# but how many should have been recorded.
assert 2 == snapshot.count
use glean::ErrorType;
use glean_metrics::pages;

// Get snapshot
let snapshot = pages::page_load.test_get_value(None).unwrap();

// Does the sum have the expected value?
assert_eq!(11, snapshot.sum);

// Usually you don't know the exact timing values,
// but how many should have been recorded.
assert_eq!(2, snapshot.count);
import * as pages from "./path/to/generated/files/pages.js";

const snapshot = await pages.pageLoad.testGetValue();

// Usually you don't know the exact timing values,
// but how many should have been recorded.
assert.equal(1, snapshot.count);

C++

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
const auto data = mozilla::glean::pages::page_load.TestGetValue().value().unwrap();
ASSERT_TRUE(data.sum > 0);

JavaScript

Assert.ok(Glean.pages.pageLoad.testGetValue().sum > 0);

testGetNumRecordedErrors

Gets the number of errors recorded for a given timing distribution metric.

import org.mozilla.yourApplication.GleanMetrics.Pages

// Assert that no errors were recorded.
assertEquals(
    0,
    Pages.pageLoad.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Pages;

// Assert that no errors were recorded.
assertEquals(
    0,
    Pages.INSTANCE.pageLoad().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
// Assert that no errors were recorded.
XCTAssertEqual(0, Pages.pageLoad.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Assert that no errors were recorded.
assert 0 == metrics.pages.page_load.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::pages;

assert_eq!(
    0,
    pages::page_load.test_get_num_recorded_errors(ErrorType::InvalidValue)
);
import * as pages from "./path/to/generated/files/pages.js";
import { ErrorType } from "@mozilla/glean/<platform>";

assert.equal(1, await pages.pageLoad.testGetNumRecordedErrors(ErrorType.InvalidValue));

Metric parameters

Example timing distribution metric definition:

pages:
  page_load:
    type: timing_distribution
    time_unit: millisecond
    description: >
      Counts how long each page takes to load
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

Extra metric parameters

time_unit

Timing distributions have an optional time_unit parameter to specify the smallest unit of resolution that the timespan will record. The allowed values for time_unit are:

  • nanosecond (default)
  • microsecond
  • millisecond
  • second
  • minute
  • hour
  • day

Limits

  • Timings are recorded in nanoseconds.

    • On Android, the SystemClock.elapsedRealtimeNanos() function is used, so it is limited by the accuracy and performance of that timer. The time measurement includes time spent in sleep.

    • On iOS, the mach_absolute_time function is used, so it is limited by the accuracy and performance of that timer. The time measurement does not include time spent in sleep.

    • On Python 3.7 and later, time.monotonic_ns() is used. On earlier versions of Python, time.monotonic() is used, which is not guaranteed to have nanosecond resolution.

    • In Rust, time::precise_time_ns() is used.

  • The maximum timing value that will be recorded depends on the time_unit parameter:

    • nanosecond: 1ns <= x <= 10 minutes
    • microsecond: 1μs <= x <= ~6.94 days
    • millisecond: 1ms <= x <= ~19 years

    Longer times will be truncated to the maximum value and an error will be recorded.

Data questions

  • How long does it take a page to load?

Reference

Simulator

Please insert your custom data below as a JSON array.

Data options

Properties

Note The data provided is assumed to be in the configured time unit. The data recorded, on the other hand, is always in nanoseconds. This means that, if the configured time unit is not nanoseconds, the data will be transformed before being recorded. You can observe this by using the select field above to change the time unit and watching the mean of the recorded data change.

Labeled Timing Distributions

Labeled timing distributions are used to record different related distributions of time measurements.

See the Timing Distribution reference for details on bucket distribution, specifics about how Glean records time, and a histogram simulator.

Recording API

start

Start tracking time for the provided metric for the given label. Multiple timers for multiple labels can run simultaneously.

Returns a unique TimerId for the new timer.

use glean_metrics::devtools;

self.start = devtools::cold_toolbox_open_delay
    .get(toolbox_id)
    .start();

Recorded Errors

  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

stopAndAccumulate

Stops tracking time for the provided timer from the metric for the given label.

Adds a count to the corresponding bucket in the label's timing distribution.

Do not use the provided TimerId after passing it to this method.

use glean_metrics::devtools;

devtools::cold_toolbox_open_delay
    .get(toolbox_id)
    .stop_and_accumulate(self.start);

Recorded errors

  • invalid_state: If a non-existing, cancelled, or already-stopped timer is stopped again.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

cancel

Aborts a previous start call, consuming the supplied timer id.

use glean_metrics::devtools;

devtools::cold_toolbox_open_delay
    .get(toolbox_id)
    .cancel(self.start);

Recorded errors

  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

accumulateSamples

Accumulates the provided signed samples in the metric for a given label. Where possible, have Glean do the timing for you rather than using methods like this one. If you are doing the timing yourself, ensure your time source is monotonic and behaves consistently across platforms.

This is required so that the platform-specific code can provide us with 64 bit signed integers if no u64 comparable type is available. This will take care of filtering and reporting errors for any provided negative sample.

Please note that this assumes that the provided samples are already in the "unit" declared by the instance of the metric type (e.g. if the instance this method was called on is using TimeUnit::Second, then samples are assumed to be in that unit).

use glean_metrics::devtools;

devtools::cold_toolbox_open_delay
    .get(toolbox_id)
    .accumulate_samples(samples);

Recorded errors

  • invalid_value: If recording a negative sample.
  • invalid_overflow: If recording a sample longer than the maximum for the given time_unit.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

accumulateSingleSample

Accumulates a single signed sample and appends it to the metric for the provided label. Prefer start() and stopAndAccumulate() where possible, but if you must record time externally please prefer this method for individual samples (avoids having to allocate and pass collections).

A signed value is required so that the platform-specific code can provide us with a 64 bit signed integer if no u64 comparable type is available. This will take care of filtering and reporting errors for a negative sample.

Please note that this assumes that the provided sample is already in the "unit" declared by the instance of the metric type (e.g. if the instance this method was called on is using TimeUnit::Second, then sample is assumed to be in that unit).

use glean_metrics::devtools;

devtools::cold_toolbox_open_delay
    .get(toolbox_id)
    .accumulate_single_sample(sample);

Recorded errors

  • invalid_value: If recording a negative sample.
  • invalid_overflow: If recording a sample longer than the maximum for the given time_unit.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

Testing API

testGetValue

Gets the recorded value for a given label in a labeled timing distribution metric. Returns a struct with counts per buckets and total sum if data is stored. Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

use glean_metrics::devtools;

// Get the current snapshot of stored values.
let snapshot = devtools::cold_toolbox_open_delay.get("webconsole").test_get_value(None).unwrap();

// Usually you don't know the exact timing values,
// but you do know how many samples there are:
assert_eq!(2, snapshot.count);
// ...and at least how long they took in total:
assert!(snapshot.sum >= 400);

testGetNumRecordedErrors

Gets the number of errors recorded for a given labeled timing distribution metric in total.

use glean::ErrorType;
use glean_metrics::devtools;

// Assert there were no negative values instrumented.
assert_eq!(
    0,
    devtools::cold_toolbox_open_delay.test_get_num_recorded_errors(
        ErrorType::InvalidValue,
        None
    )
);

Metric parameters

Example labeled timing distribution metric definition:

devtools:
  cold_toolbox_open_delay:
    type: labeled_timing_distribution
    description: >
      Time taken to open the first DevTools toolbox, per tool being opened.
    time_unit: millisecond
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 175
    labels:
      - inspector
      - webconsole
      - jsdebugger
      ...

Extra metric parameters

time_unit

Labeled timing distributions have an optional time_unit parameter to specify the smallest unit of resolution that it will record. The allowed values for time_unit are:

  • nanosecond (default)
  • microsecond
  • millisecond
  • second
  • minute
  • hour
  • day

labels

Labeled metrics may have an optional labels parameter, containing a list of known labels. The labels in this list must match the following requirements:

  • Conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • This list itself is limited to 100 labels.
Important

If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

If the labels are not specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

Removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.
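The dynamic-label fallback described above can be sketched as a behavioral model (an illustration under the stated limits, not SDK code; `resolve_label` is a hypothetical helper):

```python
MAX_DYNAMIC_LABELS = 16  # limit when no static labels are declared

def resolve_label(seen: set, label: str) -> str:
    """Return the label under which data would be recorded."""
    if label in seen or len(seen) < MAX_DYNAMIC_LABELS:
        seen.add(label)
        return label
    return "__other__"

seen = set()
for i in range(20):
    resolve_label(seen, f"label_{i}")

assert resolve_label(seen, "label_0") == "label_0"      # known labels keep recording
assert resolve_label(seen, "brand_new") == "__other__"  # 17th+ distinct label
```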

Data questions

  • What is the distribution of initial load times of devtools toolboxes, per tool?
  • What is the distribution of how long it takes to load extensions' content scripts, by addon id?

Limits

  • Timings are recorded in nanoseconds.

  • The maximum timing value that will be recorded depends on the time_unit parameter:

    • nanosecond: 1ns <= x <= 10 minutes
    • microsecond: 1μs <= x <= ~6.94 days
    • millisecond: 1ms <= x <= ~19 years

    Longer times will be truncated to the maximum value and an error will be recorded.

  • Labels must conform to the label format.

  • Each label must have a maximum of 71 characters.

  • Each label must only contain printable ASCII characters.

  • The list of labels is limited to:

    • 16 different dynamic labels if no static labels are defined. Additional labels will all record to the special label __other__.
    • 100 labels if specified as static labels in metrics.yaml, see Labels. Unknown labels will be recorded under the special label __other__.

Reference

Memory Distribution

Memory distributions are used to accumulate and store memory sizes.

Memory distributions are recorded in a histogram where the buckets have an exponential distribution, specifically with 16 buckets for every power of 2. That is, the function from a value \( x \) to a bucket index is:

\[ \lfloor 16 \log_2(x) \rfloor \]

This makes them suitable for measuring memory sizes on a number of different scales without any configuration.

Note Check out how this bucketing algorithm would behave on the Simulator.
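As with timing distributions, the bucketing formula can be sketched in plain Python (16 buckets per power of 2 here; an illustration, not SDK code):

```python
import math

def memory_bucket_index(x: int) -> int:
    """Bucket index for a memory size x: 16 buckets per power of 2."""
    return math.floor(16 * math.log2(x))

# Consecutive powers of two land exactly 16 buckets apart:
assert memory_bucket_index(1024) == 160        # 16 * log2(2^10)
assert memory_bucket_index(1_048_576) == 320   # 16 * log2(2^20)
```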

Recording API

accumulate

Accumulates the provided sample in the metric.

import org.mozilla.yourApplication.GleanMetrics.Memory

fun allocateMemory(nbytes: Int) {
    // ...
    Memory.heapAllocated.accumulate(nbytes / 1024)
}
import org.mozilla.yourApplication.GleanMetrics.Memory;

void allocateMemory(int nbytes) {
    // ...
    Memory.INSTANCE.heapAllocated().accumulate(nbytes / 1024);
}
import Glean

func allocateMemory(nbytes: UInt64) {
    // ...
    Memory.heapAllocated.accumulate(nbytes / 1024)
}
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

def allocate_memory(nbytes):
    # ...
    metrics.memory.heap_allocated.accumulate(nbytes // 1024)
use glean_metrics::memory;

fn allocate_memory(bytes: u64) {
    // ...
    memory::heap_allocated.accumulate(bytes / 1024);
}
import * as memory from "./path/to/generated/files/memory.js";

function allocateMemory() {
    // ...
    memory.heapAllocated.accumulate(nbytes / 1024);
}

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::memory::heap_allocated.Accumulate(bytes / 1024);

JavaScript

Glean.memory.heapAllocated.accumulate(bytes / 1024);

Recorded errors

  • invalid_value: if recording a negative memory size or a size larger than 1 terabyte.

Testing API

testGetValue

Gets the recorded value for a given memory distribution metric.
Returns a struct with counts per buckets and total sum if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Memory

// Get snapshot
val snapshot = Memory.heapAllocated.testGetValue()

// Does the sum have the expected value?
assertEquals(11, snapshot.sum)

// Usually you don't know the exact memory values,
// but how many should have been recorded.
assertEquals(2L, snapshot.count)
import org.mozilla.yourApplication.GleanMetrics.Memory;

// Get snapshot
DistributionData snapshot = Memory.INSTANCE.heapAllocated().testGetValue();

// Does the sum have the expected value?
assertEquals(11, snapshot.sum);

// Usually you don't know the exact memory values,
// but how many should have been recorded.
assertEquals(2L, snapshot.getCount());
// Get snapshot
let snapshot = try! Memory.heapAllocated.testGetValue()

// Does the sum have the expected value?
XCTAssertEqual(11, snapshot.sum)

// Usually you don't know the exact memory values,
// but how many should have been recorded.
XCTAssertEqual(2, snapshot.count)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Get snapshot.
snapshot = metrics.memory.heap_allocated.test_get_value()

# Does the sum have the expected value?
assert 11 == snapshot.sum

# Usually you don't know the exact memory values,
# but how many should have been recorded.
assert 2 == snapshot.count
use glean::ErrorType;
use glean_metrics::memory;

// Get snapshot
let snapshot = memory::heap_allocated.test_get_value(None).unwrap();

// Does the sum have the expected value?
assert_eq!(11, snapshot.sum);

// Usually you don't know the exact timing values,
// but how many should have been recorded.
assert_eq!(2, snapshot.count);
import * as memory from "./path/to/generated/files/memory.js";

// Get snapshot
const snapshot = await memory.heapAllocated.testGetValue();

// Does the sum have the expected value?
assert.equal(11, snapshot.sum);

// Usually you don't know the exact memory values,
// but know how many should have been recorded.
assert.equal(2, snapshot.count);

C++

#include "mozilla/glean/GleanMetrics.h"

// Does it have the expected value?
const auto data = mozilla::glean::memory::heap_allocated.TestGetValue().value().unwrap();
ASSERT_EQ(11 * 1024, data.sum);

JavaScript

const data = Glean.memory.heapAllocated.testGetValue();
Assert.equal(11 * 1024, data.sum);

testGetNumRecordedErrors

Gets the number of errors recorded for a given memory distribution metric.

import org.mozilla.yourApplication.GleanMetrics.Memory

// Did this record a negative value?
assertEquals(
    0,
    Memory.heapAllocated.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Memory;

// Assert that no errors were recorded.
assertEquals(
    0,
    Memory.INSTANCE.heapAllocated().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
// Did this record a negative value?
XCTAssertEqual(0, Memory.heapAllocated.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Did this record a negative value?
assert 0 == metrics.memory.heap_allocated.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::memory;

assert_eq!(
    0,
    memory::heap_allocated.test_get_num_recorded_errors(ErrorType::InvalidValue)
);
import * as memory from "./path/to/generated/files/memory.js";
import { ErrorType } from "@mozilla/glean/<platform>";

// Did this record a negative value?
assert.equal(
    0,
    await memory.heapAllocated.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric parameters

Example memory distribution metric definition:

memory:
  heap_allocated:
    type: memory_distribution
    memory_unit: kilobyte
    description: >
      The heap memory allocated
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

Extra metric parameters

memory_unit

Memory distributions have an optional memory_unit parameter, which specifies the unit the incoming memory size values are recorded in. The allowed values for memory_unit are:

  • byte (default)
  • kilobyte (= 2^10 = 1,024 bytes)
  • megabyte (= 2^20 = 1,048,576 bytes)
  • gigabyte (= 2^30 = 1,073,741,824 bytes)

Limits

  • The maximum memory size that can be recorded is 1 Terabyte (2^40 bytes). Larger sizes will be truncated to 1 Terabyte.
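
To make the unit conversion and the 1 TB ceiling concrete, here is a minimal illustrative sketch (not Glean's actual implementation; the function name is invented) of how a sample in the configured memory_unit maps to the bytes that get recorded:

```python
# Illustrative sketch only -- not the actual Glean implementation.
MEMORY_UNIT_FACTORS = {
    "byte": 1,
    "kilobyte": 2**10,
    "megabyte": 2**20,
    "gigabyte": 2**30,
}
MAX_BYTES = 2**40  # the 1 Terabyte ceiling

def to_recorded_bytes(sample, memory_unit="byte"):
    """Convert a sample in the configured unit to bytes, truncating at 1 TB."""
    size = sample * MEMORY_UNIT_FACTORS[memory_unit]
    return min(size, MAX_BYTES)
```

For example, with `memory_unit: kilobyte` a sample of 11 is stored as 11,264 bytes, which is why the testing examples above assert `11 * 1024`.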

Data questions

  • What is the distribution of the size of heap allocations?

Reference

Simulator


Note: the data provided to the simulator is assumed to be in the configured memory unit. The recorded data, on the other hand, is always in bytes. This means that, if the configured memory unit is not byte, the data will be transformed before being recorded; changing the memory unit therefore changes the mean of the recorded data accordingly.

Labeled Memory Distributions

Labeled memory distributions are used to record different related distributions of memory sizes.

See the Memory Distribution reference for details on bucket distribution, and a histogram simulator.

Recording API

accumulate

Accumulate the provided sample in the metric.

use glean_metrics::network;

network::http_upload_bandwidth
    .get(http_version)
    .accumulate(self.request_size * 8 / 1048576 / send_time.as_secs());

Recorded Errors

  • invalid_value: if recording a memory size that is negative or over 1 TB.
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

Testing API

testGetValue

Gets the recorded value for a given label in a labeled memory distribution metric. Returns a struct with counts per bucket and the total sum if data is stored. Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

use glean_metrics::network;

// Assert the sum of all HTTP2 samples is 42MBps.
assert_eq!(42, network::http_upload_bandwidth.get("h2").test_get_value(None).unwrap().sum);

// Assert there's only the one sample
assert_eq!(1, network::http_upload_bandwidth.get("h2").test_get_value(None).unwrap().count);

// Buckets are indexed by their lower bound.
assert_eq!(1, network::http_upload_bandwidth.get("h2").test_get_value(None).unwrap().values[41]);

testGetNumRecordedErrors

Gets the number of errors recorded for a given labeled memory distribution metric in total.

use glean::ErrorType;
use glean_metrics::network;

// Assert there were no negative or overlarge values instrumented.
assert_eq!(
    0,
    network::http_upload_bandwidth.test_get_num_recorded_errors(
        ErrorType::InvalidValue
    )
);

Metric parameters

Example labeled memory distribution metric definition:

network:
  http_upload_bandwidth:
    type: labeled_memory_distribution
    description: >
      The upload bandwidth for requests larger than 10MB,
      per HTTP protocol version.
    memory_unit: megabyte
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 175
    labels:
      - h3
      - h2
      - http/1.0
      - http/1.1

Extra metric parameters

memory_unit

Memory distributions have an optional memory_unit parameter, which specifies the unit the incoming memory size values are recorded in.

The allowed values for memory_unit are:

  • byte (default)
  • kilobyte (= 2^10 = 1,024 bytes)
  • megabyte (= 2^20 = 1,048,576 bytes)
  • gigabyte (= 2^30 = 1,073,741,824 bytes)

labels

Labeled metrics may have an optional labels parameter, containing a list of known labels. The labels in this list must match the following requirements:

  • Conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • This list itself is limited to 100 labels.
Important

If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

If the labels are not specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.

Removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.
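
The static vs. dynamic label rules above can be illustrated with a small sketch (invented names, not Glean internals; it covers only the length, printable-ASCII, and label-count checks described here):

```python
# Illustrative sketch of labeled-metric label resolution -- not Glean code.
MAX_LABEL_LENGTH = 71
MAX_DYNAMIC_LABELS = 16

def resolve_label(label, static_labels=None, seen_dynamic=None):
    """Return the label under which data would actually be recorded."""
    if static_labels is not None:
        # Static labels: anything not listed in metrics.yaml goes to __other__.
        return label if label in static_labels else "__other__"
    # Dynamic labels: enforce length, printable ASCII, and the 16-label cap.
    if len(label) > MAX_LABEL_LENGTH:
        return "__other__"
    if not all(0x20 <= ord(c) <= 0x7E for c in label):
        return "__other__"
    seen = seen_dynamic if seen_dynamic is not None else set()
    if label not in seen and len(seen) >= MAX_DYNAMIC_LABELS:
        return "__other__"
    seen.add(label)
    return label
```

With the example metric above, `resolve_label("spdy", static_labels={"h3", "h2", "http/1.0", "http/1.1"})` would record to `__other__`.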

Data questions

  • What is the distribution of upload bandwidth rates per HTTP protocol version?
  • What is the distribution of bytes received per DOM network API?

Limits

  • The maximum memory size that can be recorded is 1 Terabyte (2^40 bytes). Larger sizes will be truncated to 1 Terabyte.
  • Labels must conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • The list of labels is limited to:
    • 16 different dynamic labels if no static labels are defined. Additional labels will all record to the special label __other__.
    • 100 labels if specified as static labels in metrics.yaml, see Labels. Unknown labels will be recorded under the special label __other__.

Reference

UUID

UUIDs metrics are used to record values that uniquely identify some entity, such as a client id.

Recording API

generateAndSet

Sets a UUID metric to a randomly generated UUID value (UUID v4).

import org.mozilla.yourApplication.GleanMetrics.User

// Generate a new UUID and record it
User.clientId.generateAndSet()
import org.mozilla.yourApplication.GleanMetrics.User;

// Generate a new UUID and record it
User.INSTANCE.clientId().generateAndSet();
// Generate a new UUID and record it
User.clientId.generateAndSet()
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Generate a new UUID and record it
metrics.user.client_id.generate_and_set()
use uuid::Uuid;
use glean_metrics::user;

// Generate a new UUID and record it
user::client_id.generate_and_set();
import * as user from "./path/to/generated/files/user.js";

user.clientId.generateAndSet();

C++

#include "mozilla/glean/GleanMetrics.h"

// Generate a new UUID and record it.
mozilla::glean::user::client_id.GenerateAndSet();

JavaScript

// Generate a new UUID and record it.
Glean.user.clientId.generateAndSet();

set

Sets a UUID metric to a specific value. Accepts any UUID version.

import org.mozilla.yourApplication.GleanMetrics.User

// Set a UUID explicitly
User.clientId.set(UUID.randomUUID())
import org.mozilla.yourApplication.GleanMetrics.User;

// Set a UUID explicitly
User.INSTANCE.clientId().set(UUID.randomUUID());
User.clientId.set(UUID())  // Set a UUID explicitly
import uuid

from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Set a UUID explicitly
metrics.user.client_id.set(uuid.uuid4())
use uuid::Uuid;
use glean_metrics::user;

// Set a UUID explicitly
user::client_id.set(Uuid::new_v4());
import * as user from "./path/to/generated/files/user.js";

const uuid = "decafdec-afde-cafd-ecaf-decafdecafde";
user.clientId.set(uuid);

C++

#include "mozilla/glean/GleanMetrics.h"

// Set a specific value.
nsCString kUuid("decafdec-afde-cafd-ecaf-decafdecafde");
mozilla::glean::user::client_id.Set(kUuid);

JavaScript

// Set a specific value.
const uuid = "decafdec-afde-cafd-ecaf-decafdecafde";
Glean.user.clientId.set(uuid);

Recorded errors

  • invalid_value: if the value is set to a string that is not a UUID (only applies for dynamically-typed languages, such as Python).
  • invalid_type: if a non-string or non-UUID value is given.
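
In dynamically-typed SDKs the given string is checked before being recorded. A hedged sketch of such a check (the helper name is invented) using Python's standard uuid module:

```python
# Illustrative sketch -- not the Glean SDK's actual validation code.
import uuid

def parse_uuid_or_none(value):
    """Return the parsed UUID, or None if the value is not a valid UUID
    (the case in which Glean would record an invalid_value error)."""
    try:
        return uuid.UUID(value)
    except (ValueError, TypeError, AttributeError):
        return None
```

Any UUID version is accepted, matching the behavior of set described above.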

Testing API

testGetValue

Gets the recorded value for a given UUID metric.
Returns a UUID if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.User

// Was it the expected value?
assertEquals(uuid, User.clientId.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.User;

// Was it the expected value?
assertEquals(uuid, User.INSTANCE.clientId().testGetValue());
// Was it the expected value?
XCTAssertEqual(uuid, try User.clientId.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Was it the expected value?
assert uuid == metrics.user.client_id.test_get_value()
use uuid::Uuid;
use glean_metrics::user;

let u = Uuid::new_v4();
// Does it have the expected value?
assert_eq!(u, user::client_id.test_get_value(None).unwrap());
import * as user from "./path/to/generated/files/user.js";

const uuid = "decafdec-afde-cafd-ecaf-decafdecafde";
assert.strictEqual(uuid, await user.clientId.testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

// Is it clear of errors?
ASSERT_TRUE(mozilla::glean::user::client_id.TestGetValue().isOk());
// Does it have the expected value?
ASSERT_STREQ(kUuid.get(), mozilla::glean::user::client_id.TestGetValue().unwrap().value().get());

JavaScript

const uuid = "decafdec-afde-cafd-ecaf-decafdecafde";
// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.equal(Glean.user.clientId.testGetValue(), uuid);

testGetNumRecordedErrors

Gets the number of errors recorded for a given UUID metric.

import org.mozilla.yourApplication.GleanMetrics.User

assertEquals(
    0,
    User.clientId.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.User;

assertEquals(
    0,
    User.INSTANCE.clientId().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
XCTAssertEqual(0, User.clientId.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

from glean.testing import ErrorType

assert 0 == metrics.user.client_id.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::user;

assert_eq!(
  0,
  user::client_id.test_get_num_recorded_errors(
    ErrorType::InvalidValue
  )
);
import * as user from "./path/to/generated/files/user.js";
import { ErrorType } from "@mozilla/glean/error";

// Was the string truncated, and an error reported?
assert.strictEqual(
  0,
  await user.clientId.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric Parameters

You first need to add an entry for it to the metrics.yaml file:

user:
  client_id:
    type: uuid
    description: >
      A unique identifier for the client's profile
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

N/A

Data questions

  • What is the unique identifier for this client?

Reference

URL

URL metrics allow recording URL-like[1] strings.

This metric type does not support recording data URLs - please get in contact with the Glean team if you're missing a type.

Important

Be careful using arbitrary URLs and make sure they can't accidentally contain identifying data (like directory paths, user input or credentials).

[1]: The Glean SDKs specifically do not validate whether a URL is fully spec-compliant; the only validations performed are the ones listed in the "Recorded errors" section of this page.

Recording API

set

Set a URL metric to a specific string value.

import org.mozilla.yourApplication.GleanMetrics.Search

Search.template.set("https://mysearchengine.com/")
import org.mozilla.yourApplication.GleanMetrics.Search;

Search.INSTANCE.template().set("https://mysearchengine.com/");
Search.template.set("https://mysearchengine.com")

// Swift's URL type is supported
let url = URL(string: "https://mysearchengine.com")!
Search.template.set(url: url)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.search.template.set("https://mysearchengine.com/")
use glean_metrics::search;

search::template.set("https://mysearchengine.com/");
import * as search from "./path/to/generated/files/search.js";

search.template.set("https://mysearchengine.com/");

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::search::template.Set("https://mysearchengine.com/"_ns);

JavaScript

Glean.search.template.set("https://mysearchengine.com/");

setUrl

Set a URL metric to a specific URL value.

import * as search from "./path/to/generated/files/search.js";

search.template.setUrl(new URL("https://mysearchengine.com/"));

Recorded errors

  • invalid_value:
    • If the URL passed does not start with a scheme followed by a : character.
    • If the URL passed uses the data: protocol.
  • invalid_overflow: if the URL passed is longer than 8192 characters (before encoding).
  • invalid_type: if a non-string value is given.

Limits

  • Fixed maximum URL length: 8192. Longer URLs are truncated and recorded along with an invalid_overflow error.
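
The error and truncation rules above can be sketched as follows (an illustrative approximation, not Glean's implementation; function and variable names are invented):

```python
# Illustrative sketch of URL metric validation -- not Glean code.
import re

MAX_URL_LENGTH = 8192
# A scheme: a letter, then letters/digits/"+"/"-"/".", followed by ":".
SCHEME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9+.-]*:")

def validate_and_truncate(url):
    """Return (recorded_value, error) mirroring the rules above."""
    if not SCHEME_RE.match(url):
        return None, "invalid_value"   # no scheme followed by ":"
    if url.startswith("data:"):
        return None, "invalid_value"   # data URLs are not supported
    if len(url) > MAX_URL_LENGTH:
        # Truncated, but still recorded, alongside an error.
        return url[:MAX_URL_LENGTH], "invalid_overflow"
    return url, None
```

Note that an overlong URL is still recorded (truncated), whereas an invalid_value discards the value entirely.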

Testing API

testGetValue

Gets the recorded value for a given URL metric as an (unencoded) string.
Returns a URL if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Search

assertEquals("https://mysearchengine.com/", Search.template.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Search

assertEquals("https://mysearchengine.com/", Search.INSTANCE.template().testGetValue());
XCTAssertEqual("https://mysearchengine.com/", try Search.template.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert "https://mysearchengine.com/" == metrics.search.template.test_get_value()
use glean_metrics::search;

assert_eq!("https://mysearchengine.com/", search::template.test_get_value(None).unwrap());
import * as search from "./path/to/generated/files/search.js";

assert.strictEqual("https://mysearchengine.com/", await search.template.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given URL metric.

import org.mozilla.yourApplication.GleanMetrics.Search

assertEquals(0, Search.template.testGetNumRecordedErrors(ErrorType.INVALID_VALUE))
assertEquals(0, Search.template.testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW))
import org.mozilla.yourApplication.GleanMetrics.Search;

assertEquals(
    0,
    Search.INSTANCE.template().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);

assertEquals(
    0,
    Search.INSTANCE.template().testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
);
XCTAssertEqual(0, Search.template.testGetNumRecordedErrors(.invalidValue))
XCTAssertEqual(0, Search.template.testGetNumRecordedErrors(.invalidOverflow))
from glean import load_metrics
from glean.testing import ErrorType
metrics = load_metrics("metrics.yaml")

assert 0 == metrics.search.template.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)

assert 0 == metrics.search.template.test_get_num_recorded_errors(
    ErrorType.INVALID_OVERFLOW
)
use glean::ErrorType;
use glean_metrics::search;

assert_eq!(
  0,
  search::template.test_get_num_recorded_errors(
    ErrorType::InvalidValue
  )
);

assert_eq!(
  0,
  search::template.test_get_num_recorded_errors(
    ErrorType::InvalidOverflow
  )
);
import * as search from "./path/to/generated/files/search.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await search.template.testGetNumRecordedErrors(ErrorType.InvalidValue)
);
assert.strictEqual(
  0,
  await search.template.testGetNumRecordedErrors(ErrorType.InvalidOverflow)
);

Metric parameters

Example URL metric definition:

search:
  template:
    type: url
    description: >
      The base URL used to build the search query for the search engine.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01
    data_sensitivity:
      - web_activity

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML format reference page.

Note on data_sensitivity of URL metrics

URL metrics may only be in data sensitivity categories 3 or 4, namely "Stored Content & Communications" or "Highly sensitive data".

Extra metric parameters

N/A

Data questions

  • What is the base URL used to build the search query for the search engine?

Datetime

Datetimes are used to record an absolute date and time, for example the date and time that the application was first run.

The device's offset from UTC is recorded and sent with the Datetime value in the ping.

To record a single elapsed time, see Timespan. To measure the distribution of multiple timespans, see Timing Distributions.

Recording API

set

Sets a datetime metric to a specific date value. Defaults to now.

import org.mozilla.yourApplication.GleanMetrics.Install

Install.firstRun.set() // Records "now"
Install.firstRun.set(Date(2019, 3, 25)) // Records a custom datetime
import org.mozilla.yourApplication.GleanMetrics.Install;

Install.INSTANCE.firstRun().set(); // Records "now"
Install.INSTANCE.firstRun().set(new Date(2019, 3, 25)); // Records a custom datetime
Install.firstRun.set() // Records "now"
let dateComponents = DateComponents(
                        calendar: Calendar.current,
                        year: 2004, month: 12, day: 9, hour: 8, minute: 3, second: 29
                     )
Install.firstRun.set(dateComponents.date!) // Records a custom datetime
import datetime

from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.install.first_run.set() # Records "now"
metrics.install.first_run.set(datetime.datetime(2019, 3, 25)) # Records a custom datetime
use glean_metrics::install;

use chrono::{FixedOffset, TimeZone};

install::first_run.set(None); // Records "now"
let custom_date = FixedOffset::east(0).ymd(2019, 3, 25).and_hms(0, 0, 0);
install::first_run.set(Some(custom_date)); // Records a custom datetime
import * as install from "./path/to/generated/files/install.js";

install.firstRun.set(); // Records "now"
install.firstRun.set(new Date("March 25, 2019 00:00:00")); // Records a custom datetime

C++

#include "mozilla/glean/GleanMetrics.h"

PRExplodedTime date = {0, 35, 10, 12, 6, 10, 2020, 0, 0, {5 * 60 * 60, 0}};
mozilla::glean::install::first_run.Set(&date);

JavaScript

const value = new Date("2020-06-11T12:00:00");
Glean.install.firstRun.set(value.getTime() * 1000);

Recorded errors

Testing API

testGetValue

Get the recorded value for a given datetime metric as a language-specific Datetime object.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Install

assertEquals(Install.firstRun.testGetValue(), Date(2019, 3, 25))
import org.mozilla.yourApplication.GleanMetrics.Install;

assertEquals(Install.INSTANCE.firstRun().testGetValue(), Date(2019, 3, 25));
let expectedDate = DateComponents(
                      calendar: Calendar.current,
                      year: 2004, month: 12, day: 9, hour: 8, minute: 3, second: 29
                   )
XCTAssertEqual(expectedDate.date!, try Install.firstRun.testGetValue())
import datetime

from glean import load_metrics
metrics = load_metrics("metrics.yaml")

value = datetime.datetime(1993, 2, 23, 5, 43, tzinfo=datetime.timezone.utc)
assert value == metrics.install.first_run.test_get_value()
use glean_metrics::install;

use chrono::{FixedOffset, TimeZone};

let expected_date = FixedOffset::east(0).ymd(2019, 3, 25).and_hms(0, 0, 0);
assert_eq!(expected_date, install::first_run.test_get_value(None).unwrap());
import * as install from "./path/to/generated/files/install.js";

const expectedDate = new Date("March 25, 2019 00:00:00");
assert.deepStrictEqual(expectedDate, await install.firstRun.testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

PRExplodedTime date{0, 35, 10, 12, 6, 10, 2020, 0, 0, {5 * 60 * 60, 0}};
ASSERT_TRUE(mozilla::glean::install::first_run.TestGetValue().isOk());
ASSERT_EQ(
    0,
    std::memcmp(
        &date,
        mozilla::glean::install::first_run.TestGetValue().unwrap().ptr(),
        sizeof(date)));

JavaScript

const value = new Date("2020-06-11T12:00:00");
// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.equal(Glean.install.firstRun.testGetValue().getTime(), value.getTime());

testGetValueAsString

Get the recorded value for a given datetime metric as an ISO Date String.
Returns an ISO 8601 date string if data is stored.
Returns a language-specific empty/null value if no data is stored.

The returned string will be truncated to the metric's time unit and will include the timezone offset from UTC, e.g. 2019-03-25-05:00 (in this example, time_unit is day).

import org.mozilla.yourApplication.GleanMetrics.Install

assertEquals("2019-03-25-05:00", Install.firstRun.testGetValueAsString())
import org.mozilla.yourApplication.GleanMetrics.Install;

assertEquals("2019-03-25-05:00", Install.INSTANCE.firstRun().testGetValueAsString());
XCTAssertEqual("2019-03-25-05:00", try Install.firstRun.testGetValueAsString())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert "2019-03-25-05:00" == metrics.install.first_run.test_get_value_as_str()
import * as install from "./path/to/generated/files/install.js";

assert.strictEqual("2019-03-25-05:00", await install.firstRun.testGetValueAsString());

testGetNumRecordedErrors

Get number of errors recorded for a given datetime metric.

import mozilla.telemetry.glean.testing.ErrorType
import org.mozilla.yourApplication.GleanMetrics.Install

assertEquals(0, Install.firstRun.testGetNumRecordedErrors(ErrorType.INVALID_VALUE))
import mozilla.telemetry.glean.testing.ErrorType
import org.mozilla.yourApplication.GleanMetrics.Install;

assertEquals(
    0,
    Install.INSTANCE.firstRun().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
XCTAssertEqual(0, Install.firstRun.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
from glean.testing import ErrorType
metrics = load_metrics("metrics.yaml")

assert 0 == metrics.install.first_run.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::install;

assert_eq!(
    0,
    install::first_run.test_get_num_recorded_errors(
        ErrorType::InvalidValue
    )
);
import * as install from "./path/to/generated/files/install.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await install.firstRun.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric parameters

Example datetime metric definition:

install:
  first_run:
    type: datetime
    time_unit: day
    description: >
      Records the date when the application was first run
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

time_unit

Datetimes have an optional time_unit parameter to specify the smallest unit of resolution that the metric will record. The allowed values for time_unit are:

  • nanosecond
  • microsecond
  • millisecond (default)
  • second
  • minute
  • hour
  • day

Carefully consider the required resolution for recording your metric, and choose the coarsest resolution possible.
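
To illustrate how truncation interacts with the offset shown in testGetValueAsString (e.g. 2019-03-25-05:00 for time_unit: day), here is a rough sketch with invented helper names, not the SDK's formatting code:

```python
# Illustrative sketch of time_unit truncation -- not Glean code.
from datetime import datetime, timedelta, timezone

def _offset_str(dt):
    """Format the UTC offset as e.g. '-05:00'."""
    total = int(dt.utcoffset().total_seconds())
    sign = "+" if total >= 0 else "-"
    total = abs(total)
    return f"{sign}{total // 3600:02d}:{total % 3600 // 60:02d}"

def truncate_to_unit(dt, time_unit):
    """Render a datetime truncated to `time_unit`, keeping the offset."""
    formats = {
        "day": "%Y-%m-%d",
        "hour": "%Y-%m-%dT%H",
        "minute": "%Y-%m-%dT%H:%M",
        "second": "%Y-%m-%dT%H:%M:%S",
    }
    return dt.strftime(formats[time_unit]) + _offset_str(dt)
```

A coarser time_unit drops the finer-grained fields entirely, which is why choosing the coarsest acceptable resolution reduces the data's sensitivity.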

Data questions

  • When did the user first run the application?

Reference

Events

Events allow recording of individual occurrences of user actions, e.g. every time a view was opened and from where.

Each event contains the following data:

  • A timestamp, in milliseconds. The first event in any ping always has a value of 0, and subsequent event timestamps are relative to it.
    • If sending events in custom pings, see note on event timestamp calculation throughout restarts.
  • The name of the event.
  • A set of key-value pairs, where the keys are predefined in the extra_keys metric parameter, and the values are strings.
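
The relative-timestamp rule can be sketched briefly (invented function name, not Glean internals): given the absolute recording times in milliseconds, the ping reports each event's offset from the first one.

```python
# Illustrative sketch of relative event timestamps -- not Glean code.
def relative_timestamps(absolute_ms):
    """Rebase event timestamps so the first event in the ping is 0."""
    if not absolute_ms:
        return []
    first = absolute_ms[0]
    return [t - first for t in absolute_ms]
```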

Immediate submission or batching?

In the Glean JavaScript SDK (Glean.js), since version 2.0.2, events are submitted immediately by default. In all the other SDKs, events are batched and sent together by default in the events ping.
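
The batching behavior can be pictured with a small sketch (class and parameter names invented, not SDK internals): recordings accumulate until the maxEvents threshold is reached, at which point the batch is submitted as an "events" ping. Immediate submission is simply the degenerate case maxEvents=1.

```python
# Illustrative sketch of event batching -- not Glean code.
class EventBuffer:
    """Buffer recorded events; submit a batch once max_events accumulate."""

    def __init__(self, max_events, submit):
        self.max_events = max_events
        self.submit = submit  # callback standing in for ping submission
        self.pending = []

    def record(self, event):
        self.pending.append(event)
        if len(self.pending) >= self.max_events:
            batch, self.pending = self.pending, []
            self.submit(batch)
```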

Recording API

record(object)

Record a new event, with optional typed extra values. See Extra metrics parameters.

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Extra added.

import org.mozilla.yourApplication.GleanMetrics.Views

Views.loginOpened.record(Views.loginOpenedExtra(sourceOfLogin = "toolbar"))

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Extra added.

Views.loginOpened.record(LoginOpenedExtra(sourceOfLogin: "toolbar"))

Note that a class has been generated for handling the extra_keys: it has the same name as the event metric, with Extra added.

from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.views.login_opened.record(metrics.views.LoginOpenedExtra(source_of_login="toolbar"))

Note that an enum has been generated for handling the extra_keys: it has the same name as the event metric, with Keys added.

use glean_metrics::views::{self, LoginOpenedExtra};

let extra = LoginOpenedExtra { source_of_login: Some("toolbar".to_string()) };
views::login_opened.record(extra);
import * as views from "./path/to/generated/files/views.js";

views.loginOpened.record({ sourceOfLogin: "toolbar" });

C++

#include "mozilla/glean/GleanMetrics.h"

using mozilla::glean::views::LoginOpenedExtra;
LoginOpenedExtra extra = { .source_of_login = Some("value"_ns) };
mozilla::glean::views::login_opened.Record(std::move(extra));

JavaScript

const extra = { source_of_login: "toolbar" }; // Extra keys are *NOT* converted to camelCase
Glean.views.loginOpened.record(extra);

Recorded errors

  • invalid_overflow: if any of the values in the extras object are greater than 500 bytes in length. (Prior to Glean 31.5.0, this recorded an invalid_value).
  • invalid_value: if there is an attempt to record to an extra key which is not allowed i.e. an extra key that has not been listed in the YAML registry file.
  • invalid_type: if the extra value given is not the expected type.

Testing API

testGetValue

Get the list of recorded events. Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

Note: as of v2.0.2, Glean.js sets maxEvents=1 by default. If you call testGetValue() for a recorded event with maxEvents=1, the snapshot will not include your event. For your testing instance, set maxEvents to a value greater than 1 to test recording events with testGetValue().

import org.mozilla.yourApplication.GleanMetrics.Views

val snapshot = Views.loginOpened.testGetValue()!!
assertEquals(2, snapshot.size)
val first = snapshot[0]
assertEquals("login_opened", first.name)
assertEquals("toolbar", first.extra?.getValue("source_of_login"))
import org.mozilla.yourApplication.GleanMetrics.Views

assertEquals(2, Views.INSTANCE.loginOpened().testGetValue().size());
let snapshot = try! Views.loginOpened.testGetValue()
XCTAssertEqual(2, snapshot.count)
let first = snapshot[0]
XCTAssertEqual("login_opened", first.name)
XCTAssertEqual("toolbar", first.extra?["source_of_login"])
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

snapshot = metrics.views.login_opened.test_get_value()
assert 2 == len(snapshot)
first = snapshot[0]
assert "login_opened" == first.name
assert "toolbar" == first.extra["source_of_login"]
use glean_metrics::views;

let snapshot = views::login_opened.test_get_value(None).unwrap();
assert_eq!(2, snapshot.len());
let first = &snapshot[0];
assert_eq!("login_opened", first.name);

let extra = first.extra.as_ref().unwrap();
assert_eq!(Some(&"toolbar".to_string()), extra.get("source_of_login"));
import * as views from "./path/to/generated/files/views.js";

const snapshot = await views.loginOpened.testGetValue();
assert.strictEqual(2, snapshot.length);
const first = snapshot[0];
assert.strictEqual("login_opened", first.name);
assert.strictEqual("toolbar", first.extra.source_of_login);

C++

#include "mozilla/glean/GleanMetrics.h"

auto optEvents = mozilla::glean::views::login_opened.TestGetValue();
auto events = optEvents.extract();
ASSERT_EQ(2UL, events.Length());
ASSERT_STREQ("login_opened", events[0].mName.get());

// Note that the list of extra key/value pairs can be in any order.
ASSERT_EQ(1UL, events[0].mExtra.Length());
auto extra = events[0].mExtra[0];

auto key = std::get<0>(extra);
auto value = std::get<1>(extra);

ASSERT_STREQ("source_of_login", key.get());
ASSERT_STREQ("toolbar", value.get());

JavaScript

var events = Glean.views.loginOpened.testGetValue();
Assert.equal(2, events.length);
Assert.equal("login_opened", events[0].name);

Assert.equal("toolbar", events[0].extra.source_of_login);

testGetNumRecordedErrors

Get the number of errors recorded for a given event metric.

import mozilla.telemetry.glean.testing.ErrorType
import org.mozilla.yourApplication.GleanMetrics.Views

assertEquals(
    0,
    Views.loginOpened.testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
)
import mozilla.telemetry.glean.testing.ErrorType
import org.mozilla.yourApplication.GleanMetrics.Views

assertEquals(
    0,
    Views.INSTANCE.loginOpened().testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
)
XCTAssertEqual(0, Views.loginOpened.testGetNumRecordedErrors(.invalidOverflow))
from glean import load_metrics
from glean.testing import ErrorType
metrics = load_metrics("metrics.yaml")

assert 0 == metrics.views.login_opened.test_get_num_recorded_errors(
    ErrorType.INVALID_OVERFLOW
)
use glean::ErrorType;
use glean_metrics::views;

assert_eq!(
    0,
    views::login_opened.test_get_num_recorded_errors(
        ErrorType::InvalidOverflow
    )
);
import * as views from "./path/to/generated/files/views.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await views.loginOpened.testGetNumRecordedErrors(ErrorType.InvalidOverflow)
);

Metric parameters

Example event metric definition:

views:
  login_opened:
    type: event
    description: |
      Recorded when the login view is opened.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01
    extra_keys:
      source_of_login:
        description: The source from which the login view was opened, e.g. "toolbar".
        type: string

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Events require lifetime: ping.

Recorded events are always sent in their respective pings and then cleared. They cannot be persisted longer. The glean_parser will reject any other lifetime.

Extra metric parameters

extra_keys

The acceptable keys on the "extra" object sent with events. A maximum of 50 extra keys is allowed.

Each extra key contains additional metadata:

  • description: Required. A description of the key.
  • type: The type of value this extra key can hold. One of string, boolean, quantity. Defaults to string. Note: if not specified, only the legacy API on record is available.

Data questions

  • When and from where was the login view opened?

Limits

  • In Glean.js the default value for maxEvents is 1. In all other SDKs it is 500.
  • Once the maxEvents threshold is reached on the client an "events" ping is immediately sent.
  • The extra_keys allows for a maximum of 50 keys.
  • The keys in the extra_keys list must be in dotted snake case, with a maximum length of 40 bytes, when encoded as UTF-8.
  • The values in the extras object have a maximum length of 500 bytes when serialized and encoded as UTF-8. Longer values are truncated, and an invalid_overflow error is recorded.
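
As an illustration of the last limit, here is a sketch of UTF-8 byte-limit truncation (illustrative only, not the SDK's implementation; the helper name is hypothetical):

```python
# Illustrative sketch of the 500-byte extra-value limit; not the SDK's code.
MAX_EXTRA_VALUE_BYTES = 500

def truncate_extra_value(value: str, limit: int = MAX_EXTRA_VALUE_BYTES) -> tuple[str, bool]:
    """Truncate a string to `limit` bytes of UTF-8 without splitting a code point.

    Returns the (possibly truncated) value and whether an
    invalid_overflow error would be recorded.
    """
    encoded = value.encode("utf-8")
    if len(encoded) <= limit:
        return value, False
    # Decode the first `limit` bytes, dropping any trailing partial code point.
    truncated = encoded[:limit].decode("utf-8", errors="ignore")
    return truncated, True
```

Note that the limit is measured on the encoded bytes, so multi-byte characters count more than once toward it.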

Reference

Custom Distribution

Custom distributions are used to record the distribution of arbitrary values.

It should be used only when direct control over how the histogram buckets are computed is required. Otherwise, prefer the standard distribution metric types, such as timing distribution or memory distribution.

Note: Custom distributions are currently not universally supported. See below for available APIs.

Recording API

accumulateSamples

Accumulate the provided samples in the metric.

import org.mozilla.yourApplication.GleanMetrics.Graphics

Graphics.checkerboardPeak.accumulateSamples(listOf(23L))
import java.util.List;
import org.mozilla.yourApplication.GleanMetrics.Graphics;

Graphics.INSTANCE.checkerboardPeak().accumulateSamples(List.of(23L));
use glean_metrics::graphics;

graphics::checkerboard_peak.accumulate_samples_signed(vec![23]);
import * as graphics from "./path/to/generated/files/graphics.js";

graphics.checkerboardPeak.accumulateSamples([23]);

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::graphics::checkerboard_peak.AccumulateSamples({ 23 });

JavaScript

Glean.graphics.checkerboardPeak.accumulateSamples([23])

accumulateSingleSample

Accumulates one sample and appends it to the metric.

import org.mozilla.yourApplication.GleanMetrics.Graphics

Graphics.checkerboardPeak.accumulateSingleSample(23)
import org.mozilla.yourApplication.GleanMetrics.Graphics;

Graphics.INSTANCE.checkerboardPeak().accumulateSingleSample(23);
use glean_metrics::graphics;

graphics::checkerboard_peak.accumulate_single_sample(23);
import * as graphics from "./path/to/generated/files/graphics.js";

graphics.checkerboardPeak.accumulateSingleSample(23);

This API is not currently exposed in Firefox Desktop, see Bug 1884183.

Limits

  • The maximum value of bucket_count is 100.
  • Only non-negative values may be recorded (>= 0).

Recorded errors

  • invalid_value: If recording a negative value.

Testing API

testGetValue

Gets the recorded value for a given custom distribution metric.
Returns a struct with counts per bucket and the total sum if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Graphics

// Get snapshot
val snapshot = Graphics.checkerboardPeak.testGetValue()

// Does the sum have the expected value?
assertEquals(23, snapshot.sum)

// Does the count have the expected value?
assertEquals(1L, snapshot.count)

// Buckets are indexed by their lower bound.
assertEquals(1L, snapshot.values[19])
import org.mozilla.yourApplication.GleanMetrics.Graphics;

// Get snapshot
var snapshot = Graphics.INSTANCE.checkerboardPeak().testGetValue();

// Does the sum have the expected value?
assertEquals(23, snapshot.sum);

// Does the count have the expected value?
assertEquals(1L, snapshot.count);
use glean_metrics::graphics;

// Get snapshot
let snapshot = graphics::checkerboard_peak.test_get_value(None).unwrap();

// Does the sum have the expected value?
assert_eq!(23, snapshot.sum);

// Does the count have the expected value?
assert_eq!(1, snapshot.count);

// Buckets are indexed by their lower bound.
assert_eq!(1, snapshot.values[19]);
import * as graphics from "./path/to/generated/files/graphics.js";

// Get snapshot
const snapshot = await graphics.checkerboardPeak.testGetValue();

// Does the sum have the expected value?
assert.equal(23, snapshot.sum);

// Buckets are indexed by their lower bound.
assert.equal(1, snapshot.values[19]);

C++

#include "mozilla/glean/GleanMetrics.h"

auto data = mozilla::glean::graphics::checkerboard_peak.TestGetValue().value();
ASSERT_EQ(23UL, data.sum);

JavaScript

let data = Glean.graphics.checkerboardPeak.testGetValue();
Assert.equal(23, data.sum);

testGetNumRecordedErrors

Gets the number of errors recorded for a given custom distribution metric.

import org.mozilla.yourApplication.GleanMetrics.Graphics

// Did the metric receive a negative value?
assertEquals(
    0,
    Graphics.checkerboardPeak.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Graphics;

// Did the metric receive a negative value?
assertEquals(
    0,
    Graphics.INSTANCE.checkerboardPeak().testGetNumRecordedErrors(
        ErrorType.INVALID_VALUE
    )
)
use glean::ErrorType;
use glean_metrics::graphics;

// Were any of the values negative and thus caused an error to be recorded?
assert_eq!(
    0,
    graphics::checkerboard_peak.test_get_num_recorded_errors(
        ErrorType::InvalidValue,
        None
    )
);
import * as graphics from "./path/to/generated/files/graphics.js";
import { ErrorType } from "@mozilla/glean/error";

// Did the metric receive a negative value?
assert.equal(
    0,
    graphics.checkerboardPeak.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric Parameters

Example custom distribution metric definition:

graphics:
  checkerboard_peak:
    type: custom_distribution
    description: >
      Peak number of CSS pixels checkerboarded during a checkerboard event.
    range_min: 1
    range_max: 66355200
    bucket_count: 50
    histogram_type: exponential
    unit: pixels
    gecko_datapoint: CHECKERBOARD_PEAK
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

Extra metric parameters

Custom distributions have the following required parameters:

  • range_min: (Integer) The minimum value of the first bucket
  • range_max: (Integer) The minimum value of the last bucket
  • bucket_count: (Integer) The number of buckets
  • histogram_type:
    • linear: The buckets are evenly spaced
    • exponential: The buckets follow a natural logarithmic distribution

Note: Check out how these bucketing algorithms would behave on the Custom distribution simulator.
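
As a rough illustration of the two bucketing schemes (the SDK's exact boundary algorithm may differ; these helpers are hypothetical):

```python
import math

# Illustrative bucket-boundary computation; the SDK's exact algorithm may differ.
def linear_buckets(range_min: int, range_max: int, bucket_count: int) -> list[int]:
    """Evenly spaced bucket lower bounds from range_min to range_max."""
    step = (range_max - range_min) / (bucket_count - 1)
    return sorted({round(range_min + step * i) for i in range(bucket_count)})

def exponential_buckets(range_min: int, range_max: int, bucket_count: int) -> list[int]:
    """Logarithmically spaced bucket lower bounds from range_min to range_max."""
    log_min, log_max = math.log(range_min), math.log(range_max)
    step = (log_max - log_min) / (bucket_count - 1)
    # Rounding can produce duplicate bounds for small ranges; the set collapses them.
    return sorted({round(math.exp(log_min + step * i)) for i in range(bucket_count)})
```

Samples are then counted in the bucket whose lower bound is the largest one not exceeding the sample, which is why snapshots index buckets by their lower bound.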

Custom distributions have the following optional parameters:

  • unit: (String) The unit of the values in the metric. For documentation purposes only -- does not affect data collection.
  • gecko_datapoint: (String) This is a Gecko-specific property. It is the name of the Gecko metric to accumulate the data from, when using a Glean SDK in a product using GeckoView.

Reference


Labeled Custom Distributions

Labeled custom distributions are used to record different related distributions of arbitrary values.

If your data is timing or memory based and you don't need direct control over histogram buckets, consider the timing distribution or memory distribution metric types instead.

Recording API

accumulateSamples

Accumulate the provided samples in the metric.

use glean_metrics::network;

network::http3_late_ack_ratio
    .get("ack")
    .accumulate_samples_signed(vec![(stats.late_ack * 10000) / stats.packets_tx]);
network::http3_late_ack_ratio
    .get("pto")
    .accumulate_samples_signed(vec![(stats.pto_ack * 10000) / stats.packets_tx]);

Recorded Errors

  • invalid_value: if recording any negative samples
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

accumulateSingleSample

Accumulates one sample and appends it to the metric.

use glean_metrics::network;

network::http3_late_ack_ratio
    .get("ack")
    .accumulate_single_sample((stats.late_ack * 10000) / stats.packets_tx);
network::http3_late_ack_ratio
    .get("pto")
    .accumulate_single_sample((stats.pto_ack * 10000) / stats.packets_tx);

Recorded Errors

  • invalid_value: if recording a negative sample
  • invalid_label:
    • If the label contains invalid characters. Data is still recorded to the special label __other__.
    • If the label exceeds the maximum number of allowed characters. Data is still recorded to the special label __other__.

Testing API

testGetValue

Gets the recorded value for a given label in a labeled custom distribution metric. Returns a struct with counts per bucket and the total sum if data is stored. Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

use glean_metrics::network;

// Assert the sum of all samples is 42.
assert_eq!(42, network::http3_late_ack_ratio.get("ack").test_get_value(None).unwrap().sum);

// Assert there's only the one sample
assert_eq!(1, network::http3_late_ack_ratio.get("ack").test_get_value(None).unwrap().count);

// Buckets are indexed by their lower bound.
assert_eq!(1, network::http3_late_ack_ratio.get("ack").test_get_value(None).unwrap().values[41]);

testGetNumRecordedErrors

Gets the number of errors recorded for a given labeled custom distribution metric in total.

use glean::ErrorType;
use glean_metrics::network;

// Assert there were no negative values instrumented.
assert_eq!(
    0,
    network::http3_late_ack_ratio.test_get_num_recorded_errors(
        ErrorType::InvalidValue,
        None
    )
);

Metric parameters

Example labeled custom distribution metric definition:

network:
  http3_late_ack_ratio:
    type: labeled_custom_distribution
    description: >
      HTTP3: The ratio of spurious retransmissions per packets sent,
      represented as an integer permdecimille:
      `(spurious_retransmission / packet sent * 10000)`
    range_min: 1
    range_max: 2000
    bucket_count: 100
    histogram_type: exponential
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 175
    labels:
      - ack
      - pto

Extra metric parameters

range_min, range_max, bucket_count, and histogram_type (Required)

Labeled custom distributions have the following required parameters:

  • range_min: (Integer) The minimum value of the first bucket
  • range_max: (Integer) The minimum value of the last bucket
  • bucket_count: (Integer) The number of buckets
  • histogram_type:
    • linear: The buckets are evenly spaced
    • exponential: The buckets follow a natural logarithmic distribution

labels

Labeled metrics may have an optional labels parameter, containing a list of known labels. The labels in this list must match the following requirements:

  • Conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • This list itself is limited to 100 labels.
Important

If the labels are specified in the metrics.yaml, using any label not listed in that file will be replaced with the special value __other__.

If the labels are not specified in the metrics.yaml, only 16 different dynamic labels may be used, after which the special value __other__ will be used.
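
That dynamic-label behavior can be sketched as follows (an illustrative model, not the SDK's implementation; the helper name is hypothetical and the printable-ASCII check stands in for the full label format):

```python
OTHER_LABEL = "__other__"
MAX_DYNAMIC_LABELS = 16
MAX_LABEL_LENGTH = 71

# Illustrative sketch of dynamic label handling; not the SDK's implementation.
def resolve_label(label: str, seen: set[str]) -> str:
    """Map a requested label to the label data is actually recorded under."""
    # Printable ASCII only, and at most 71 characters.
    if len(label) > MAX_LABEL_LENGTH or not all(" " <= c <= "~" for c in label):
        return OTHER_LABEL
    if label in seen:
        return label
    # After 16 distinct dynamic labels, everything else goes to __other__.
    if len(seen) >= MAX_DYNAMIC_LABELS:
        return OTHER_LABEL
    seen.add(label)
    return label
```

Previously seen labels keep recording under their own name even after the cap is reached; only new labels fall through to `__other__`.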

Removing or changing labels, including their order in the registry file, is permitted. Avoid reusing labels that were removed in the past. It is best practice to add documentation about removed labels to the description field so that analysts will know of their existence and meaning in historical data. Special care must be taken when changing GeckoView metrics sent through the Glean SDK, as the index of the labels is used to report Gecko data through the Glean SDK.

Data questions

  • What is the distribution of retransmission ratios per connection?
  • What is the distribution of how long it takes to load extensions' content scripts, by addon id?

Limits

  • The maximum value of bucket_count is 100.
  • Only non-negative integer values may be recorded (>=0).
  • Labels must conform to the label format.
  • Each label must have a maximum of 71 characters.
  • Each label must only contain printable ASCII characters.
  • The list of labels is limited to:
    • 16 different dynamic labels if no static labels are defined. Additional labels will all record to the special label __other__.
    • 100 labels if specified as static labels in metrics.yaml, see Labels. Unknown labels will be recorded under the special label __other__.

Reference

Quantity

Used to record a single non-negative integer value or 0. For example, the width of the display in pixels.

Do not use Quantity for counting

If you need to count something (e.g. the number of times a button is pressed) prefer using the Counter metric type, which has a specific API for counting things.

Recording API

set

Sets a quantity metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.Display

Display.width.set(width)
import org.mozilla.yourApplication.GleanMetrics.Display;

Display.INSTANCE.width().set(width);
Display.width.set(width)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.display.width.set(width)
use glean_metrics::display;

display::width.set(width);
import * as display from "./path/to/generated/files/display.js";

display.width.set(window.innerWidth);

C++

#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::display::width.Set(innerWidth);

JavaScript

Glean.display.width.set(innerWidth);

Limits

  • Quantities must be non-negative integers or 0.

Recorded errors

  • invalid_value: If a negative value is set.

Testing API

testGetValue

Gets the recorded value for a given quantity metric.
Returns the quantity value if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Display

// Does the quantity have the expected value?
assertEquals(433, Display.width.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Display;

// Does the quantity have the expected value?
assertEquals(433, Display.INSTANCE.width().testGetValue());
// Does the quantity have the expected value?
XCTAssertEqual(433, try Display.width.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Does the quantity have the expected value?
assert 433 == metrics.display.width.test_get_value()
use glean_metrics::display;

// Was anything recorded?
assert_eq!(433, display::width.test_get_value(None).unwrap());
import * as display from "./path/to/generated/files/display.js";

assert.strictEqual(433, await display.width.testGetValue());

C++

#include "mozilla/glean/GleanMetrics.h"

ASSERT_TRUE(mozilla::glean::display::width.TestGetValue().isOk());
ASSERT_EQ(433, mozilla::glean::display::width.TestGetValue().unwrap().value());

JavaScript

// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.equal(433, Glean.display.width.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given quantity metric.

import org.mozilla.yourApplication.GleanMetrics.Display

// Did it record an error due to a negative value?
assertEquals(0, Display.width.testGetNumRecordedErrors(ErrorType.INVALID_VALUE))
import org.mozilla.yourApplication.GleanMetrics.Display;

// Did the quantity record a negative value?
assertEquals(
    0,
    Display.INSTANCE.width().testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
);
// Did the quantity record a negative value?
XCTAssertEqual(0, Display.width.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Did the quantity record a negative value?
from glean.testing import ErrorType
assert 0 == metrics.display.width.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::display;

assert_eq!(
  0,
  display::width.test_get_num_recorded_errors(
    ErrorType::InvalidValue,
    None
  )
);
import * as display from "./path/to/generated/files/display.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await display.width.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric parameters

Example quantity metric definition:

display:
  width:
    type: quantity
    description: >
      The width of the display, in pixels.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01
    unit: pixels

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Extra metric parameters

unit

Quantities have the required unit parameter, which is a free-form string for documentation purposes.

Data questions

  • What is the width of the display, in pixels?

Reference

Rate

Used to count how often something happens relative to how often something else happens. Like how many documents use a particular CSS Property, or how many HTTP connections had an error. You can think of it like a fraction, with a numerator and a denominator.

All rates start without a value. A rate with a numerator of 0 is valid and will be sent to ensure we capture the "no errors happened" or "no use counted" cases.

Let the Glean metric do the counting

When using a rate metric, it is important to let the Glean metric do the counting. Using your own variable for counting and setting the metric yourself could be problematic: ping scheduling will make it difficult to ensure the metric is at the correct value at the correct time. Instead, count to the numerator and denominator as you go.

Recording API

addToNumerator / addToDenominator

Numerators and denominators need to be counted individually.

import org.mozilla.yourApplication.GleanMetrics.Network

if (connectionHadError) {
    Network.httpConnectionError.addToNumerator(1)
}

Network.httpConnectionError.addToDenominator(1)

External Denominators

If the rate uses an external denominator, adding to the denominator must be done through the denominator's counter API:

import org.mozilla.yourApplication.GleanMetrics.Network

if (connectionHadError) {
    Network.httpConnectionError.addToNumerator(1)
}

Network.httpConnections.add(1)
import org.mozilla.yourApplication.GleanMetrics.Network;

if (connectionHadError) {
    Network.INSTANCE.httpConnectionError().addToNumerator(1);
}

Network.INSTANCE.httpConnectionError().addToDenominator(1);

External Denominators

If the rate uses an external denominator, adding to the denominator must be done through the denominator's counter API:

import org.mozilla.yourApplication.GleanMetrics.Network;

if (connectionHadError) {
    Network.INSTANCE.httpConnectionError().addToNumerator(1);
}

Network.INSTANCE.httpConnections().add(1);
if (connectionHadError) {
    Network.httpConnectionError.addToNumerator(1)
}

Network.httpConnectionError.addToDenominator(1)

External Denominators

If the rate uses an external denominator, adding to the denominator must be done through the denominator's counter API:

if (connectionHadError) {
    Network.httpConnectionError.addToNumerator(1)
}

Network.httpConnections.add(1)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

if connection_had_error:
    metrics.network.http_connection_error.add_to_numerator(1)

metrics.network.http_connection_error.add_to_denominator(1)

External Denominators

If the rate uses an external denominator, adding to the denominator must be done through the denominator's counter API:

from glean import load_metrics
metrics = load_metrics("metrics.yaml")

if connection_had_error:
    metrics.network.http_connection_error.add_to_numerator(1)

metrics.network.http_connections.add(1)
use glean_metrics::network;

if connection_had_error {
    network::http_connection_error.add_to_numerator(1);
}

network::http_connection_error.add_to_denominator(1);

External Denominators

If the rate uses an external denominator, adding to the denominator must be done through the denominator's counter API:

use glean_metrics::network;

if connection_had_error {
    network::http_connection_error.add_to_numerator(1);
}

network::http_connections.add(1);
import * as network from "./path/to/generated/files/network.js";

if (connectionHadError) {
  network.httpConnectionError.addToNumerator(1);
}

network.httpConnectionError.addToDenominator(1);

C++

#include "mozilla/glean/GleanMetrics.h"

if (aHadError) {
  mozilla::glean::network::http_connection_error.AddToNumerator(1);
}
mozilla::glean::network::http_connection_error.AddToDenominator(1);

JavaScript

if (aHadError) {
  Glean.network.httpConnectionError.addToNumerator(1);
}
Glean.network.httpConnectionError.addToDenominator(1);

Recorded errors

  • invalid_value: If either numerator or denominator is incremented by a negative value.
  • invalid_type: If a floating point or non-number value is given.

Limits

  • Numerator and Denominator only increment.
  • Numerator and Denominator saturate at the largest value that can be represented as a 32-bit signed integer (2147483647).
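
Conceptually a rate is just a pair of saturating, increment-only counters, as in this sketch (an illustrative model under those two limits, not the SDK's implementation):

```python
I32_MAX = 2147483647  # counters saturate at the largest 32-bit signed integer

class RateSketch:
    """Minimal model of a rate metric: two monotonically increasing counters."""

    def __init__(self) -> None:
        self.numerator = 0
        self.denominator = 0
        self.errors = 0  # would be reported as invalid_value

    def add_to_numerator(self, amount: int) -> None:
        if amount < 0:
            self.errors += 1  # negative increments record an error instead
            return
        self.numerator = min(self.numerator + amount, I32_MAX)

    def add_to_denominator(self, amount: int) -> None:
        if amount < 0:
            self.errors += 1
            return
        self.denominator = min(self.denominator + amount, I32_MAX)
```

The ratio itself is never stored; analysis computes it from the reported numerator/denominator pair.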

Testing API

testGetValue

Gets the recorded value for a given rate metric.
Returns the numerator/denominator pair if data is stored.
Returns a language-specific empty/null value if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Network

assertEquals(Rate(1, 1), Network.httpConnectionError.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Network;

assertEquals(new Rate(1, 1), Network.INSTANCE.httpConnectionError().testGetValue());
XCTAssertEqual(
    Rate(numerator: 1, denominator: 1),
    Network.httpConnectionError.testGetValue()
)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert Rate(1, 1) == metrics.network.http_connection_error.test_get_value()
use glean_metrics::network;

let rate = network::http_connection_error.test_get_value(None).unwrap();
assert_eq!(1, rate.numerator);
assert_eq!(1, rate.denominator);
import * as network from "./path/to/generated/files/network.js";

const { numerator, denominator } = await network.httpConnectionError.testGetValue();
assert.strictEqual(numerator, 1);
assert.strictEqual(denominator, 1);

C++

#include "mozilla/glean/GleanMetrics.h"

auto pair = mozilla::glean::network::http_connection_error.TestGetValue().unwrap();
ASSERT_EQ(1, pair.first);
ASSERT_EQ(1, pair.second);

JavaScript

// testGetValue will throw NS_ERROR_LOSS_OF_SIGNIFICANT_DATA on error.
Assert.deepEqual(
  { numerator: 1, denominator: 1 },
  Glean.network.httpConnectionError.testGetValue()
);

testGetNumRecordedErrors

Gets the number of errors recorded for a given rate metric.

import org.mozilla.yourApplication.GleanMetrics.Network

assertEquals(
    0,
    Network.httpConnectionError.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
import org.mozilla.yourApplication.GleanMetrics.Network;

assertEquals(
    0,
    Network.INSTANCE.httpConnectionError().testGetNumRecordedErrors(
        ErrorType.INVALID_VALUE
    )
);
XCTAssertEqual(
    0,
    Network.httpConnectionError.testGetNumRecordedErrors(.invalidValue)
)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

from glean.testing import ErrorType

assert 0 == metrics.network.http_connection_error.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)
use glean::ErrorType;
use glean_metrics::network;

assert_eq!(
    0,
    network::http_connection_error.test_get_num_recorded_errors(
        ErrorType::InvalidValue,
        None
    )
);
import * as network from "./path/to/generated/files/network.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await network.httpConnectionError.testGetNumRecordedErrors(ErrorType.InvalidValue)
);

Metric parameters

Example rate metric definition:

network:
  http_connection_error:
    type: rate
    description: >
      How many HTTP connections error out, out of the total connections made.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

External Denominators

If several rates share the same denominator then the denominator should be defined as a counter and shared between rates using the denominator_metric property:

network:
  http_connections:
    type: counter
    description: >
      Total number of http connections made.
    ...

  http_connection_error:
    type: rate
    description: >
      How many HTTP connections error out, out of the total connections made.
    denominator_metric: network.http_connections
    ...

  http_connection_slow:
    type: rate
    description: >
      How many HTTP connections were slow, out of the total connections made.
    denominator_metric: network.http_connections
    ...

The Glean JavaScript SDK does not support external denominators for Rate metrics yet. Follow Bug 1745753 for updates on that feature's development.

Data Questions

  • How often did an HTTP connection error?
  • How many documents used a given CSS Property?

Reference

Text

Records a single long Unicode text, used when the limits on String are too low.

Important

This type should only be used in special cases when other metrics don't fit. See limitations below.
Reach out to the Glean team before using this.

Recording API

set

Sets a text metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.Article

Article.content.set(extractedText)
import org.mozilla.yourApplication.GleanMetrics.Article;

Article.INSTANCE.content().set(extractedText);
Article.content.set(extractedText)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

metrics.article.content.set(extracted_text)
use glean_metrics::article;

article::content.set(extracted_text);
import * as article from "./path/to/generated/files/article.js";

article.content.set(extractedText);

Limits

  • Text metrics can only be sent in custom pings.
  • Text metrics are always of data collection category 3 (web_activity) or category 4 (highly_sensitive).
  • Only ping and application lifetimes are allowed.
  • Fixed maximum text length: 200 kilobytes. Longer text is truncated. This is measured in the number of bytes when the string is encoded in UTF-8.

Recorded errors

  • invalid_overflow: If the text exceeds the maximum length and is truncated.

Testing API

testGetValue

Gets the recorded value for a given text metric.
Returns the string if data is stored.
Returns null if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Article

assertEquals("some content", Article.content.testGetValue())
import org.mozilla.yourApplication.GleanMetrics.Article;

assertEquals("some content", Article.INSTANCE.content().testGetValue());
XCTAssertEqual("some content", Article.content.testGetValue())
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

assert "some content" == metrics.article.content.test_get_value()
use glean_metrics::article;

assert_eq!("some content", article::content.test_get_value(None).unwrap());
import * as article from "./path/to/generated/files/article.js";

assert.strictEqual("some content", await article.content.testGetValue());

testGetNumRecordedErrors

Gets the number of errors recorded for a given text metric.

import org.mozilla.yourApplication.GleanMetrics.Article

// Was the string truncated, and an error reported?
assertEquals(
    0,
    Article.content.testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
)
import org.mozilla.yourApplication.GleanMetrics.Article;

// Was the string truncated, and an error reported?
assertEquals(
    0,
    Article.content.testGetNumRecordedErrors(ErrorType.INVALID_OVERFLOW)
);
// Was the string truncated, and an error reported?
XCTAssertEqual(0, Article.content.testGetNumRecordedErrors(.invalidOverflow))
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

# Was the string truncated, and an error reported?
assert 0 == metrics.article.content.test_get_num_recorded_errors(
    ErrorType.INVALID_OVERFLOW
)
use glean::ErrorType;
use glean_metrics::article;

// Was the string truncated, and an error reported?
assert_eq!(
  0,
  article::content.test_get_num_recorded_errors(
    ErrorType::InvalidOverflow,
    None
  )
);
import * as article from "./path/to/generated/files/article.js";
import { ErrorType } from "@mozilla/glean/error";

assert.strictEqual(
  0,
  await article.content.testGetNumRecordedErrors(ErrorType.InvalidOverflow)
);

Metric parameters

Example text metric definition:

article:
  content:
    type: text
    lifetime: ping
    send_in_pings:
      - research
    data_sensitivity:
      - web_activity
    description: >
      The plaintext content of the displayed article.
    bugs:
      - https://bugzilla.mozilla.org/000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=000000#c3
    notification_emails:
      - me@mozilla.com
    expires: 2020-10-01

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Data questions

  • What article content was displayed to the user?

Object

Record structured data.

Recording API

set

Sets an object metric to a specific value.

import org.mozilla.yourApplication.GleanMetrics.Party

var balloons = Party.BalloonsObject()
balloons.add(Party.BalloonsObjectItem(colour = "red", diameter = 5))
balloons.add(Party.BalloonsObjectItem(colour = "green"))
Party.balloons.set(balloons)
var balloons: Party.BalloonsObject = []
balloons.append(Party.BalloonsObjectItem(colour: "red", diameter: 5))
balloons.append(Party.BalloonsObjectItem(colour: "green"))
Party.balloons.set(balloons)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

balloons = metrics.BalloonsObject()
balloons.append(BalloonsObjectItem(colour="red", diameter=5))
balloons.append(BalloonsObjectItem(colour="green"))
metrics.party.balloons.set(balloons)

C++

Not yet implemented.

JavaScript

let balloons = [
  { colour: "red", diameter: 5 },
  { colour: "blue", diameter: 7 },
];
Glean.party.balloons.set(balloons);

Limits

  • Only objects matching the specified structure will be recorded

Recorded errors

  • invalid_value: if the passed value doesn't match the predefined structure

Testing API

testGetValue

Gets the recorded value for a given object metric.
Returns the data as a JSON object if data is stored.
Returns null if no data is stored. Has an optional argument to specify the name of the ping you wish to retrieve data from, except in Rust where it's required. None or no argument will default to the first value found for send_in_pings.

import org.mozilla.yourApplication.GleanMetrics.Party

val snapshot = Party.balloons.testGetValue()!!
assertEquals(2, snapshot.jsonArray.size)
let snapshot = (try! Party.balloons.testGetValue()) as! [Any]
XCTAssertEqual(2, snapshot.count)
from glean import load_metrics
metrics = load_metrics("metrics.yaml")

snapshot = metrics.party.balloons.test_get_value()
assert 2 == len(snapshot)

C++

Not yet implemented.

JavaScript

// testGetValue will throw a data error on invalid value.
Assert.deepEqual(
  [
    { colour: "red", diameter: 5 },
    { colour: "blue", diameter: 7 },
  ],
  Glean.party.balloons.testGetValue()
);

testGetNumRecordedErrors

Gets the number of errors recorded for a given object metric.

import org.mozilla.yourApplication.GleanMetrics.Party

assertEquals(
    0,
    Party.balloons.testGetNumRecordedErrors(ErrorType.INVALID_VALUE)
)
XCTAssertEqual(0, Party.balloons.testGetNumRecordedErrors(.invalidValue))
from glean import load_metrics
from glean.testing import ErrorType
metrics = load_metrics("metrics.yaml")

assert 0 == metrics.party.balloons.test_get_num_recorded_errors(
    ErrorType.INVALID_VALUE
)

Metric parameters

The definition for an object metric type accepts a structure parameter. This defines the accepted structure of the object using a subset of JSON schema.

The allowed types are:

  • string
  • number
  • boolean
  • array
  • object

The array type takes an items parameter, which defines the type of elements it can hold.
The object type takes a properties parameter, which defines the nested object structure.

array and object types can be nested.
No other schema parameters are allowed.
All fields are optional.

Data is validated against this schema at recording time.
Missing values will not be serialized into the payload.
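The validation rules above can be illustrated with a small sketch. This is not the SDK's actual implementation, just a hypothetical helper showing how a recorded value is checked against the supported JSON-schema subset:

```python
# Hypothetical sketch of structure validation; NOT the SDK's actual code.

def validate(value, schema):
    """Return True if `value` matches the schema subset described above."""
    kind = schema["type"]
    if kind == "string":
        return isinstance(value, str)
    if kind == "number":
        # Booleans are a subclass of int in Python; exclude them here.
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if kind == "boolean":
        return isinstance(value, bool)
    if kind == "array":
        items = schema.get("items")
        return isinstance(value, list) and all(
            items is None or validate(item, items) for item in value
        )
    if kind == "object":
        if not isinstance(value, dict):
            return False
        props = schema.get("properties", {})
        # No extra keys are allowed; all declared fields are optional.
        return all(
            key in props and validate(val, props[key])
            for key, val in value.items()
        )
    return False

# The `balloons` structure from the examples in this chapter.
balloons_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "colour": {"type": "string"},
            "diameter": {"type": "number"},
        },
    },
}

print(validate([{"colour": "red", "diameter": 5}, {"colour": "green"}], balloons_schema))  # True
print(validate([{"colour": 5}], balloons_schema))  # False: colour must be a string
```

A value failing this check is what triggers the invalid_value error described above.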

Example object metric definition:

party:
  balloons:
    type: object
    description: A collection of balloons
    bugs:
      - https://bugzilla.mozilla.org/TODO
    data_reviews:
      - http://example.com/reviews
    notification_emails:
      - CHANGE-ME@example.com
    expires: never
    structure:
      type: array
      items:
        type: object
        properties:
          colour:
            type: string
          diameter:
            type: number

For a full reference on metrics parameters common to all metric types, refer to the metrics YAML registry format reference page.

Data questions

  • What is the crash stack after a Firefox main process crash?

Pings

Glean-owned pings are submitted automatically

Products do not need to submit Glean built-in pings, as their scheduling is managed internally. The APIs on this page are only relevant for products defining custom pings.

Submission API

submit

Collect and queue a custom ping for eventual uploading.

By default, if the ping doesn't currently have any events or metrics set, submit will do nothing. However, if the send_if_empty flag is set to true in the ping definition, it will always be submitted.

It is not necessary for the caller to check if Glean is disabled before calling submit. If Glean is disabled, submit is a no-op.
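The two rules above can be sketched as a small decision function (hypothetical names; the real logic lives inside each SDK):

```python
# Hypothetical sketch of the `submit` decision logic described above;
# not the SDK's actual code.

def should_submit(upload_enabled, has_data, send_if_empty):
    # If Glean is disabled, `submit` is always a no-op.
    if not upload_enabled:
        return False
    # Empty pings are dropped unless `send_if_empty: true` is set
    # in the ping definition.
    return has_data or send_if_empty

print(should_submit(True, False, False))   # False: nothing recorded
print(should_submit(True, False, True))    # True: send_if_empty wins
print(should_submit(False, True, True))    # False: Glean is disabled
```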

For example, to submit the custom ping defined in Adding new custom pings:

import org.mozilla.yourApplication.GleanMetrics.Pings
Pings.search.submit(
    Pings.searchReasonCodes.performed
)
import org.mozilla.yourApplication.GleanMetrics.Pings

Pings.INSTANCE.search.submit(
    Pings.INSTANCE.searchReasonCodes.performed
);
import Glean

GleanMetrics.Pings.shared.search.submit(
    reason: .performed
)
from glean import load_pings

pings = load_pings("pings.yaml")

pings.search.submit(pings.search_reason_codes.PERFORMED)
use glean_metrics::pings;

pings::search.submit(pings::SearchReasonCodes::Performed);
import * as pings from "./path/to/generated/files/pings.js";

pings.search.submit(pings.searchReasonCodes.Performed);

C++

mozilla::glean_pings::Search.Submit("performed"_ns);

JavaScript

GleanPings.search.submit("performed");

Testing API

testBeforeNextSubmit

Runs a validation function before the ping is collected.

import org.mozilla.yourApplication.GleanMetrics.Search
import org.mozilla.yourApplication.GleanMetrics.Pings

// Record some data.
Search.defaultEngine.add(5);

// Instruct the ping API to validate the ping data.
var validatorRun = false
Pings.search.testBeforeNextSubmit { reason ->
    assertEquals(Pings.searchReasonCodes.performed, reason)
    assertEquals(5, Search.defaultEngine.testGetValue())
    validatorRun = true
}

// Submit the ping.
Pings.search.submit(
    Pings.searchReasonCodes.performed
)

// Verify that the validator ran.
assertTrue(validatorRun)
import org.mozilla.yourApplication.GleanMetrics.Search
import org.mozilla.yourApplication.GleanMetrics.Pings

// Record some data.
Search.INSTANCE.defaultEngine.add(5);

// Instruct the ping API to validate the ping data.
final boolean[] validatorRun = {false};
Pings.INSTANCE.search.testBeforeNextSubmit((reason) -> {
    assertEquals(Pings.searchReasonCodes.performed, reason);
    assertEquals(5, Search.INSTANCE.defaultEngine.testGetValue());
    validatorRun[0] = true;
});

// Submit the ping.
Pings.INSTANCE.search.submit(
    Pings.INSTANCE.searchReasonCodes.performed
);

// Verify that the validator ran.
assertTrue(validatorRun[0]);
// Record some data.
Search.defaultEngine.add(5)

// Instruct the ping API to validate the ping data.
var validatorRun = false
GleanMetrics.Pings.shared.search.testBeforeNextSubmit { reason in
    XCTAssertEqual(.performed, reason, "Unexpected reason for search ping submitted")
    XCTAssertEqual(5, try Search.defaultEngine.testGetValue(), "Unexpected value for default engine in search ping")
    validatorRun = true
}

// Submit the ping.
GleanMetrics.Pings.shared.search.submit(
    reason: .performed
)

// Verify that the validator ran.
XCTAssert(validatorRun, "Expected validator to be called by now.")
from glean import load_metrics, load_pings

pings = load_pings("pings.yaml")
metrics = load_metrics("metrics.yaml")

# Record some data.
metrics.search.default_engine.add(5)

# A mutable container is needed, since a plain boolean can't be
# reassigned from inside the callback.
callback_was_called = [False]

def check_custom_ping(reason):
    assert reason == pings.search_reason_codes.PERFORMED
    assert 5 == metrics.search.default_engine.test_get_value()
    callback_was_called[0] = True

# Instruct the ping API to validate the ping data.
pings.search.test_before_next_submit(check_custom_ping)

# Submit the ping.
pings.search.submit(pings.search_reason_codes.PERFORMED)

# Verify that the validator ran.
assert callback_was_called[0]
use glean_metrics::{search, pings};

// Record some data.
search::default_engine.add(5);

// Instruct the ping API to validate the ping data.
pings::search.test_before_next_submit(move |reason| {
    assert_eq!(pings::SearchReasonCodes::Performed, reason);
    assert_eq!(5, search::default_engine.test_get_value(None).unwrap());
});

// When the `submit` API is not directly called by the
// test code, it may be worth checking that the validator
// function ran, by using a canary boolean in the closure
// passed to `test_before_next_submit` and asserting on its
// value after submission.

// Submit the ping.
pings::search.submit(pings::SearchReasonCodes::Performed);
import * as search from "./path/to/generated/files/search.js";
import * as pings from "./path/to/generated/files/pings.js";

// Record some data.
search.defaultEngine.add(5);

// Instruct the ping API to validate the ping data.
let validatorRun = false;
const p = pings.search.testBeforeNextSubmit(async reason => {
  assert.strictEqual(reason, "performed");
  assert.strictEqual(await search.defaultEngine.testGetValue(), 5);
  validatorRun = true;
});

// Submit the ping.
pings.search.submit("performed");
// Wait for the validation to finish.
assert.doesNotThrow(async () => await p);

// Verify that the validator ran.
assert.ok(validatorRun);

JavaScript:

Glean.search.defaultEngine.add(5);
let submitted = false;
GleanPings.search.testBeforeNextSubmit(reason => {
  submitted = true;
  Assert.equal(5, Glean.search.defaultEngine.testGetValue());
});
GleanPings.search.submit();
Assert.ok(submitted);

C++:

mozilla::glean::search::default_engine.Add(5);
bool submitted = false;
mozilla::glean_pings::Search.TestBeforeNextSubmit([&submitted](const nsACString& aReason) {
  submitted = true;
  ASSERT_EQ(5, mozilla::glean::search::default_engine.TestGetValue().unwrap().ref());
});
mozilla::glean_pings::Search.Submit();
ASSERT_TRUE(submitted);

Overview

The Glean SDKs are available for several programming languages and development environments.

Although there are different SDKs, each of them is based on either a JavaScript core or a Rust core. These cores contain the bulk of the logic of the client libraries. Thin wrappers around them expose APIs for the target platforms. Each SDK may also send a different set of default pings and collect a different set of default metrics.

Finally, each SDK may also be accompanied¹ by glean_parser, a Python command line utility that provides a set of useful development tools for developers instrumenting a project using Glean.

1

Some SDKs are not bundled with glean_parser and it is left to the user to install it separately.

Rust Core-based SDKs

The code for the Rust Core-based SDKs is available on the mozilla/glean repository.

This group of SDKs was previously referred to as "language bindings", i.e. "the Kotlin language bindings" or "the Python language bindings".

Rust

The Glean Rust SDK can be used with any Rust application. It can serve a wide variety of usage patterns, such as short-lived CLI applications as well as longer-running desktop or server applications. It can optionally send builtin pings at startup. It does not assume an interaction model; the integrating application is responsible for connecting the respective hooks.

It is available as the glean crate on crates.io.

Kotlin

The Glean Kotlin SDK is primarily used for integration with Android applications. It assumes a common interaction model for mobile applications. It sends builtin pings at startup of the integrating application.

It is available standalone as org.mozilla.telemetry:glean or via Android Components as org.mozilla.components:service-glean from the Mozilla Maven instance.

The Kotlin SDK can also be used from Java.

See Android for more on integrating Glean on Android.

Swift

The Glean Swift SDK is primarily used for integration with iOS applications. It assumes a common interaction model for mobile applications. It sends builtin pings at startup of the integrating application.

It is available as a standalone Xcode framework from the Glean releases page or bundled with the AppServices framework.

Python

The Glean Python SDK allows integration with any Python application. It can serve a wide variety of usage patterns, such as short-lived CLI applications as well as longer-running desktop or server applications.

It is available as glean-sdk on PyPI.

JavaScript Core-based SDKs

The code for the JavaScript Core-based SDKs is available on the mozilla/glean.js repository.

This collection of SDKs is commonly referred to as Glean.js.

JavaScript

The Glean JavaScript SDK allows integration with three distinct JavaScript environments: websites, web extensions and Node.js.

It is available as @mozilla/glean on npm. This package has different entry points to access the different SDKs.

  • @mozilla/glean/web gives access to the websites SDK
  • @mozilla/glean/webext gives access to the web extension SDK
  • @mozilla/glean/node gives access to the Node.js SDK

QML

The Glean QML SDK allows integration with Qt/QML applications and libraries.

It is available as a compressed file attached to each new Glean.js release and may be downloaded from the project GitHub releases page.

Android-specific information

The Glean Kotlin SDK can be used in Android applications. The integration into the build system for Android applications can be tuned to your needs. See the following chapters for details.

Android build script configuration options

This chapter describes build configuration options that control the behavior of the Glean Kotlin SDK's Gradle plugin. These options are not usually required for normal use.

Options can be turned on by setting a variable on the Gradle ext object before applying the Glean Gradle plugin.

gleanBuildDate

Overwrite the auto-generated build date.

If set to 0, a static UNIX epoch time will be used. If set to an ISO 8601 datetime string, it will use that date. Note that any timezone offset will be ignored and UTC will be used. For any other value it will throw an error.

ext.gleanBuildDate = "2022-01-03T17:30:00"
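The accepted values can be sketched with a small Python helper. This is purely illustrative of the rules above, not the Gradle plugin's actual code:

```python
# Hypothetical sketch of how `gleanBuildDate` values are interpreted;
# not the Gradle plugin's actual implementation.
from datetime import datetime, timezone

def parse_build_date(value):
    if value == "0":
        # A static UNIX epoch time is used.
        return datetime(1970, 1, 1, tzinfo=timezone.utc)
    try:
        parsed = datetime.fromisoformat(value)
    except ValueError:
        # Any other value is an error.
        raise ValueError(f"invalid gleanBuildDate: {value!r}")
    # Any timezone offset is ignored and UTC is used instead.
    return parsed.replace(tzinfo=timezone.utc)

print(parse_build_date("2022-01-03T17:30:00+05:00"))
# 2022-01-03 17:30:00+00:00
```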

allowMetricsFromAAR

Normally, the Glean Kotlin SDK looks for metrics.yaml and pings.yaml files in the root directory of the Glean-using project. However, in some cases, these files may need to ship inside the dependencies of the project. For example, this is used in the engine-gecko component to grab the metrics.yaml from the geckoview AAR.

ext.allowMetricsFromAAR = true

When this flag is set, every direct dependency of your library will be searched for a metrics.yaml file, and those metrics will be treated as if they were defined by your library. That is, API wrappers accessible from your library will be generated for those metrics.

The metrics.yaml can be added to the dependency itself by calling this on each relevant build variant:

variant.packageLibraryProvider.get().from("${topsrcdir}/path/metrics.yaml")

gleanGenerateMarkdownDocs

The Glean Kotlin SDK can automatically generate Markdown documentation for metrics and pings defined in the registry files, in addition to the metrics API code.

ext.gleanGenerateMarkdownDocs = true

Flipping the feature to true will generate a metrics.md file in $projectDir/docs at build-time. In general this is not necessary for projects using Mozilla's data ingestion infrastructure: in those cases human-readable documentation will automatically be viewable via the Glean Dictionary.

gleanDocsDirectory

The gleanDocsDirectory can be used to customize the path of the documentation output directory. If gleanGenerateMarkdownDocs is disabled, it does nothing. Please note that only the metrics.md will be overwritten: any other file available in the target directory will be preserved.

ext.gleanDocsDirectory = "$rootDir/docs/user/telemetry"

gleanYamlFiles

By default, the Glean Gradle plugin will look for metrics.yaml and pings.yaml files in the same directory that the plugin is included from in your application or library. To override this, ext.gleanYamlFiles may be set to a list of explicit paths.

ext.gleanYamlFiles = ["$rootDir/glean-core/metrics.yaml", "$rootDir/glean-core/pings.yaml"]

gleanExpireByVersion

Expire the metrics and pings by version, using the provided major version.

If enabled, expiring metrics or pings by date will produce an error.

ext.gleanExpireByVersion = 25

Different products have different ways to compute the product version at build-time. For this reason the Glean Gradle plugin cannot provide an automated way to detect the product major version at build time. When using the expiration by version feature in Android, products must provide the major version by themselves.
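The expiration check can be sketched as follows. This is a hedged illustration, assuming a metric counts as expired once the product's major version reaches the metric's expires value; it is not the plugin's actual code:

```python
# Hypothetical sketch of version-based expiration. Assumption: a metric
# is expired once the product major version is >= its `expires` value.

def is_expired(expires, app_major_version):
    if expires == "never":
        return False
    if not str(expires).isdigit():
        # With expire-by-version enabled, date-based expiration is an error.
        raise ValueError(f"expiration by date is not allowed: {expires!r}")
    return app_major_version >= int(expires)

print(is_expired(25, 24))       # False: not yet reached
print(is_expired(25, 25))       # True: expired in this version
print(is_expired("never", 99))  # False: never expires
```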

gleanPythonEnvDir

By default, the Glean Gradle plugin will manage its own Python virtualenv in $gradleUserHomeDir/glean to install glean_parser. By specifying a path in ext.gleanPythonEnvDir you can reuse an existing Python virtualenv.

ext.gleanPythonEnvDir = "$buildDir/externallyManagedVenv"

glean_parser must be available in that virtualenv; the Gradle plugin will make no attempt to install it.

Offline builds of Android applications that use Glean

The Glean Kotlin SDK has basic support for building Android applications that use Glean in offline mode.

The Glean Kotlin SDK uses a Python script, glean_parser to generate code for metrics from the metrics.yaml and pings.yaml files when Glean-using applications are built. When online, the pieces necessary to run this script are installed automatically.

For offline builds, the Python environment and the packages for glean_parser and its dependencies must be provided before building the Glean-using application.

To build a Glean-using application in offline mode, you can either:

Provide an externally-managed virtualenv

Set ext.gleanPythonEnvDir to your existing virtualenv before applying the plugin, see gleanPythonEnvDir.

Provide a Python interpreter and the required wheels

In this mode Glean will set up its own virtualenv in $gradleUserHomeDir/glean, controlled by the GLEAN_PYTHON and GLEAN_PYTHON_WHEELS_DIR environment variables.

  • Install Python 3.8 or later and ensure it's on the PATH.

    • On Linux, installing Python from your Linux distribution's package manager is usually sufficient.

    • On macOS, installing Python from homebrew is known to work, but other package managers may also work.

    • On Windows, we recommend installing one of the official Python installers from python.org.

  • Determine the version of glean_parser required.

    • It can be really difficult to manually determine the version of glean_parser that is required for a given application, since it needs to be tracked through android-components, to glean-core and finally to glean_parser. The required version of glean_parser can be determined by running the following at the top-level of the Glean-using application:

      $ ./gradlew | grep "Requires glean_parser"
      Requires glean_parser==1.28.1
      
  • Download packages for glean_parser and its dependencies:

    • In the root directory of the Glean-using project, create a directory called glean-wheels and cd into it.

    • Download packages for glean_parser and its dependencies, replacing X.Y.Z with the correct version of glean_parser:

      $ python3 -m pip download glean_parser==X.Y.Z
      
  • Build the Glean-using project using ./gradlew, but passing in the --offline flag.

There are a couple of environment variables that control offline building:

  • To override the location of the Python interpreter to use, set the GLEAN_PYTHON environment variable. If unset, the first Python interpreter on the PATH will be used.

  • To override the location of the downloaded Python wheels, set the GLEAN_PYTHON_WHEELS_DIR environment variable. If unset, ${projectDir}/glean-wheels will be used.
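The resolution of these two variables can be sketched as follows (a hypothetical helper, not the Gradle plugin itself):

```python
# Hypothetical sketch of resolving the offline-build environment
# variables described above; not the Gradle plugin's actual code.
import os
import shutil

def resolve_offline_config(project_dir, env=os.environ):
    # GLEAN_PYTHON overrides the first interpreter found on the PATH.
    python = env.get("GLEAN_PYTHON") or shutil.which("python3")
    # GLEAN_PYTHON_WHEELS_DIR overrides ${projectDir}/glean-wheels.
    wheels = env.get("GLEAN_PYTHON_WHEELS_DIR") or os.path.join(
        project_dir, "glean-wheels"
    )
    return python, wheels

python, wheels = resolve_offline_config(
    "/src/app",
    env={"GLEAN_PYTHON": "/opt/python3/bin/python3"},
)
print(python)  # /opt/python3/bin/python3
print(wheels)  # /src/app/glean-wheels
```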

Instrumenting Android crashes with the Glean SDK

One of the things that might be useful to collect data on in an Android application is crashes. This guide will walk through a basic strategy for instrumenting an Android application with crash telemetry using a custom ping.

Note: This is a very simple example of instrumenting crashes using the Glean SDK. There will be challenges to using this approach in a production application that should be considered. For instance, when an app crashes it can be in an unknown state and may not be able to do things like upload data to a server. The recommended way of instrumenting crashes with Android Components is called lib-crash, which takes into consideration things like multiple processes and persistence.

Before You Start

There are a few things that need to be installed in order to proceed, mainly Android Studio. If you include the Android SDK, Android Studio can take a little while to download and get installed. This walk-through assumes some knowledge of Android application development. Knowing where to go to create a new project and how to add dependencies to a Gradle file will be helpful in following this guide.

Setup Build Configuration

Please follow the instruction in the "Adding Glean to your project" chapter in order to set up Glean in an Android project.

Add A Custom Metric

Since crashes will be instrumented with some custom metrics, the next step will be to add a metrics.yaml file to define the metrics used to record the crash information and a pings.yaml file to define a custom ping which will give some control over the scheduling of the uploading. See "Adding new metrics" for more information about adding metrics.

What metric type should be used to represent the crash data? While this could be implemented several ways, an event is an excellent choice, simply because events capture information in a nice concise way and they have a built-in way of passing additional information using the extras field. If it is necessary to pass along the cause of the exception or a few lines of description, events let us do that easily (with some limitations).

Now that a metric type has been chosen to represent the metric, the next step is creating the metrics.yaml. Inside of the root application folder of the Android Studio project create a new file named metrics.yaml. After adding the schema definition and event metric definition, the metrics.yaml should look like this:

# Required to indicate this is a `metrics.yaml` file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

crash:
  exception:
    type: event
    description: |
      Event to record crashes caused by unhandled exceptions
    notification_emails:
      - crashes@example.com
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
    expires:
      2021-01-01
    send_in_pings:
      - crash
    extra_keys:
      cause:
        description: The cause of the crash
      message:
        description: The exception message

As a brief explanation, this creates a metric called exception within a metric category called crash. There is a text description and the required notification_emails, bugs, data_reviews, and expires fields. It is important to note that the send_in_pings field has a value of - crash: this means the crash event metric will be sent via a custom ping named crash (which hasn't been created yet). Finally, note the extra_keys field, which has two keys defined, cause and message. This allows sending additional information along with the event, associated with these keys.

Note: For Mozilla applications, a mandatory data review is required in order to collect information with the Glean SDK.

Add A Custom Ping

Define the custom ping that will help control the upload scheduling by creating a pings.yaml file in the same directory as the metrics.yaml file. For more information about adding custom pings, see the section on custom pings.
The name of the ping will be crash, so the pings.yaml file should look like this:

# Required to indicate this is a `pings.yaml` file
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

crash:
  description: >
    A ping to transport crash data
  include_client_id: true
  notification_emails:
    - crash@example.com
  bugs:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479
  data_reviews:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=1582479

Before the newly defined metric or ping can be used, the application must first be built. This will cause the glean_parser to execute and generate the API files that represent the metric and ping that were newly defined.

Note: If changes to the YAML files aren't showing up in the project, try running the clean task before building, whenever one of the Glean YAML files has been modified.

It is recommended that Glean be initialized as early in the application startup as possible, which is why it's good to use a custom Application, like the Glean Sample App GleanApplication.kt.

Initializing Glean in the Application.onCreate() is ideal for this purpose. Start by adding the import statement to allow the usage of the custom ping that was created, adding the following to the top of the file:

import org.mozilla.gleancrashexample.GleanMetrics.Pings

Next, register the custom ping by calling Glean.registerPings(Pings) in the onCreate() function, preferably before calling Glean.initialize(). The completed function should look something like this:

override fun onCreate() {
  super.onCreate()

  // Register the application's custom pings.
  Glean.registerPings(Pings)

  // Initialize the Glean library
  Glean.initialize(applicationContext)
}

This completes the registration of the custom ping with the Glean Kotlin SDK, so that it knows about it and can manage its storage and other important details, like sending it when submit() is called.

Instrument The App To Record The Event

In order to make the custom Application class handle uncaught exceptions, extend the class definition by adding Thread.UncaughtExceptionHandler as an implemented interface, like this:

class GleanApplication : Application(), Thread.UncaughtExceptionHandler {
    ...
}

As part of implementing the Thread.UncaughtExceptionHandler interface, the custom Application needs to implement the override of the uncaughtException() function. An example of this override that records data and sends the ping could look something like this:

override fun uncaughtException(thread: Thread, exception: Throwable) {
    Crash.exception.record(
        mapOf(
            Crash.exceptionKeys.cause to exception.cause!!.toString(),
            Crash.exceptionKeys.message to exception.message!!
        )
    )
    Pings.crash.submit()
}

This records data to the Crash.exception metric from the metrics.yaml. The category of the metric is crash and the name is exception, so it is accessed by calling record() on the Crash.exception object. The extra information for the cause and the message is set as well. Finally, calling Pings.crash.submit() forces the crash ping to be scheduled for sending.

The final step is to register the custom Application as the default uncaught exception handler by adding the following to the onCreate() function after Glean.initialize(this):

Thread.setDefaultUncaughtExceptionHandler(this)

Next Steps

As it stands, this data would not actually be collected anywhere: the telemetry pipeline rejects pings from applications it does not already know about. In order to collect telemetry from a new application, additional work is necessary that is beyond the scope of this example: metadata for the project must be added to the probe_scraper. The instructions for accomplishing this can be found in the probe_scraper documentation.

iOS-specific information

The Glean Swift SDK can be used in iOS applications. The integration into the build system for iOS applications can be tuned to your needs. See the following chapters for details.

iOS build script configuration options

This chapter describes build configuration options that control the behavior of the Glean Swift SDK's sdk_generator.sh. These options are not usually required for normal use.

Options can be passed as command-line flags.

--output <PATH> / -o <PATH>

Default: $SOURCE_ROOT/$PROJECT/Generated

The folder to place generated code in.

--build-date <TEXT> / -b <TEXT>

Default: auto-generated.

Overwrite the auto-generated build date.

If set to 0, a static UNIX epoch time will be used. If set to an ISO 8601 datetime string (e.g. 2022-01-03T17:30:00), it will use that date. Note that any timezone offset will be ignored and UTC will be used. For any other value it will throw an error.

bash sdk_generator.sh --build-date 2022-01-03T17:30:00

--expire-by-version <INTEGER>

Default: none.

Expire the metrics and pings by version, using the provided major version.

If enabled, expiring metrics or pings by date will produce an error.

bash sdk_generator.sh --expire-by-version 95

Different products have different ways to compute the product version at build-time. For this reason the sdk_generator.sh script cannot provide an automated way to detect the product major version at build time. When using the expiration by version feature in iOS, products must provide the major version by themselves.

--markdown <PATH> / -m <PATH>

Default: unset.

The Glean Swift SDK can automatically generate Markdown documentation for metrics and pings defined in the registry files, in addition to the metrics API code. If set the documentation will be generated in the provided path.

bash sdk_generator.sh --markdown $SOURCE_ROOT/docs

In general this is not necessary for projects using Mozilla's data ingestion infrastructure: in those cases human-readable documentation will automatically be viewable via the Glean Dictionary.

--glean-namespace <NAME> / -g <NAME>

Default: Glean

The Glean namespace to use in generated code.

bash sdk_generator.sh --glean-namespace AnotherGlean

If you are using the combined release of application-services and the Glean Swift SDK you need to set the namespace to MozillaAppServices, e.g.:

bash sdk_generator.sh --glean-namespace MozillaAppServices

Glean JavaScript SDK

The Glean JavaScript SDK allows integration with three distinct JavaScript environments: websites, web extension and Node.js.

It is available as @mozilla/glean on npm. This package has different entry points to access the different SDKs.

  • @mozilla/glean/web gives access to the websites SDK
  • @mozilla/glean/webext gives access to the web extension SDK
  • @mozilla/glean/node gives access to the Node.js SDK

Command Line Interface

The @mozilla/glean package exposes the glean CLI. This utility installs glean_parser in a virtual environment and allows users to execute glean_parser commands through it.

In order to get a list of all commands available run npx glean --help.

Customizing virtual environment

By default, the glean CLI will look for (or create) a .venv virtual environment in the directory it is called from and install glean_parser there. However, it is possible to customize the directory in which it looks for a virtual environment by setting the VIRTUAL_ENV environment variable. Example:

VIRTUAL_ENV="my/venv/path" npx glean --help

By default Glean will try to use specific binaries based on the platform: python.exe on Windows and python3 everywhere else. You can customize the Python binary used through environment variables: both GLEAN_PYTHON and GLEAN_PIP may be defined. Example:

Mac:

export GLEAN_PYTHON=<path_to_python_binary>
export GLEAN_PIP=<path_to_pip_binary>

Windows:

SET GLEAN_PYTHON=<path_to_python_binary>
SET GLEAN_PIP=<path_to_pip_binary>
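The platform-based default selection described above can be sketched like this (a hypothetical helper; the real logic lives inside the glean CLI):

```python
# Hypothetical sketch of the default Python binary selection described
# above; not the glean CLI's actual code.
import os
import sys

def default_python(env=os.environ, platform=sys.platform):
    # GLEAN_PYTHON overrides the platform-based default.
    override = env.get("GLEAN_PYTHON")
    if override:
        return override
    # python.exe on Windows, python3 everywhere else.
    return "python.exe" if platform.startswith("win") else "python3"

print(default_python(env={}, platform="win32"))   # python.exe
print(default_python(env={}, platform="linux"))   # python3
print(default_python(env={"GLEAN_PYTHON": "/usr/local/bin/python3.11"}, platform="linux"))
```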

What if the glean command is called from inside an active virtual environment?

The VIRTUAL_ENV environment variable is automatically set while a virtual environment is active, in which case Glean will not create a new environment, but use the currently active one.

Preventing automatic installation of glean_parser

In order to prevent the glean CLI from installing glean_parser itself, it must be provided with the path to a virtual environment that already has glean_parser installed.

When doing that, it is important to keep the separate installation of glean_parser in sync with the version expected by the Glean SDK in use. The glean CLI exposes the --glean-parser-version option, which returns the expected glean_parser version for such use cases.

With this option it is possible to do something like:

pip install -U glean_parser==$(npx glean --glean-parser-version)

Plugins

The Glean JavaScript SDK accepts a plugin array as an initialization parameter.

import Glean from "@mozilla/glean/webext"
// This is not a real available plugin,
// it is a hypothetical one for illustration purposes.
import HypotheticalPlugin from "@mozilla/glean/plugins/hypothetical"

Glean.initialize(
  "my.fancy.modified.app",
  uploadStatus,
  {
    plugins: [
      new HypotheticalPlugin("with", "hypothetical", "arguments")
    ]
  }
);

Plugins attach to specific internal events on the SDK and can modify its behavior.

A big advantage of plugins is that they can address very specific use cases of Glean without bloating the final size of the SDK or overloading Glean's configuration object.

On writing your own plugins

It is not currently possible for users to write their own plugins, as this feature is still in its infancy.

Available plugins

PingEncryptionPlugin

The PingEncryptionPlugin encrypts ping payloads before they are sent.

The encryption happens after a ping is collected and before it is sent; the scope of this plugin does not include encrypting data before collection. In other words, Glean will still store plain data on the user's machine and the data is only encrypted in transit.

Encrypted pings will not be compliant with the usual Glean ping schema. Instead they will follow a specific encrypted ping schema that looks like this:

{
  "payload": "eyJhbGciOiJFQ0RILUVTI..."
}

The schema has a single field, payload, an unbounded string containing the output of encrypting the common Glean payload. Since JSON is the supported transport for our data ingestion platform, the encrypted payload is in the JWE Compact Serialization format and must include a key id (kid) header identifying the public key used for encryption. In the pipeline, the matching key is derived from the document namespace that uniquely identifies the application, rather than from the key id derived from the public key.
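For illustration, a JWE in Compact Serialization is five base64url segments separated by dots, and the first segment is the protected header where the kid lives. A sketch of inspecting it in Node.js follows; the token and header values are made up for the example:

```javascript
// Sketch: inspect the protected header of a JWE in Compact Serialization.
// The token constructed below is a made-up example, not a real Glean payload.
function jweProtectedHeader(jwe) {
  const segments = jwe.split(".");
  if (segments.length !== 5) {
    throw new Error("Not a JWE Compact Serialization");
  }
  // The first segment is the base64url-encoded protected header.
  return JSON.parse(Buffer.from(segments[0], "base64url").toString("utf8"));
}

// Build a fake token whose header carries a `kid`, as the pipeline expects.
const header = Buffer.from(
  JSON.stringify({ alg: "ECDH-ES", enc: "A256GCM", kid: "fancy" })
).toString("base64url");
const fakeJwe = [header, "", "iv", "ciphertext", "tag"].join(".");
console.log(jweProtectedHeader(fakeJwe).kid); // "fancy"
```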

Requesting an encryption key

The encryption key is provisioned by Data SRE and must be generated before new pings can be successfully ingested into a data store. Without a valid encryption key, the Glean pipeline will not be able to parse the pings and they will be thrown into the error stream.

The encryption key should be requested as part of the process of adding a Glean application id to the ingestion pipeline (see this checklist).

Usage

Entry point

//esm
import PingEncryptionPlugin from "@mozilla/glean/plugins/encryption"
// cjs
const { default: PingEncryptionPlugin } = require("@mozilla/glean/plugins/encryption");

Instantiating

The PingEncryptionPlugin constructor expects a JWK as a parameter. This JWK is the key provided by Data SRE during the "Requesting an encryption key" step.

Glean.initialize(
  "my.fancy.encrypted.app",
  uploadStatus,
  {
    plugins: [
      new PingEncryptionPlugin({
        "crv": "P-256",
        "kid": "fancy",
        "kty": "EC",
        "x": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "y": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      })
    ]
  }
);

Note on logPings

The logPings debug feature can still be used if ping encryption is turned on. Pings will be logged in their plain format, right before encryption happens.

Glossary

A glossary with explanations and background for wording used in the Glean project.

Glean

According to the dictionary the word “glean” means:

to gather information or material bit by bit

Glean is the combination of the Glean SDK, the Glean pipeline & Glean tools.

See also: Glean - product analytics & telemetry.

Glean Pipeline

The general data pipeline is the infrastructure that collects, stores, and analyzes telemetry data from our products and logs from various services. See An overview of Mozilla’s Data Pipeline.

The Glean pipeline additionally consists of:

Glean SDKs

The Glean SDKs are a bundle of libraries with support for different platforms. The source code is available at https://github.com/mozilla/glean and https://github.com/mozilla/glean.js.

Glean SDKs book

This documentation.

Glean tools

Glean provides additional tools for its usage:

Metric

Metrics are the individual things being measured using Glean. They are defined in metrics.yaml files, also known as registry files.

Glean itself provides some metrics out of the box.

Ping

A ping is a bundle of related metrics, gathered in a payload to be transmitted. The Glean SDKs provide default pings and allow for custom pings, see Glean Pings.

Submission

"To submit" means to collect & to enqueue a ping for uploading.

The Glean SDKs store all recorded metrics locally. Each ping has its own schedule for gathering its locally saved metrics and creating a JSON payload from them. This is called "collection".

Upon successful collection, the payload is queued for upload, which may not happen immediately or at all (for example, when network connectivity is not available).

Unless the user has defined their own custom pings, they don’t need to worry too much about submitting pings.

All the default pings have their scheduling and submission handled by the SDK.

Measurement window

The measurement window of a ping is the time frame in which metrics are being actively gathered for it.

The measurement window start time is the moment the previous ping is submitted. In the absence of a previous ping, this time will be the time the application process started.

The measurement window end time is the moment the current ping gets submitted. Any new metric recorded after submission will be part of the next ping, so this ping's measurement window is over.
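The window boundaries described above can be modeled as follows. This is a simplified sketch of the concept, not SDK code: each ping's window runs from the previous submission (or the process start, for the first ping) to its own submission.

```javascript
// Simplified sketch (not actual SDK code): compute measurement windows
// from a process start time and successive ping submission times.
function measurementWindows(processStart, submissions) {
  const windows = [];
  let start = processStart; // no previous ping: window opens at process start
  for (const submitted of submissions) {
    windows.push({ start, end: submitted }); // window closes on submission
    start = submitted; // the next ping's window opens immediately after
  }
  return windows;
}

console.log(measurementWindows(0, [10, 25]));
// [ { start: 0, end: 10 }, { start: 10, end: 25 } ]
```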

This Week in Glean (TWiG)

This Week in Glean is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work.

Changelog

Unreleased changes

Full changelog

v60.4.0 (2024-07-23)

Full changelog

  • General
    • Bump the string length limit to 255 characters (#2857)
    • New metric glean.database.write_time to measure database writes (#2845)
    • Require glean_parser v14.3.0 (bug 1909244)
  • Android
    • Delay log init until Glean is getting initialized (#2858)
    • Update to Gradle v8.8 (#2860)
    • Updated Kotlin to version 1.9.24 (#2861)
    • Default-enable delayPingLifetimeIo (#2863)
    • Preparing Glean to be able to remove service-glean from Android Components (#2891)
    • Gradle Plugin: Support for using an external Python environment (#2889)
  • Rust
    • New Metric Types: labeled_custom_distribution, labeled_memory_distribution, and labeled_timing_distribution (bug 1657947)

v60.3.0 (2024-05-31)

Full changelog

  • Android
    • Allow configuring delayPingLifetimeIo in Kotlin and auto-flush this data after 1000 writes. It is also auto-flushed on background. (#2851)

v60.2.0 (2024-05-23)

Full changelog

  • Rust
    • Accept a ping schedule map on initialize (#2839)

v60.1.1 (2024-05-31)

Full changelog

  • Android
    • Allow configuring delayPingLifetimeIo in Kotlin and auto-flush this data after 1000 writes. It is also auto-flushed on background. (#2851) (Backported changes)

v60.1.0 (2024-05-06)

Full changelog

  • Rust
    • New TimingDistribution API for no-allocation single-duration accumulation. (bug 1892097)
  • Python
    • Replace use of deprecated functionality (and make installs work on Python 3.12) (#2820)

v60.0.1 (2024-05-31)

Full changelog

  • Android
    • Allow configuring delayPingLifetimeIo in Kotlin and auto-flush this data after 1000 writes. It is also auto-flushed on background. (#2851) (Backported changes)

v60.0.0 (2024-04-22)

Full changelog

  • General
    • BREAKING CHANGE: Server Knobs API changes requiring changes to consuming applications which make use of Server Knobs (Bug 1889114)
    • BREAKING CHANGE: Deprecated Server Knobs API setMetricsDisabled has been removed from all bindings. (#2792)
    • Added support for ping_schedule metadata property so that pings can be scheduled to be sent when other pings are sent. (#2791)
  • Android
    • Updated Kotlin to version 1.9.23 (#2737)
    • New metric type: Object (#2796)
  • iOS
    • New metric type: Object (#2796)
  • Python
    • New metric type: Object (#2796)

v59.0.0 (2024-03-28)

Full changelog

  • General
    • Hide glean_timestamp from event extras in tests (#2776)
    • Timing Distribution's timer ids now begin at 1, rather than 0, to make some multi-language use cases easier. (2777)
    • Add a configuration option to disable internal pings (#2786)
    • Updated to UniFFI 0.27.0 (#2762)

v58.1.0 (2024-03-12)

Full changelog

  • General
    • Enable wall clock timestamp on all events by default (#2767)
  • Rust
    • Timing distributions and Custom distributions now expose accumulate_single_sample. This includes their traits, so consumers that make use of the traits will need to implement the new functions (Bug 1881297)
  • Android
    • Timing and Custom Distributions now have an accumulate_single_sample API that doesn't require the use of a collection (Bug 1881297)
  • Python
    • Timing Distributions now have both an accumulate_samples and an accumulate_single_sample API (Bug 1881297)

v58.0.0 (2024-02-29)

Full changelog

  • General
  • Rust
    • New metric type: Object (#2489)
    • BREAKING CHANGE: Support pings without {client|ping}_info sections (#2756)
  • Android
    • Upgrade Android NDK to r26c (#2745)

v57.0.0 (2024-02-12)

Full changelog

  • General
    • Added an experimental event listener API (#2719)
  • Android
    • BREAKING CHANGE: Update JNA to version 5.14.0. Projects using older JNA releases may encounter errors until they update. (#2727)
    • Set the target Android SDK to version 34 (#2709)
    • Fixed an incorrectly named method. The method is now correctly named setExperimentationId.
    • Update to Gradle v8.6 (#2721/#2731)

v56.1.0 (2024-01-16)

Full changelog

  • General
    • Errors are now recorded in cases where we had to create a new data store for Glean due to a failure (bug 1815253)
    • Update glean_parser to v11.0.0 (release notes)
    • Event metrics can now record a maximum of 50 keys in the event extra object (Bug 1869429)
  • iOS
    • Glean for iOS is now being built with Xcode 15.1 (#2669)
  • Android
    • Replaced whenTaskAdded with configureEach in GleanGradlePlugin to avoid unnecessary configuration. (#2697)

v56.0.0 (2023-11-30)

Full changelog

  • General
    • Updated to UniFFI 0.25.2 (#2678)
  • iOS
    • Dropped support for iOS < 15 (#2681)

v55.0.0 (2023-10-23)

  • Python
    • BREAKING CHANGE: Dropped support for Python 3.7 (#)

Full changelog

  • General
    • BREAKING CHANGE: Adding 0 to a counter or labeled_counter metric will be silently ignored instead of raising an invalid_value error (bug 1762859)
    • Trigger the uploader thread after scanning the pending pings directory (bug 1847950)
    • Extend start/stop time of a ping to millisecond precision. Custom pings can opt-out using precise_timestamps: false (#2456)
    • Update glean_parser to v10.0.0. Disallows the unit field for anything but quantity, disallows ping lifetime metrics on the events ping, and allows configuring precise timestamps in pings (release notes)
    • Add an API to set an Experimentation ID that will be annotated to all pings (Bug 1848201)

v54.0.0 (2023-09-12)

Full changelog

  • General
    • Experimental: Add configuration to add a wall clock timestamp to all events (#2513)
  • Python
    • Switched the build system to maturin. This should not have any effect on consumers. (#2345)
    • BREAKING CHANGE: Dropped support for Python 3.6 (#2345)
  • Kotlin
    • Update to Gradle v8.2.1 (#2516)
    • Increase Android compile SDK to version 34 (#2614)

v53.2.0 (2023-08-02)

Full changelog

  • General
    • Update glean_parser to v8.1.0. Subsequently, metric names now have a larger limit of 70 characters (release notes)
  • Rust
    • The Ping Rate Limit type is now accessible in the Rust Language Binding (#2528)
    • Gracefully handle a failure when starting the upload thread. Glean no longer crashes in that case. (#2545)
    • locale now exposed through the RLB so it can be set by consumers (#2531)
  • Python
    • Added the shutdown API for Python to ensure orderly shutdown and waiting for uploader processes (#2538)
  • Kotlin
    • Move running of upload task when Glean is running in a background service to use the internal Glean Dispatchers rather than WorkManager. Bug 1844533

v53.1.0 (2023-06-28)

Full changelog

  • General
    • Gracefully handle the waiting thread going away during shutdown (#2503)
    • Updated to UniFFI 0.24.1 (#2510)
    • Try blocking shutdown 10s for init to complete (#2518)
  • Android
    • Update minimum supported Java byte code generation to 17 (#2498)

v53.0.0 (2023-06-07)

Full changelog

  • General
    • Adds the capability to merge remote metric configurations, enabling multiple Nimbus Features or components to share this functionality (Bug 1833381)
    • StringList metric type limits have been increased. The length of strings allowed has been increased from 50 to 100 to match the String metric type, and the list length has been increased from 20 to 100 (Bug 1833870)
    • Make ping rate limiting configurable on Glean init. (bug 1647630)
  • Rust
    • Timing distribution traits now expose accumulate_samples and accumulate_raw_samples_nanos. This is a breaking change for consumers that make use of the trait as they will need to implement the new functions (Bug 1829745)
  • iOS
    • Make debugging APIs available on Swift (#2470)
    • Added a shutdown API for Swift. This should only be necessary for when Glean is running in a process other than the main process (like in the VPN daemon, for instance) (Bug 1832324)
    • Glean for iOS is now being built with Xcode 14.3 (#2253)

v52.7.0 (2023-05-10)

Full changelog

  • General
    • Allow user to configure how verbose the internal logging is (#2459)
    • Added a timeout waiting for the dispatcher at shutdown (#2461)
    • Added a new Glean metric glean.validation.shutdown_dispatcher_wait measuring the wait time at shutdown (#2461)
  • Kotlin
    • Update Kotlin to version 1.8.21 (#2462)
    • Make debugging APIs available on Android (Bug 1830937)

v52.6.0 (2023-04-20)

Full changelog

  • Rust
    • The Text metric type is now available in the Rust language bindings (#2451)

v52.5.0 (2023-04-11)

Full changelog

  • General
    • When Rkv detects a corrupted database, delete the old Rkv database, create a new one and log the error (#2425)
    • Add the Date header as late as possible before the uploader acts (#2436)
    • The logic of the Server Knobs API has been flipped. Instead of applying a list of metrics and their disabled state, the API now accepts a list of metrics and their enabled state (bug 1811253)
  • Kotlin
    • Adds the ability to record metrics on a non-main process. This is enabled by setting a dataPath in the Glean configuration (bug 1815233)
  • iOS
    • Adds the ability to record metrics on a non-main process. This is enabled by setting a dataPath in the Glean configuration (bug 1815233)

v52.4.3 (2023-03-24)

Full changelog

  • General
    • Expose Server Knobs functionality via UniFFI for use on mobile
  • iOS
    • BUGFIX: Prevent another test-only issue: The storage going away when the uploader reports back its status (#2430)

v52.4.2 (2023-03-15)

Full changelog

  • Rust
    • Revert to libstd's remove_dir_all instead of external crate (#2415)
  • Python
    • BUGFIX: Implement an empty shutdown function (#2417)

v52.4.1 (2023-03-10)

Full changelog

  • General
    • Update tempfile crate to remove dependency on potentially vulnerable version of remove_dir_all@0.5.3

v52.4.0 (2023-03-09)

Full changelog

  • General
  • Kotlin
    • Upgrade Android NDK to r25c (#2399)
  • iOS
    • BUGFIX: Reworking the HTTP uploader to avoid background uploading issues (Bug 1819161)

v52.3.1 (2023-03-01)

Full changelog

  • General
    • No functional change from v52.3.0, just CI updates.

v52.3.0 (2023-02-23)

Full changelog

  • General
    • Loosen label restrictions to "at most 71 characters of printable ASCII" (bug 1672273)
    • Introduced 2 new Glean health metrics: glean.upload.send_failure and glean.upload.send_success to measure the time for sending a ping (#2365)
    • Introduced a new Glean metric: glean.validation.shutdown_wait to measure the time Glean waits for the uploader on shutdown (#2365)
  • Rust
    • On shutdown wait up to 30s on the uploader to finish work (#2232)
  • iOS
    • BUGFIX: Avoid an invalid state (double-starting) for baseline.duration when Glean is first initialized (#2368)

v52.2.0 (2023-01-30)

Full changelog

  • General
    • Update to UniFFI 0.23 (#2338)

v52.1.1 (2023-01-26)

Full changelog

  • General
    • BUGFIX: Properly invoke the windows build number function from whatsys (bug 1812672)

v52.1.0 (2023-01-26)

Full changelog

  • General
    • BUGFIX: Custom Pings with events should no longer erroneously post InvalidState errors (bug 1811872)
    • Upgrade to glean_parser v7.0.0 (#2346)
  • Kotlin
    • Update to Gradle v7.6 (#2317)
  • Rust
    • Added a new client_info field windows_build_number (Windows only) (#2325)
    • A new ConfigurationBuilder allows to create the Glean configuration before initialization (#2313)
    • Drop dependency on env_logger for regular builds (#2312)

v52.0.1 (2023-01-19)

Full changelog

  • Android
    • The GleanDebugActivity can run without Glean being initialized (#2336)
  • Python
    • Ship universal2 (aarch64 + x86_64 in one) wheels (#2340)

v52.0.0 (2022-12-13)

Full changelog

  • General
    • Remove the metric glean.validation.first_run_hour. Note that this will mean no reason=upgrade metrics pings from freshly installed clients anymore. (#2271)
    • BEHAVIOUR CHANGE: Events in Custom Pings no longer trigger their submission. (bug 1716725)
      • Custom Pings with unsent events will no longer be sent at startup with reason startup.
      • glean.restarted events will be included in Custom Pings with other events to rationalize event timestamps across restarts.
    • test_reset_glean will remove all previous data if asked to clear stores, even if Glean has never been initialized (#2294)
    • Upgrade to glean_parser v6.5.0, with support for Cow in Rust code (#2300)
    • API REMOVED: The deprecated-since-v38 event metric record(map) API has been removed (bug 1802550)
    • BEHAVIOUR CHANGE: "events" pings will no longer be sent if they have metrics but no events (bug 1803513)
    • Experimental: Add functionality necessary to remotely configure the metric disabled property (bug 1798919)
      • This change has no effect when the API is not used and is transparent to consumers. The API is currently experimental because it is not stable and may change.
  • Rust
    • Static labels for labeled metrics are now Cow<'static, str> to reduce heap allocations (#2272)
    • NEW INTERNAL CONFIGURATION OPTION: trim_data_to_registered_pings will trim event storage to just the registered pings. Consult with the Glean Team before using. (bug 1804915)

v51.8.3 (2022-11-25)

Full changelog

  • General
    • Upgrade to rkv 0.18.3. This comes with a bug fix that ensures that interrupted database writes don't corrupt/truncate the database file (#2288)
  • iOS
    • Avoid building a dynamic library (#2285). Note: v51.8.1 and 51.8.2 are not working on iOS and will break the build due to accidentally including a link to a dynamic library.

v51.8.2 (2022-11-17)

Full changelog

  • General
    • BUGFIX: Reliably clear pending pings and events on Windows using remove_dir_all crate (bug 1801128)
    • Update to rkv v0.18.2 (#2270)

v51.8.1 (2022-11-15)

Full changelog

  • General
    • Do not serialize count field in distribution payload (#2267)
    • BUGFIX: The glean-core "metrics" ping scheduler will now schedule and send "upgrade"-reason pings. (bug 1800646)

v51.8.0 (2022-11-03)

Full changelog

  • General
    • Upgrade to glean_parser v6.3.0, increases the event extra limit to 15 (#2255)
    • Increase event extras value limit to 500 bytes (#2255)
  • Kotlin
    • Increase to Android target/compile SDK version 33 (#2246)
  • iOS
    • Try to avoid a crash by not invalidating upload sessions (#2254)

v51.7.0 (2022-10-25)

Full changelog

  • iOS
    • Glean for iOS is now being built with Xcode 13.4 again (#2242)
  • Rust
    • Add cargo feature preinit_million_queue to up the preinit queue length from 10^3 to 10^6 (bug 1796258)

v51.6.0 (2022-10-24)

Full changelog

  • General
    • The internal glean-core dispatch queue changed from bounded to unbounded, while still behaving as a bounded queue.
  • iOS
    • BUGFIX: Additional work to address an iOS crash due to an invalidated session (#2235)

v51.5.0 (2022-10-18)

Full changelog

  • General
    • Add count to DistributionData payload (#2196)
    • Update to UniFFI 0.21.0 (#2229)
  • Android
    • Synchronize AndroidX dependencies with AC (#2219)
    • Bump JNA to 5.12.1 (#2221)
  • iOS
    • Glean for iOS is now being built with Xcode 14.0 (#2188)

v51.4.0 (2022-10-04)

Full changelog

  • Kotlin
    • Update Kotlin and Android Gradle Plugin to the latest releases (#2211)
  • Swift
    • Fix for iOS startup crash caused by Glean (#2206)

v51.3.0 (2022-09-28)

Full changelog

  • General
    • Update URL metric character limit to 8k to support longer URLs. URLs that are too long now are truncated to MAX_URL_LENGTH and still recorded along with an Overflow error. (#2199)
  • Kotlin
    • Gradle plugin: Fix quoting issue in Python wrapper code (#2193)
    • Bumped the required Android NDK to version 25.1.8937393 (#2195)

v51.2.0 (2022-09-08)

Full changelog

  • General
    • Relax glean_parser version requirement. All "compatible releases" are now allowed (#2086)
    • Uploaders can now signal that they can't upload anymore (#2136)
    • Update UniFFI to version 0.19.6 (#2175)
  • Kotlin
    • BUGFIX: Re-enable correctly collecting glean.validation.foreground_count again (#2153)
    • BUGFIX: Gradle plugin: Correctly remove the version conflict check. Now the consuming module needs to ensure it uses a single version across all dependencies (#2155)
    • Upgrade dependencies and increase to Android target/compile SDK version 32 (#2150)
    • Upgrade Android NDK to r25 (#2159)
    • BUGFIX: Correctly set os_version and architecture again (#2174)
  • iOS
    • BUGFIX: Correctly set os_version and architecture again (#2174)
  • Python
    • BUGFIX: Correctly handle every string that represents a UUID, including non-hyphenated random 32-character strings (#2182)

v51.1.0 (2022-08-08)

Full changelog

  • General
    • BUGFIX: Handle that Glean might be uninitialized when an upload task is requested (#2131)
    • Updated the glean_parser to version 6.1.2
  • Kotlin
    • BUGFIX: When setting a local endpoint in testing check for testing mode, not initialization (#2145)
    • Gradle plugin: Remove the version conflict check. Now the consuming module needs to ensure it uses a single version across all dependencies (#2143)

v51.0.1 (2022-07-26)

Full changelog

  • General
    • BUGFIX: Set the following client_info fields correctly again: android_sdk_version, device_manufacturer, device_model, locale. These were never set in Glean v50.0.0 to v51.0.0 (#2131)

v51.0.0 (2022-07-22)

Full changelog

  • General
    • Remove testHasValue from all implementations. testGetValue always returns a null value (null, nil, None depending on the language) and does not throw an exception (#2087).
    • BREAKING CHANGE: Dropped ping_name argument from all test_get_num_recorded_errors methods (#2088)
      Errors default to the metrics ping, so that's what is queried internally.
    • BREAKING: Disable safe-mode everywhere. This causes all clients to migrate from LMDB to safe-mode storage (#2123)
  • Kotlin
    • Fix the Glean Gradle Plugin to work with Android Gradle Plugin v7.2.1 (#2114)
  • Rust
    • Add a method to construct an Event with runtime-known allowed extra keys. (bug 1767037)

v50.1.4 (2022-08-01)

Full changelog

  • General
    • BUGFIX: Handle that Glean might be uninitialized when an upload task is requested (#2131)

v50.1.3 (2022-07-26)

Full changelog

  • General
    • BUGFIX: Set the following client_info fields correctly again: android_sdk_version, device_manufacturer, device_model, locale. These were never set in Glean v50.0.0 to v51.0.0 (#2131)

v50.1.2 (2022-07-08)

Full changelog

  • General
    • Update UniFFI to version 0.19.3
    • Fix rust-beta-tests linting

v50.1.1 (2022-06-17)

Full changelog

  • Kotlin
    • Fix bug in Glean Gradle plugin by using correct quoting in embedded Python script (#2097)
    • Fix bug in Glean Gradle plugin by removing references to Linux paths (#2098)

v50.1.0 (2022-06-15)

Full changelog

  • General
    • Updated to glean_parser v6.1.1 (#2092)
  • Swift
    • Dropped usage of Carthage for internal dependencies (#2089)
    • Implement the text metric (#2073)
  • Kotlin
    • Implement the text metric (#2073)
  • Rust
    • Derive serde::{Deserialize, Serialize} on Lifetime and CommonMetricData (bug 1772156)

v50.0.1 (2022-05-25)

Full changelog

  • General
    • Updated to glean_parser v6.0.1
  • Python
    • Remove duplicate log initialization and prevent crash (#2064)

v50.0.0 (2022-05-20)

Full changelog

This release is a major refactoring of the internals and contains several breaking changes to exposed APIs. Exposed functionality should be unaffected. See below for details.

  • General
    • Switch to UniFFI-defined and -generated APIs for all 3 foreign-language SDKs
    • The task dispatcher has been moved to Rust for all foreign-language SDKs
    • Updated to glean_parser v6.0.0
  • Swift
    • testGetValue on all metric types now returns nil when no data is recorded instead of throwing an exception.
    • testGetValue on metrics with more complex data now return new objects for inspection. See the respective documentation for details.
    • testHasValue on all metric types is deprecated. It is currently still available as extension methods. Use testGetValue with not-null checks.
  • Kotlin
    • testGetValue on all metric types now returns null when no data is recorded instead of throwing an exception.
    • testGetValue on metrics with more complex data now return new objects for inspection. See the respective documentation for details.
    • testHasValue on all metric types is deprecated. It is currently still available as extension methods and thus require an additional import. Use testGetValue with not-null checks.
    • On TimingDistributionMetric, CustomDistributionMetric, MemoryDistributionMetric the accumulateSamples method now takes a List<Long> instead of LongArray. Use listOf instead of longArrayOf or call .toList
    • TimingDistributionMetricType.start now always returns a valid TimerId; TimingDistributionMetricType.stopAndAccumulate always requires a TimerId.
  • Python
    • test_get_value on all metric types now returns None when no data is recorded instead of throwing an exception.
    • test_has_value on all metric types was removed. Use test_get_value with not-null checks.

v44.2.0 (2022-05-16)

Full changelog

  • General
    • The glean.error.preinit_tasks_overflow metric now reports only the number of overflowing tasks. It is marked as version 1 in the definition now. (#2026)
  • Kotlin
    • (Development only) Allow to override the used glean_parser in the Glean Gradle Plugin (#2029)
    • setSourceTags is now a public API (#2035)
  • iOS
    • setSourceTags is now a public API (#2035)
  • Rust
    • Implemented try_get_num_recorded_errors for Boolean in Rust Language Bindings (#2049)

v44.1.1 (2022-04-14)

Full changelog

  • Rust
    • Raise the global dispatcher queue limit from 100 to 1000 tasks. (bug 1764549)
  • iOS
    • Enable expiry by version in the sdk_generator.sh script (#2013)

v44.1.0 (2022-04-06)

Full changelog

  • Android
    • The glean-native-forUnitTests now ships with separate libraries for macOS x86_64 and macOS aarch64 (#1967)
  • Rust
    • Glean will no longer overwrite the User-Agent header, but instead send that information as X-Telemetry-Agent (bug 1711928)

v44.0.0 (2022-02-09)

  • General
    • BREAKING CHANGE: Updated glean_parser version to 5.0.1 (#1852). This update drops support for generating C# specific metrics API.
  • Rust
    • Ensure test-only destroy_glean() handles initialize() having started but not completed (bug 1750235)
  • Swift
    • Dropping support of the Carthage-compatible framework archive (#1943). The Swift Package (https://github.com/mozilla/glean-swift) is the recommended way of consuming Glean iOS.
  • Python
    • BUGFIX: Datetime metrics now correctly record the local timezone (#1953).

Full changelog

v43.0.2 (2022-01-17)

Full changelog

  • General
    • Fix artifact publishing properly (#1930)

v43.0.1 (2022-01-17)

Full changelog

  • General
    • Fix artifact publishing (#1930)

v43.0.0 (2022-01-17)

Full changelog

  • General
    • Removed invalid_timezone_offset metric (#1923)
  • Python
    • It is now possible to emit log messages from the networking subprocess by using the new log_level parameter to Glean.initialize. (#1918)
  • Kotlin
    • Automatically pass build date as part of the build info (#1917)
  • iOS
    • BREAKING CHANGE: Pass build info into initialize, which contains the build date (#1917). A suitable instance is generated by glean_parser in GleanMetrics.GleanBuild.info.

v42.3.2 (2021-12-15)

Full changelog

  • Python
    • Reuse existing environment when launching subprocess (#1908)

v42.3.1 (2021-12-07)

Full changelog

  • iOS
    • Fix Carthage archive release (#1891)

v42.3.0 (2021-12-07)

Full changelog

  • Rust
    • BUGFIX: Correct category & name for preinit_tasks_overflow metric. Previously it would have been wrongly recorded as preinit_tasks_overflow.glean.error (#1887)
    • BUGFIX: Fix to name given to the events ping when instantiated (#1885)
  • iOS
    • BUGFIX: Make fields of RecordedEventData publicly accessible (#1867)
    • Skip code generation in indexbuild build (#1889)
  • Python
    • Don't let environment affect subprocess module search path (#1542)

v42.2.0 (2021-11-03)

Full changelog

  • General
    • Updated glean_parser version to 4.3.1 (#1852)
  • Android
    • Automatic detection of tags.yaml files (#1852)

v42.1.0 (2021-10-18)

Full changelog

  • Rust
    • Backwards-compatible API Change: Make experiment test APIs public. (#1834)

v42.0.1 (2021-10-11)

Full changelog

  • General
    • BUGFIX: Avoid a crash when accessing labeled metrics by caching created objects (#1823).
  • Python
    • Glean now officially supports Python 3.10 (#1818)

v42.0.0 (2021-10-06)

Full changelog

  • Android
    • Updated to Gradle 7, Android Gradle Plugin 7 and Rust Android Plugin 0.9 as well as building with Java 11 (#1801)
  • iOS
    • Add support for the URL metric type (#1791)
    • Remove reliance on Operation for uploading and instead use the background capabilities of URLSession (#1783)
    • Glean for iOS is now being built with Xcode 13.0.0 (#1802).
  • Rust
    • BUGFIX: No panic if trying to flush ping-lifetime data after shutdown (#1800)
    • BREAKING CHANGE: glean::persist_ping_lifetime_data is now async (#1812)

v41.1.1 (2021-09-29)

Full changelog

  • Android
    • BUGFIX: Limit logging to Glean crates (#1808)

v41.1.0 (2021-09-16)

Full changelog

  • Rust
    • BUGFIX: Ensure RLB persists ping lifetime data on shutdown (#1793)
    • Expose persist_ping_lifetime_data in the RLB. Consumers can call this to persist data at convenient times, data is also persisted on shutdown (#1793)

v41.0.0 (2021-09-13)

Full changelog

  • General
    • BUGFIX: Only clear specified storage in delayed ping io mode (#1782)
    • Require Rust >= 1.53.0 (#1782)
  • Android
    • Glean.initialize now requires a buildInfo parameter to pass in build time version information. A suitable instance is generated by glean_parser in ${PACKAGE_ROOT}.GleanMetrics.GleanBuildInfo.buildInfo. Support for not passing in a buildInfo object has been removed. (#1752)

v40.2.0 (2021-09-08)

Full changelog

  • General
    • Updated glean_parser version to 4.0.0
  • Android
    • Add support for the URL metric type (#1778)
  • Rust
    • Add support for the URL metric type (#1778)
  • Python
    • Add support for the URL metric type (#1778)

v40.1.1 (2021-09-02)

Full changelog

  • iOS
    • Use 'Unknown' value if system data can't be decoded as UTF-8 (#1769)
    • BUGFIX: Add quantity metric type to the build (#1774). Previous builds were unable to use quantity metrics.

v40.1.0 (2021-08-25)

Full changelog

  • Android
    • Updated to Kotlin 1.5, Android Gradle Plugin 4.2.2 and Gradle 6.7.1 (#1747)
    • The glean-gradle-plugin now forces a compile failure when multiple Glean versions are detected in the build (#1756)
    • The glean-gradle-plugin does not enable a glean-native capability on GeckoView anymore. That will be done by GeckoView directly (#1759)

v40.0.0 (2021-07-28)

Full changelog

  • Android
    • Breaking Change: Split the Glean Kotlin SDK into two packages: glean and glean-native (#1595). Consumers will need to switch to org.mozilla.telemetry:glean-native-forUnitTests. Old code in build.gradle:

      testImplementation "org.mozilla.telemetry:glean-forUnitTests:${project.ext.glean_version}"
      

      New code in build.gradle:

      testImplementation "org.mozilla.telemetry:glean-native-forUnitTests:${project.ext.glean_version}"
      
    • The glean-gradle-plugin now automatically excludes the glean-native dependency if geckoview-omni is also part of the build. Glean native functionality will be provided by the geckoview-omni package.

  • Rust
    • The glean-ffi is no longer compiled as a cdylib. Other language SDKs consume glean-bundle instead as a cdylib. This doesn't affect consumers.

v39.1.0 (2021-07-26)

Full changelog

  • General
    • Updated glean_parser version to 3.6.0
    • Allow Custom Distribution metric type on all platforms (#1679)

v39.0.4 (2021-07-26)

Full changelog

  • General
    • Extend invalid_timezone_offset metric until the end of the year (#1697)

v39.0.3 (2021-06-09)

Full changelog

  • Android
    • Unbreak Event#record API by accepting null on the deprecated API. The previous 39.0.0 release introduced the new API, but accidentally broke certain callers that just forward arguments. This restores passing null (or nothing) when using the old API. It remains deprecated.

v39.0.2 (2021-06-07)

Full changelog

v39.0.1 (2021-06-04)

Full changelog

  • iOS
    • Build and release Glean as an xcframework (#1663) This will now also auto-update the Glean package at https://github.com/mozilla/glean-swift.

v39.0.0 (2021-05-31)

Full changelog

  • General
    • Add new event extras API to all implementations. See below for details (#1603)
    • Updated glean_parser version to 3.4.0 (#1603)
  • Rust
    • Breaking Change: Allow event extras to be passed as an object. This replaces the old HashMap-based API. Values default to string. See the event documentation for details. (#1603) Old code:

      let mut extra = HashMap::new();
      extra.insert(SomeExtra::Key1, "1".into());
      extra.insert(SomeExtra::Key2, "2".into());
      metric.record(extra);
      

      New code:

      let extra = SomeExtra {
          key1: Some("1".into()),
          key2: Some("2".into()),
      };
      metric.record(extra);
      
  • Android
    • Deprecation: The old event recording API is replaced by a new one, accepting a typed object (#1603). See the event documentation for details.
    • Skip build info generation for libraries (#1654)
  • Python
    • Deprecation: The old event recording API is replaced by a new one, accepting a typed object (#1603). See the event documentation for details.
  • Swift
    • Deprecation: The old event recording API is replaced by a new one, accepting a typed object (#1603). See the event documentation for details.

v38.0.1 (2021-05-17)

Full changelog

  • General
    • BUGFIX: Invert the lock order in glean-core's metrics ping scheduler (#1637)

v38.0.0 (2021-05-12)

Full changelog

  • General
    • Update documentation to recommend using Glean Dictionary instead of metrics.md (#1604)
  • Rust
    • Breaking Change: submit_ping no longer returns a Result; its boolean return value indicates whether a ping was submitted (#1613)
    • Breaking Change: Glean now schedules "metrics" pings, accepting a new Configuration parameter. (#1599)
    • Dispatch setting the source tag to avoid a potential crash (#1614)
    • Testing mode will wait for init & upload tasks to finish (#1628)
  • Android
    • Set required fields for client_info before optional ones (#1633)
    • Provide forward-compatibility with Gradle 6.8 (#1616)

v37.0.0 (2021-04-30)

Full changelog

  • General
    • Breaking Change: "deletion-request" pings now include the reason upload was disabled: at_init (Glean detected a change between runs) or set_upload_enabled (Glean was told of a change as it happened). (#1593).
    • Attempt to upload a ping even in the face of IO Errors (#1576).
    • Implement an additional check to avoid crash due to faulty timezone offset (#1581)
      • This now records a new metric glean.time.invalid_timezone_offset, counting how often we failed to get a valid timezone offset.
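The check can be sketched as a stand-alone function (illustrative only; the function name and the exact accepted range are assumptions, not glean-core's actual code):

```python
from datetime import datetime, timedelta, timezone

def is_valid_timezone_offset(offset: timedelta) -> bool:
    # Sanity check: real-world UTC offsets fall within -12h..+14h, so
    # anything outside that range points at a faulty system clock/libc.
    return timedelta(hours=-12) <= offset <= timedelta(hours=14)

ok = is_valid_timezone_offset(datetime.now(timezone.utc).utcoffset())
bad = is_valid_timezone_offset(timedelta(hours=26))  # corrupt offset
```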
    • Use proper paths throughout to hopefully handle non-UTF-8 paths more gracefully (#1596)
    • Updated glean_parser version to 3.2.0 (#1609)
  • iOS
    • Code generator: Ensure at least pip 20.3 is available in iOS build (#1590)

v36.0.1 (2021-04-09)

Full changelog

  • RLB
    • Provide an internal-use-only API to pass in raw samples for timing distributions (#1561).
    • Expose Timespan's set_raw to Rust (#1578).
  • Android
    • BUGFIX: TimespanMetricType.measure and TimingDistributionMetricType.measure won't get inlined anymore (#1560). This avoids a potential bug where a return used inside the closure would end up not measuring the time. Use return@measure <val> for early returns.
  • Python
    • The Glean Python bindings now use rkv's safe mode backend. This should avoid intermittent segfaults in the LMDB backend.

v36.0.0 (2021-03-16)

Full changelog

  • General
    • Introduce a new API Ping#test_before_next_submit to run a callback right before a custom ping is submitted (#1507).
      • The new API exists for all language bindings (Kotlin, Swift, Rust, Python).
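The callback-before-submit pattern can be sketched with a minimal stand-in (a hypothetical Ping class, not the SDK's own implementation):

```python
class Ping:
    """Minimal sketch of a ping with a test_before_next_submit hook."""

    def __init__(self):
        self._callback = None

    def test_before_next_submit(self, callback):
        # Register a one-shot callback to run right before submission.
        self._callback = callback

    def submit(self, reason=None):
        if self._callback is not None:
            self._callback(reason)
            self._callback = None

seen = []
ping = Ping()
ping.test_before_next_submit(seen.append)
ping.submit(reason="background")  # the callback observes the reason
```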
    • Updated glean_parser version to 2.5.0
    • Change the fmt- and lint- make commands for consistency (#1526)
    • The Glean SDK can now produce testing coverage reports for your metrics (#1482).
  • Python
    • Update minimal required version of cffi dependency to 1.13.0 (#1520).
    • Ship wheels for arm64 macOS (#1534).
  • RLB
    • Added rate metric type (#1516).
    • Set internal_metrics::os_version for macOS, Windows and Linux (#1538)
    • Expose a function get_timestamp_ms to get a timestamp from a monotonic clock on all supported operating systems, to be used for event timestamps (#1546).
    • Expose a function to record events with an externally provided timestamp.
  • iOS
    • Breaking Change: Event timestamps are now correctly recorded in milliseconds (#1546).
      • Since the first release event timestamps were erroneously recorded with nanosecond precision (#1549). This is now fixed and event timestamps are in milliseconds. This is equivalent to how it works in all other language bindings.

v35.0.0 (2021-02-22)

Full changelog

  • Android
    • Glean.initialize can now take a buildInfo parameter to pass in build time version information, and avoid calling out to the Android package manager at runtime. A suitable instance is generated by glean_parser in ${PACKAGE_ROOT}.GleanMetrics.GleanBuildInfo.buildInfo (#1495). Not passing in a buildInfo object is still supported, but is deprecated.
    • The testGetValue APIs now include a message on the NullPointerException thrown when the value is missing.
    • Breaking change: LEGACY_TAG_PINGS is removed from GleanDebugActivity (#1510)
  • RLB
    • Breaking change: Configuration.data_path is now a std::path::PathBuf (#1493).

v34.1.0 (2021-02-04)

Full changelog

  • General
    • A new metric glean.validation.pings_submitted tracks the number of pings sent. It is included in both the metrics and baseline pings.
  • iOS
    • The metric glean.validation.foreground_count is now sent in the metrics ping (#1472).
    • BUGFIX: baseline pings with reason dirty_startup are no longer sent if Glean did not fully initialize in the previous run (#1476).
  • Python
    • Expose the client activity API (#1481).
    • BUGFIX: Publish a macOS wheel again. The previous release failed to build a Python wheel for macOS platforms (#1471).
  • RLB
    • BUGFIX: baseline pings with reason dirty_startup are no longer sent if Glean did shutdown cleanly (#1483).

v34.0.0 (2021-01-29)

Full changelog

  • General
    • Other bindings detect when RLB is used and try to flush the RLB dispatcher to unblock the Rust API (#1442).
      • This is detected automatically, no changes needed for consuming code.
    • Add support for the client activity API (#1455). This API is either automatically used or exposed by the language bindings.
    • Rename the reason background to inactive for both the baseline and events ping. Rename the reason foreground to active for the baseline ping.
  • RLB
    • When the pre-init task queue overruns, this is now recorded in the metric glean.error.preinit_tasks_overflow (#1438).
    • Expose the client activity API (#1455).
    • Send the baseline ping with reason dirty_startup, if needed, at startup.
    • Expose all required types directly (#1452).
      • Rust consumers will not need to depend on glean-core anymore.
  • Android
    • BUGFIX: Don't crash the ping uploader when throttled due to reading too large wait time values (#1454).
    • Use the client activity API (#1455).
    • Update AndroidX dependencies (#1441).
  • iOS
    • Use the client activity API (#1465). Note: this now introduces a baseline ping with reason active on startup.

v33.10.3 (2021-01-18)

Full changelog

  • Rust
    • Upgrade rkv to 0.17 (#1434)

v33.10.2 (2021-01-15)

Full changelog

  • General:
    • A new metric glean.error.io has been added, counting the times an IO error happens when writing a pending ping to disk (#1428)
  • Android
    • A new metric glean.validation.foreground_count was added to the metrics ping (#1418).
  • Rust
    • BUGFIX: Fix lock order inversion in RLB Timing Distribution (#1431).
    • Use RLB types instead of glean-core ones for RLB core metrics. (#1432).

v33.10.1 (2021-01-06)

Full changelog

No functional changes. v33.10.0 failed to generate iOS artifacts due to broken tests (#1421).

v33.10.0 (2021-01-06)

Full changelog

  • General
    • A new metric glean.validation.first_run_hour, analogous to the existing first_run_date but with hour resolution, has been added. Only clients running the app for the first time after this change will report this metric (#1403).
  • Rust
    • BUGFIX: Don't require mutable references in RLB traits (#1417).
  • Python
    • Building the Python package from source now works on musl-based Linux distributions, such as Alpine Linux (#1416).

v33.9.1 (2020-12-17)

Full changelog

  • Rust
    • BUGFIX: Don't panic on shutdown and avoid running tasks if uninitialized (#1398).
    • BUGFIX: Don't fail on empty database files (#1398).
    • BUGFIX: Support ping registration before Glean initializes (#1393).

v33.9.0 (2020-12-15)

Full changelog

  • Rust
    • Introduce the String List metric type in the RLB. (#1380).
    • Introduce the Datetime metric type in the RLB (#1384).
    • Introduce the CustomDistribution and TimingDistribution metric types in the RLB (#1394).

v33.8.0 (2020-12-10)

Full changelog

  • Rust
    • Introduce the Memory Distribution metric type in the RLB. (#1376).
    • Shut down Glean in tests before resetting to make sure they don't mistakenly init Glean twice in parallel (#1375).
    • BUGFIX: Fix two lock-order-inversion bugs found by TSan (#1378).
      • TSan runs on mozilla-central tests, which found two (potential) bugs where two different locks were acquired in opposite order in different code paths, which could lead to deadlocks in multi-threaded code. As the RLB uses multiple threads by default (e.g. for init and the dispatcher), this could easily become an actual issue.
  • Python
    • All log messages from the Glean SDK are now on the glean logger, obtainable through logging.getLogger("glean"). (Prior to this, each module had its own logger, for example glean.net.ping_upload_worker).
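With a single glean logger, one configuration call now covers every Glean module; for example, to silence anything below WARNING:

```python
import logging

# All Glean SDK messages flow through the "glean" logger,
# so a single setLevel call affects every Glean module.
logging.getLogger("glean").setLevel(logging.WARNING)

# Child loggers such as "glean.net.ping_upload_worker" inherit the level.
effective = logging.getLogger("glean.net.ping_upload_worker").getEffectiveLevel()
```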

v33.7.0 (2020-12-07)

Full changelog

  • Rust
    • Upgrade rkv to 0.16.0 (no functional changes) (#1355).
    • Introduce the Event metric type in the RLB (#1361).
  • Python
    • Python Linux wheels no longer work on Linux distributions released before 2014 (they now use the manylinux2014 ABI) (#1353).
    • Unbreak Python on non-Linux ELF platforms (BSD, Solaris/illumos) (#1363).

v33.6.0 (2020-12-02)

Full changelog

  • Rust
    • BUGFIX: Negative timespans for the timespan metric now correctly record an InvalidValue error (#1347).
    • Introduce the Timespan metric type in the RLB (#1347).
  • Python
    • BUGFIX: Network slowness or errors will no longer block the main dispatcher thread, leaving work undone on shutdown (#1350).
    • BUGFIX: Lower sleep time on upload waits to avoid being stuck when the main process ends (#1349).

v33.5.0 (2020-12-01)

Full changelog

  • Rust
    • Introduce the UUID metric type in the RLB.
    • Introduce the Labeled metric type in the RLB (#1327).
    • Introduce the Quantity metric type in the RLB.
    • Introduce the shutdown API.
    • Add Glean debugging APIs.
  • Python
    • BUGFIX: Setting a UUID metric to a value that is not in the expected UUID format will now record an error with the Glean error reporting system.

v33.4.0 (2020-11-17)

Full changelog

  • General
    • When Rkv's safe mode is enabled (features = ["rkv-safe-mode"] on the glean-core crate) LMDB data is migrated at first start (#1322).
  • Rust
    • Introduce the Counter metric type in the RLB.
    • Introduce the String metric type in the RLB.
    • BUGFIX: Track the size of the database directory at startup (#1304).
  • Python
    • BUGFIX: Fix too-long sleep time in uploader due to unit mismatch (#1325).
  • Swift
    • BUGFIX: Fix too-long sleep time in uploader due to unit mismatch (#1325).

v33.3.0 (2020-11-12)

Full changelog

  • General
    • Do not require default-features on rkv and downgrade bincode (#1317)
  • Rust
    • Implement the experiments API (#1314)

v33.2.0 (2020-11-10)

Full changelog

  • Python
    • Fix building of Linux wheels (#1303)
      • Python Linux wheels no longer work on Linux distributions released before 2010. (They now use the manylinux2010 ABI, rather than the manylinux1 ABI.)
  • Rust
    • Introduce the RLB net module (#1292)

v33.1.2 (2020-11-04)

Full changelog

  • No changes. v33.1.1 was tagged incorrectly.

v33.1.1 (2020-11-04)

Full changelog

  • No changes. v33.1.0 was tagged incorrectly.

v33.1.0 (2020-11-04)

Full changelog

  • General
    • Standardize throttle backoff time throughout all bindings. (#1240)
    • Update glean_parser to 1.29.0
      • Generated code now includes a comment next to each metric containing the name of the metric in its original snake_case form.
    • Expose the description of the metric types in glean_core using traits.
  • Rust
    • Add the BooleanMetric type.
    • Add the dispatcher module (copied over from mozilla-central).
    • Allow consumers to specify a custom uploader.
  • Android
    • Update the JNA dependency from 5.2.0 to 5.6.0
    • The glean-gradle-plugin now makes sure that only a single Miniconda installation will happen at the same time to avoid a race condition when multiple components within the same project are using Glean.

v33.0.4 (2020-09-28)

Full changelog

Note: Previous 33.0.z releases were broken. This release now includes all changes from 33.0.0 to 33.0.3.

  • General
    • Update glean_parser to 1.28.6
      • BUGFIX: Ensure Kotlin arguments are deterministically ordered
  • Android
    • Breaking change: Updated to the Android Gradle Plugin v4.0.1 and Gradle 6.5.1. Projects using older versions of these components will need to update in order to use newer versions of the Glean SDK.
    • Update the Kotlin Gradle Plugin to version 1.4.10.
    • Fixed the building of .aar releases on Android so they include the Rust shared objects.

v33.0.3 (2020-09-25)

Full changelog

  • General
    • v33.0.2 was tagged incorrectly. This release is just to correct that mistake.

v33.0.2 (2020-09-25)

Full changelog

  • Android
    • Fixed the building of .aar releases on Android so they include the Rust shared objects.

v33.0.1 (2020-09-24)

Full changelog

  • General
    • Update glean_parser to 1.28.6
      • BUGFIX: Ensure Kotlin arguments are deterministically ordered
  • Android
    • Update the Kotlin Gradle Plugin to version 1.4.10.

v33.0.0 (2020-09-22)

Full changelog

  • Android
    • Breaking change: Updated to the Android Gradle Plugin v4.0.1 and Gradle 6.5.1. Projects using older versions of these components will need to update in order to use newer versions of the Glean SDK.

v32.4.1 (2020-10-01)

Full changelog

  • General
    • Update glean_parser to 1.28.6
      • BUGFIX: Ensure Kotlin arguments are deterministically ordered
    • BUGFIX: Transform ping directory size from bytes to kilobytes before accumulating to glean.upload.pending_pings_directory_size (#1236).

v32.4.0 (2020-09-18)

Full changelog

  • General
    • Allow using quantity metric type outside of Gecko (#1198)
    • Update glean_parser to 1.28.5
      • The SUPERFLUOUS_NO_LINT warning has been removed from the glinter. It likely did more harm than good, and makes it hard to make metrics.yaml files that pass across different versions of glean_parser.
      • Expired metrics will now produce a linter warning, EXPIRED_METRIC.
      • Expiry dates that are more than 730 days (~2 years) in the future will produce a linter warning, EXPIRATION_DATE_TOO_FAR.
      • Allow using the Quantity metric type outside of Gecko.
      • New parser configs custom_is_expired and custom_validate_expires added. These are both functions that take the expires value of the metric and return a bool. (See Metric.is_expired and Metric.validate_expires). These will allow FOG to provide custom validation for its version-based expires values.
    • Add a limit of 250 pending ping files. (#1217).
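The cap on pending ping files can be illustrated with a stand-alone sketch (the function name and the delete-oldest-first policy are assumptions, not glean-core's exact behavior):

```python
import tempfile
from pathlib import Path

def enforce_pending_pings_limit(pings_dir: Path, limit: int = 250) -> int:
    # Keep at most `limit` pending ping files, dropping the oldest first.
    pings = sorted(pings_dir.iterdir(), key=lambda p: p.stat().st_mtime)
    excess = pings[: max(0, len(pings) - limit)]
    for ping in excess:
        ping.unlink()
    return len(excess)

with tempfile.TemporaryDirectory() as d:
    pings_dir = Path(d)
    for i in range(253):
        (pings_dir / f"ping-{i:04}.json").write_text("{}")
    removed = enforce_pending_pings_limit(pings_dir)
    remaining = len(list(pings_dir.iterdir()))
```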
  • Android
    • Don't retry the ping uploader when waiting, sleep instead. This avoids a never-ending increase of the backoff time (#1217).

v32.3.2 (2020-09-11)

Full changelog

  • General
    • Track the size of the database file at startup (#1141).
    • Submitting a ping with upload disabled no longer shows an error message (#1201).
    • BUGFIX: Scan the pending pings directories after dealing with upload status on initialization. This matters because, when upload is disabled, Glean deletes any outstanding non-deletion ping file; scanning the pending pings folder before doing that could end up sending pings that should have been discarded. (#1205)
  • iOS
    • Disabled code coverage in release builds (#1195).
  • Python
    • Glean now ships a source package to pip install on platforms where wheels aren't provided.

v32.3.1 (2020-09-09)

Full changelog

  • Python
    • Fixed the release process to generate all wheels (#1193).

v32.3.0 (2020-08-27)

Full changelog

  • Android
    • Handle ping registration off the main thread. This removes a potential blocking call (#1132).
  • iOS
    • Handle ping registration off the main thread. This removes a potential blocking call (#1132).
    • Glean for iOS is now being built with Xcode 12.0.0 (Beta 5) (#1170).

v32.2.0 (2020-08-25)

Full changelog

  • General
    • Move logic to limit the number of retries on ping uploading "recoverable failures" to glean-core. (#1120)
      • The functionality to limit the number of retries in these cases was introduced to the Glean SDK in v31.1.0. That logic has now been moved into glean-core to avoid code duplication throughout the language bindings.
    • Update glean_parser to v1.28.3
      • BUGFIX: Generate valid C# code when using Labeled metric types.
      • BUGFIX: Support HashSet and Dictionary in the C# generated code.
    • Add a 10MB quota to the pending pings storage. (#1100)
  • C#
    • Add support for the String List metric type (#1108).
    • Enable generating the C# APIs using the glean_parser (#1092).
    • Add support for the EventMetricType in C# (#1129).
    • Add support for the TimingDistributionMetricType in C# (#1131).
    • Implement the experiments API in C# (#1145).
    • This is the last release with C# language bindings changes. Reach out to the Glean SDK team if you want to use the C# bindings in a new product and require additional features.
  • Python
    • BUGFIX: Limit the number of retries for 5xx server errors on ping uploads (#1120).
      • These kinds of failures yield a "recoverable error", which means the ping gets re-enqueued. That can cause infinite loops on the ping upload worker. For Python we were incorrectly only limiting the number of retries for I/O errors, another type of "recoverable error".
    • kebab-case ping names are now converted to snake_case so they are available on the object returned by load_pings (#1122).
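The name conversion can be sketched as follows (the helper name is hypothetical):

```python
def kebab_to_snake(name: str) -> str:
    # "deletion-request" is not a valid Python attribute name, so ping
    # names are converted before being exposed on the load_pings object.
    return name.replace("-", "_")

converted = kebab_to_snake("deletion-request")
```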
    • For performance reasons, the glinter is no longer run as part of glean.load_metrics(). We recommend running glinter as part of your project's continuous integration instead (#1124).
    • A measure context manager for conveniently measuring runtimes has been added to TimespanMetricType and TimingDistributionMetricType (#1126).
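The pattern behind measure can be sketched with the stdlib (illustrative only; the real API records into the metric rather than invoking a callback):

```python
import time
from contextlib import contextmanager

@contextmanager
def measure(record):
    # Time the enclosed block on a monotonic clock and hand the
    # elapsed nanoseconds to `record`, even if the block raises.
    start = time.monotonic_ns()
    try:
        yield
    finally:
        record(time.monotonic_ns() - start)

samples = []
with measure(samples.append):
    sum(range(1000))
```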
    • Networking errors have changed from ERROR level to DEBUG level so they aren't displayed by default (#1166).
  • iOS
    • Changed logging to use OSLog rather than a mix of NSLog and print. (#1133)

v32.1.1 (2020-08-24)

Full changelog

v32.1.0 (2020-08-17)

Full changelog

  • General
    • The upload rate limiter has been changed from 10 pings per minute to 15 pings per minute.
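Conceptually this is a per-window cap on uploads; a fixed-window sketch (not glean-core's actual implementation):

```python
class PingRateLimiter:
    """Allow at most `limit` ping uploads per `window` seconds (sketch)."""

    def __init__(self, limit: int = 15, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.window_start = 0.0
        self.count = 0

    def allow(self, now: float) -> bool:
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0  # open a new window
        if self.count < self.limit:
            self.count += 1
            return True
        return False

limiter = PingRateLimiter()
first_minute = [limiter.allow(now=1.0) for _ in range(16)]
later = limiter.allow(now=61.5)  # a new window opens
```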

v32.0.0 (2020-08-03)

Full changelog

  • General
    • Limit ping request body size to 1MB. (#1098)
  • iOS
    • Implement ping tagging (i.e. the X-Source-Tags header) through custom URL (#1100).
  • C#
    • Add support for Labeled Strings and Labeled Booleans.
    • Add support for the Counter metric type and Labeled Counter.
    • Add support for the MemoryDistributionMetricType.
  • Python
    • Breaking change: data_dir must always be passed to Glean.initialize. Prior to this, a missing value would store Glean data in a temporary directory.
    • Logging messages from the Rust core are now sent through Python's standard library logging module. Therefore all logging in a Python application can be controlled through the logging module interface.
  • Android
    • BUGFIX: Require activities executed via GleanDebugView to be exported.

v31.6.0 (2020-07-24)

Full changelog

  • General
    • Implement JWE metric type (#1073, #1062).
    • DEPRECATION: getUploadEnabled is deprecated (respectively get_upload_enabled in Python) (#1046)
      • Due to Glean's asynchronous initialization the return value can be incorrect. Applications should not rely on Glean's internal state. Upload enabled status should be tracked by the application and communicated to Glean if it changes. Note: The method was removed from the C# and Python implementations.
    • Update glean_parser to v1.28.1
      • The glean_parser linting was leading consumers astray by incorrectly suggesting that deletion-request be instead deletion_request when used for send_in_pings. This was causing metrics intended for the deletion-request ping to not be included when it was collected and submitted. Consumers that are sending metrics in the deletion-request ping will need to update the send_in_pings value in their metrics.yaml to correct this.
      • Fixes a bug in doc rendering.

v31.5.0 (2020-07-22)

Full changelog

  • General
    • Implement ping tagging (i.e. the X-Source-Tags header) (#1074). Note that this is not yet implemented for iOS.
    • String values that are too long now record invalid_overflow rather than invalid_value through the Glean error reporting mechanism. This affects the string, event and string list metrics.
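The overflow handling can be sketched like this (illustrative; the limit value and the helper name are assumptions):

```python
MAX_STRING_LENGTH = 100  # illustrative, not the SDK's exact limit

def validate_string(value: str):
    # Over-long values are truncated and the metric records an
    # invalid_overflow error instead of invalid_value.
    if len(value) > MAX_STRING_LENGTH:
        return value[:MAX_STRING_LENGTH], "invalid_overflow"
    return value, None

truncated, error = validate_string("x" * 200)
```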
    • metrics.yaml files now support a data_sensitivity field to all metrics for specifying the type of data collected in the field.
  • Python
    • The Python unit tests no longer send telemetry to the production telemetry endpoint.
    • BUGFIX: If an application_version isn't provided to Glean.initialize, the client_info.app_display_version metric is set to "Unknown", rather than resulting in invalid pings.
  • Android
    • Allow defining which Activity to run next when using the GleanDebugActivity.
  • iOS
    • BUGFIX: The memory unit is now correctly set on the MemoryDistribution metric type in Swift in generated metrics code.
  • C#
    • Metrics can now be generated from the metrics.yaml files.

v31.4.1 (2020-07-20)

Full changelog

  • General
    • BUGFIX: fix int32 to ErrorType mapping. The InvalidOverflow had a value mismatch between glean-core and the bindings. This would only be a problem in unit tests. (#1063)
  • Android
    • Enable propagating options to the main product Activity when using the GleanDebugActivity.
    • BUGFIX: Fix the metrics ping collection for startup pings such as reason=upgrade to occur in the same thread/task as Glean initialize. Otherwise, it gets collected after the application lifetime metrics are cleared such as experiments that should be in the ping. (#1069)

v31.4.0 (2020-07-16)

Full changelog

  • General
    • Enable debugging features through environment variables. (#1058)

v31.3.0 (2020-07-10)

Full changelog

  • General
    • Remove locale from baseline ping. (1609968, #1016)
    • Persist X-Debug-ID header on store ping. (1605097, #1042)
    • BUGFIX: raise an error if Glean is initialized with an empty string as the application_id (#1043).
  • Python
    • BUGFIX: correctly set the app_build metric to the newly provided application_build_id initialization option (#1031).
    • The Python bindings now report networking errors in the glean.upload.ping_upload_failure metric (like all the other bindings) (#1039).
    • Python default upgraded to Python 3.8 (#995)
  • iOS
    • BUGFIX: Make LabeledMetric subscript public, so consuming applications can actually access it (#1027)

v31.2.3 (2020-06-29)

Full changelog

  • General
    • Move debug view tag management to the Rust core. (1640575, #998)
    • BUGFIX: Fix mismatch in events keys and values by using glean_parser version 1.23.0.

v31.2.2 (2020-06-26)

Full changelog

  • Android
    • BUGFIX: Compile dependencies with NDEBUG to avoid linking unavailable symbols. This fixes a crash due to a missing stderr symbol on older Android (#1020)

v31.2.1 (2020-06-25)

Full changelog

  • Python
    • BUGFIX: Core metrics are now present in every ping, even if submit is called before initialize has a chance to complete. (#1012)

v31.2.0 (2020-06-24)

Full changelog

  • General
    • Add rate limiting capabilities to the upload manager. (1543612, #974)
  • Android
    • BUGFIX: baseline pings with reason "dirty startup" are no longer sent if Glean did not fully initialize in the previous run (#996).
  • Python
    • Support for Python 3.5 was dropped (#987).
    • Python wheels are now shipped with Glean release builds, resulting in much smaller libraries (#1002)
    • The Python bindings now use locale.getdefaultlocale() rather than locale.getlocale() to determine the locale (#1004).
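The difference matters because locale.getlocale() only reflects what setlocale() has installed, while locale.getdefaultlocale() derives the user's locale from the environment (LC_ALL, LANG, ...):

```python
import locale

# Returns a (language_code, encoding) pair such as ("en_US", "UTF-8");
# either element may be None when the environment gives no hint.
language_code, encoding = locale.getdefaultlocale()
```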

v31.1.2 (2020-06-23)

Full changelog

  • General
    • BUGFIX: Correctly format the date and time in the Date header (#993).
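HTTP expects the Date header in IMF-fixdate form (RFC 7231). Python's stdlib produces that format directly, shown here purely to illustrate the target format (the Glean SDK itself does this in Rust):

```python
from email.utils import formatdate

# usegmt=True yields the RFC 7231 format HTTP expects for Date headers.
date_header = formatdate(timeval=1592913600, usegmt=True)
```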
  • Python
    • BUGFIX: Additional time is taken at shutdown to make sure pings are sent and telemetry is recorded. (1646173, #983)
    • BUGFIX: Glean will run on the main thread when running in a multiprocessing subprocess (#986).

v31.1.1 (2020-06-12)

Full changelog

  • Android
    • Drop the version requirement for lifecycle extensions back down. Raising the required version caused problems in A-C.

v31.1.0 (2020-06-11)

Full changelog

  • General:
    • The regex crate is no longer required, making the Glean binary smaller (#949)
    • Record upload failures into a new metric (#967)
    • Log FFI errors as actual errors (#935)
    • Limit the number of upload retries in all implementations (#953, #968)
  • Python
    • Additional safety guarantees for applications that use Python threading (#962)

v31.0.2 (2020-05-29)

Full changelog

  • Rust
    • Fix list of included files in published crates

v31.0.1 (2020-05-29)

Full changelog

  • Rust
    • Relax version requirement for flate2 for compatibility reasons

v31.0.0 (2020-05-28)

Full changelog

  • General:

    • The version of glean_parser has been upgraded to v1.22.0
      • A maximum of 10 extra_keys is now enforced for event metric types.
      • Breaking change: (Swift only) Combine all metrics and pings into a single generated file Metrics.swift.
        • For Swift users this requires to change the list of output files for the sdk_generator.sh script. It now only needs to include the single file Generated/Metrics.swift.
  • Python:

    • BUGFIX: lifetime: application metrics are no longer recorded as lifetime: user.
    • BUGFIX: glean-core is no longer crashing when calling uuid.set with invalid UUIDs.
    • Refactor the ping uploader to use the new upload mechanism and add gzip compression.
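Gzip-compressing the serialized ping before upload can be sketched with the stdlib (an illustrative request shape; the header names follow standard HTTP conventions):

```python
import gzip
import json

ping = {"ping_info": {"seq": 1}, "metrics": {}}  # illustrative payload
body = json.dumps(ping).encode("utf-8")

# Compress the body and advertise the encoding to the server.
compressed = gzip.compress(body)
headers = {"Content-Type": "application/json", "Content-Encoding": "gzip"}

round_tripped = json.loads(gzip.decompress(compressed))
```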
    • Most of the work in Glean.initialize happens on a worker thread and no longer blocks the main thread.
  • Rust:

    • Expose Datetime types to Rust consumers.

v30.1.0 (2020-05-22)

Full changelog

  • Android & iOS
    • Ping payloads are now compressed using gzip.
  • iOS
    • Glean.initialize is now a no-op if called from an embedded extension. This means that Glean will only run in the base application process in order to prevent extensions from behaving like separate applications with different client ids from the base application. Applications are responsible for ensuring that extension metrics are only collected within the base application.
  • Python
    • lifetime: application metrics are now cleared after the Glean-owned pings are sent, after the product starts.
    • Glean Python bindings now build in a native Windows environment.
    • BUGFIX: MemoryDistributionMetric now parses correctly in metrics.yaml files.
    • BUGFIX: Glean will no longer crash if run as part of another library's coverage testing.

v30.0.0 (2020-05-13)

Full changelog

  • General:
    • We completely replaced how the upload mechanism works. glean-core (the Rust part) now controls all upload and coordinates the platform side with its own internals. All language bindings implement ping uploading around a common API and protocol. There is no change for users of Glean; the language bindings for Android and iOS have already been adapted to the new mechanism.
    • Expose RecordedEvent and DistributionData types to Rust consumers (#876)
    • Log crate version at initialize (#873)
  • Android:
    • Refactor the ping uploader to use the new upload mechanism.
  • iOS:
    • Refactor the ping uploader to use the new upload mechanism.

v29.1.2 (2021-01-26)

Full changelog

This is an iOS release only, built with Xcode 11.7

Otherwise no functional changes.

  • iOS
    • Build with Xcode 11.7 (#1457)

v29.1.1 (2020-05-22)

Full changelog

  • Android
    • BUGFIX: Fix a race condition that leads to a ConcurrentModificationException. Bug 1635865

v29.1.0 (2020-05-11)

Full changelog

  • General:
    • The version of glean_parser has been upgraded to v1.20.4
      • BUGFIX: yamllint errors are now reported using the correct file name.
    • The minimum and maximum values of a timing distribution can now be controlled by the time_unit parameter. See bug 1630997 for more details.

v29.0.0 (2020-05-05)

Full changelog

  • General:
    • The version of glean_parser has been upgraded to v1.20.2 (#827):
      • Breaking change: glinter errors found during code generation will now return an error code.
      • glean_parser now produces a linter warning when user lifetime metrics are set to expire. See bug 1604854 for additional context.
  • Android:
    • PingType.submit() can now be called by Java consumers without passing null (#853).
  • Python:
    • BUGFIX: Fixed a race condition in the atexit handler that could result in the message "No database found" (#854).
    • The Glean FFI header is now parsed at build time rather than runtime. Relevant for packaging in PyInstaller, the wheel no longer includes glean.h and adds _glean_ffi.py (#852).
    • The minimum versions of many secondary dependencies have been lowered to make the Glean SDK compatible with more environments.
    • Dependencies that depend on the version of Python being used are now specified using the Declaring platform specific dependencies syntax in setuptools. This means that more recent versions of dependencies are likely to be installed on Python 3.6 and later, and unnecessary backport libraries won't be installed on more recent Python versions.
  • iOS:
    • Glean for iOS is now being built with Xcode 11.4.1 (#856)

v28.0.0 (2020-04-23)

Full changelog

  • General:
    • The baseline ping is now sent when the application goes to foreground, in addition to background and dirty-startup.
  • Python:
    • BUGFIX: The ping uploader will no longer display a traceback when the upload fails due to a failed DNS lookup, network outage, or related issues that prevent communication with the telemetry endpoint.
    • The dependency on inflection has been removed.
    • The Python bindings now use subprocess rather than multiprocessing to perform ping uploading in a separate process. This should be more compatible across all of the platforms Glean supports.

v27.1.0 (2020-04-09)

Full changelog

  • General:
    • BUGFIX: baseline pings sent at startup with the dirty_startup reason will now include application lifetime metrics (#810)
  • iOS:
    • Breaking change: Change Glean iOS to use the Application Support directory (#815). No migration code is included. This will reset collected data if integrated without migration. Please contact the Glean SDK team if this affects you.
  • Python
    • BUGFIX: Fixed a race condition between uploading pings and deleting the temporary directory on shutdown of the process.

v27.0.0 (2020-04-08)

Full changelog

  • General
    • Glean will now detect when the upload enabled flag changes outside of the application, for example due to a change in a config file. This means that if upload is disabled while the application wasn't running (e.g. between the runs of a Python command using the Glean SDK), the database is correctly cleared and a deletion request ping is sent. See #791.
    • The events ping now includes a reason code: startup, background or max_capacity.
  • iOS:
    • BUGFIX: A bug where the metrics ping is sent immediately at startup on the last day of the month has been fixed.
    • Glean for iOS is now being built with Xcode 11.4.0
    • The measure convenience function on timing distributions and time spans will now cancel the timing if the measured function throws, then rethrow the exception (#808)
    • Broken doc generation has been fixed (#805).
  • Kotlin
    • The measure convenience function on timing distributions and time spans will now cancel the timing if the measured function throws, then rethrow the exception (#808)
  • Python:
    • Glean will now wait at application exit for up to one second to let its worker thread complete.
    • Ping uploading now happens in a separate child process by default. This can be disabled with the allow_multiprocessing configuration option.
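
The measure behavior described above for iOS and Kotlin can be sketched as follows. This is a minimal illustration in Python, not Glean's actual implementation; the Timer class and its method names are hypothetical stand-ins for a timing-distribution metric.

```python
import time

class Timer:
    """A hypothetical stand-in for a timing-distribution metric."""
    def __init__(self):
        self.samples = []
        self._start = None

    def start(self):
        self._start = time.monotonic()

    def stop_and_accumulate(self):
        self.samples.append(time.monotonic() - self._start)
        self._start = None

    def cancel(self):
        self._start = None

    def measure(self, func):
        # Mirrors the convenience API: cancel the timing if the
        # measured function throws, then rethrow the exception.
        self.start()
        try:
            result = func()
        except Exception:
            self.cancel()
            raise
        self.stop_and_accumulate()
        return result

timer = Timer()
timer.measure(lambda: sum(range(10)))   # recorded normally
try:
    timer.measure(lambda: 1 / 0)        # cancelled, exception rethrown
except ZeroDivisionError:
    pass
assert len(timer.samples) == 1          # only the successful call recorded
```

The key point is that a throwing function leaves no partial timing behind, while the caller still sees the original exception.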

v26.0.0 (2020-03-27)

Full changelog

  • General:
    • The version of glean_parser has been updated to 1.19.0:
      • Breaking change: The regular expression used to validate labels is stricter and more correct.
      • Add more information about pings to markdown documentation:
        • State whether the ping includes client id;
        • Add list of data review links;
        • Add list of related bugs links.
      • glean_parser now makes it easier to write external translation functions for different language targets.
      • BUGFIX: glean_parser now works on 32-bit Windows.
  • Android:
    • gradlew clean will no longer remove the Miniconda installation in ~/.gradle/glean, so clean can be used without having to reinstall Miniconda afterward every time.
  • Python:
    • Breaking Change: The glean.util and glean.hardware modules, which were unintentionally public, have been made private.
    • Most Glean work and I/O is now done on its own worker thread. This brings the parallelism of the Python bindings in line with the other platforms.
    • The timing distribution, memory distribution, string list, labeled boolean and labeled string metric types are now supported in Python (#762, #763, #765, #766)

v25.1.0 (2020-02-26)

Full changelog

  • Python:
    • The Boolean, Datetime and Timespan metric types are now supported in Python (#731, #732, #737)
    • Make public, document and test the debugging features (#733)

v25.0.0 (2020-02-17)

Full changelog

  • General:
    • ping_type is no longer included in ping_info (#653); the pipeline takes the value from the submission URL.
    • The version of glean_parser has been upgraded to 1.18.2:
      • Breaking Change (Java API) Have the metrics names in Java match the names in Kotlin. See Bug 1588060.
      • The reasons a ping is sent are now included in the generated markdown documentation.
  • Android:
    • The Glean.initialize method runs mostly off the main thread (#672).
    • Labels in labeled metrics now have a correct, and slightly stricter, regular expression. See label format for more information.
  • iOS:
    • The baseline ping will now include reason codes that indicate why it was submitted. If an unclean shutdown is detected (e.g. due to force-close), this ping will be sent at startup with reason: dirty_startup.
    • Per Bug 1614785, the clearing of application lifetime metrics now occurs after the metrics ping is sent in order to preserve values meant to be included in the startup metrics ping.
    • initialize() now performs most of its work in a background thread.
  • Python:
    • When the pre-init task queue overruns, this is now recorded in the metric glean.error.preinit_tasks_overflow.
    • glinter warnings are printed to stderr when loading metrics.yaml and pings.yaml files.

v24.2.0 (2020-02-11)

Full changelog

  • General:
    • Add locale to client_info section.
    • Deprecation Warning Since locale is now in the client_info section, the one in the baseline ping (glean.baseline.locale) is redundant and will be removed by the end of the quarter.
    • Drop the Glean handle and move state into glean-core (#664)
    • If an experiment includes no extra fields, it will no longer include {"extra": null} in the JSON payload.
    • Support for ping reason codes was added.
    • The metrics ping will now include reason codes that indicate why it was submitted.
    • The version of glean_parser has been upgraded to 1.17.3
  • Android:
    • Collections performed before initialization (preinit tasks) are now dispatched off the main thread during initialization.
    • The baseline ping will now include reason codes that indicate why it was submitted. If an unclean shutdown is detected (e.g. due to force-close), this ping will be sent at startup with reason: dirty_startup.
  • iOS:
    • Collections performed before initialization (preinit tasks) are now dispatched off the main thread and not awaited during initialization.
    • Added recording of glean.error.preinit_tasks_overflow to report when the preinit task queue overruns, leading to data loss. See bug 1609734

v24.1.0 (2020-01-16)

Full changelog

  • General:
    • Stopping a non-started measurement in a timing distribution will now be reported as an invalid_state error.
  • Android:
    • A new metric glean.error.preinit_tasks_overflow was added to report when the preinit task queue overruns, leading to data loss. See bug 1609482

v24.0.0 (2020-01-14)

Full changelog

  • General:
    • Breaking Change An enableUpload parameter has been added to the initialize() function. This removes the requirement to call setUploadEnabled() prior to calling the initialize() function.
  • Android:
    • The metrics ping scheduler will now only send metrics pings while the application is running. The application will no longer "wake up" at 4am using the Work Manager.
    • The code for migrating data from Glean SDK before version 19 was removed.
    • When using the GleanTestLocalServer rule in instrumented tests, pings are immediately flushed by the WorkManager and will reach the test endpoint as soon as possible.
  • Python:
    • The Python bindings now support Python 3.5 - 3.7.
    • The Python bindings are now distributed as a wheel on Linux, macOS and Windows.

v23.0.1 (2020-01-08)

Full changelog

  • Android:
    • BUGFIX: The Glean Gradle plugin will now work if an app or library doesn't have a metrics.yaml or pings.yaml file.
  • iOS:
    • The released iOS binaries are now built with Xcode 11.3.

v23.0.0 (2020-01-07)

Full changelog

  • Python bindings:
    • Support for events and UUID metrics was added.
  • Android:
    • The Glean Gradle Plugin correctly triggers docs and API updates when registry files change, without requiring them to be deleted.
    • parseISOTimeString has been made 4x faster. This had an impact on Glean migration and initialization.
    • Metrics with lifetime: application are now cleared when the application is started, after the startup Glean SDK pings are generated.
  • All platforms:
    • The public method PingType.send() (on all platforms) has been deprecated and renamed to PingType.submit().
    • Renamed the deletion_request ping to deletion-request following a glean_parser update.

v22.1.0 (2019-12-17)

Full changelog

  • Add InvalidOverflow error to TimingDistributions (#583)

v22.0.0 (2019-12-05)

Full changelog

  • Add option to defer ping lifetime metric persistence (#530)
  • Add a crate for the nice control API (#542)
  • Pending deletion_request pings are resent on start (#545)

v21.3.0 (2019-12-03)

Full changelog

  • Timers are reset when disabled. That avoids recording timespans across disabled/enabled toggling (#495).
  • Add a new flag to pings: send_if_empty (#528)
  • Upgrade glean_parser to v1.12.0
  • Implement the deletion request ping in Glean (#526)

v21.2.0 (2019-11-21)

Full changelog

  • All platforms

    • The experiments API is no longer ignored before the Glean SDK is initialized. Calls are recorded and played back once the Glean SDK is initialized.

    • String list items were being truncated to 20, rather than 50, bytes when using .set() (rather than .add()). This has been corrected, but it may result in changes in the sent data if using string list items longer than 20 bytes.
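
The distinction between a 20- and a 50-byte limit matters for multi-byte UTF-8 strings, where bytes and characters differ. A minimal sketch of byte-based truncation (the helper name is ours, not Glean's):

```python
def truncate_to_bytes(value: str, max_bytes: int) -> str:
    """Truncate a string to at most max_bytes of UTF-8,
    without splitting a multi-byte character."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # errors="ignore" drops a trailing partial character, if any.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

# "é" is two bytes in UTF-8, so 30 copies are 60 bytes.
item = "é" * 30
assert len(truncate_to_bytes(item, 20).encode("utf-8")) <= 20
assert truncate_to_bytes(item, 50) == "é" * 25
```

A 20-byte limit keeps only 10 of these characters, while the corrected 50-byte limit keeps 25, which is why previously-truncated data may change after this fix.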

v21.1.1 (2019-11-20)

Full changelog

  • Android:

    • Use the LifecycleEventObserver interface, rather than the DefaultLifecycleObserver interface, since the latter isn't compatible with old SDK targets.

v21.1.0 (2019-11-20)

Full changelog

  • Android:

    • Two new metrics were added to investigate sending of metrics and baseline pings. See bug 1597980 for more information.

    • Glean's two lifecycle observers were refactored to avoid the use of reflection.

  • All platforms:

    • Timespans will no longer record an error when stopped after upload has been disabled.

v21.0.0 (2019-11-18)

Full changelog

  • Android:

    • The GleanTimerId can now be accessed in Java and is no longer a typealias.

    • Fixed a bug where the metrics ping was getting scheduled twice on startup.

  • All platforms

    • Bumped glean_parser to version 1.11.0.

v20.2.0 (2019-11-11)

Full changelog

  • In earlier 20.x.x releases, glean-ffi was built against the wrong version of glean-core.

v20.1.0 (2019-11-11)

Full changelog

  • The version of Glean is included in the Glean Gradle plugin.

  • When constructing a ping, events are now sorted by their timestamp. In practice, it rarely happens that event timestamps are unsorted to begin with, but this guards against a potential race condition and incorrect usage of the lower-level API.
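
The sort described above can be sketched in a few lines. The event records here are hypothetical simplifications; Glean's actual payload shape differs.

```python
# Hypothetical event records with millisecond-relative timestamps.
events = [
    {"name": "click", "timestamp": 120},
    {"name": "load", "timestamp": 30},
    {"name": "scroll", "timestamp": 75},
]

# Sorting at ping-construction time guards against out-of-order
# recording via the lower-level API or a race condition.
events.sort(key=lambda e: e["timestamp"])
assert [e["name"] for e in events] == ["load", "scroll", "click"]
```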

v20.0.0 (2019-11-11)

Full changelog

  • Glean users should now use a Gradle plugin rather than a Gradle script. (#421) See integrating with the build system docs for more information.

  • In Kotlin, metrics that can record errors now have a new testing method, testGetNumRecordedErrors. (#401)

v19.1.0 (2019-10-29)

Full changelog

  • Fixed a crash calling start on a timing distribution metric before Glean is initialized. Timings are always measured, but only recorded when upload is enabled (#400)
  • BUGFIX: When the Debug Activity is used to log pings, each ping is now logged only once (#407)
  • New invalid state error, used in timespan recording (#230)
  • Add an Android crash instrumentation walk-through (#399)
  • Fix crashing bug by avoiding assert-printing in LMDB (#422)
  • Upgrade dependencies, including rkv (#416)

v19.0.0 (2019-10-22)

Full changelog

First stable release of Glean in Rust (aka glean-core). This is a major milestone in using a cross-platform implementation of Glean on the Android platform.

  • Fix round-tripping of timezone offsets in dates (#392)
  • Handle dynamic labels in coroutine tasks (#394)

v0.0.1-TESTING6 (2019-10-18)

Full changelog

  • Ignore dynamically stored labels if Glean is not initialized (#374)
  • Make sure ProGuard doesn't remove Glean classes from the app (#380)
  • Keep track of pings in all modes (#378)
  • Add jnaTest dependencies to the forUnitTest JAR (#382)

v0.0.1-TESTING5 (2019-10-10)

Full changelog

  • Upgrade to NDK r20 (#365)

v0.0.1-TESTING4 (2019-10-09)

Full changelog

  • Take DST into account when converting a calendar into its items (#359)
  • Include a macOS library in the forUnitTests builds (#358)
  • Keep track of all registered pings in test mode (#363)

v0.0.1-TESTING3 (2019-10-08)

Full changelog

  • Allow configuration of Glean through the GleanTestRule
  • Bump glean_parser version to 1.9.2

v0.0.1-TESTING2 (2019-10-07)

Full changelog

  • Include a Windows library in the forUnitTests builds

v0.0.1-TESTING1 (2019-10-02)

Full changelog

General

First testing release.

Unreleased changes

Full changelog

v5.0.2 (2024-05-23)

Full changelog

  • #1935: BREAKING CHANGE: Remove migrateFromLegacyStorage capability and configuration option. If your project currently sets the migrateFromLegacyStorage value, this will no longer work.
  • #1942: Bumped glean_parser version to 14.1.2.

v5.0.1 (2024-04-30)

Full changelog

  • #1923: Bumped glean_parser version to 14.0.1.
  • #1921: BUGFIX: Fix issue causing glean.client.annotation.experimentation_id metric to not get added in certain pings.
  • #1919: Add glean.page_id to Glean automatic events.

v5.0.0 (2024-03-25)

Full changelog

This is the official release based on the v5.0.0-pre.0 release.

v5.0.0-pre.0 (2024-03-21)

Full changelog

  • #1895: Improve automatic click events for nested elements.
  • #1899: Bug 1886113 - Add wall-clock timestamps to all events.
  • #1900: Bug 1886443 - automatic click events in web sample project.

v4.1.0-pre.0 (2024-03-05)

Full changelog

  • #1866: Added a new uploader that falls back to fetch if navigator.sendBeacon fails.
  • #1876: BREAKING CHANGE: navigator.sendBeacon with fallback to fetch (see #1866) is now the default uploader. This can be changed manually.
  • #1850: Automatically record basic session information (session_id & session_count) for web properties.

v4.0.0 (2024-01-24)

Full changelog

  • This is the official release based on the v4.0.0-pre.x releases.

v4.0.0-pre.3 (2023-12-22)

Full changelog

  • #1848: Support for automatically collecting element click events (first version)
  • #1849: Truncate event extra strings to 500 bytes. This also updates other string-based metrics to truncate based on max bytes rather than a set number of characters.

v4.0.0-pre.2 (2023-12-06)

Full changelog

  • #1835: Added support for automatic page load instrumentation.
  • #1846: Add logging messages when using the debugging APIs from the browser console.

v4.0.0-pre.1 (2023-12-01)

Full changelog

  • #1834: Added support for navigator.sendBeacon. This is not turned on by default and needs to be enabled manually.

v4.0.0-pre.0 (2023-11-27)

Full changelog

  • #1808: BREAKING CHANGE: Make glean.js fully synchronous.
  • #1835: Automatic instrumentation of page load events for simple web properties.

v3.0.0 (2023-11-16)

Full changelog

This is the official release based on v3.0.0-pre.1.

v3.0.0-pre.1 (2023-11-15)

Full changelog

  • #1814: BREAKING CHANGE: Temporarily drop support for web extensions. This platform will be added again once we complete the Glean.js platform refactoring.

v3.0.0-pre.0 (2023-11-10)

Full changelog

  • #1810: BREAKING CHANGE: Drop support for Qt.
  • #1811: Update glean_parser to v10.0.3.

v2.0.5 (2023-10-16)

Full changelog

  • #1788: Fix window is undefined error when setting up browser debugging.

v2.0.4 (2023-10-10)

Full changelog

  • #1772: Fix bug where window.Glean functions were getting set on non-browser properties.
  • #1784: Store window.Glean debugging values in sessionStorage. This will set debug options on page init while the current session is still active.

v2.0.3 (2023-09-27)

Full changelog

  • #1770: Allow debugging in browser console via window.Glean.

v2.0.2 (2023-09-14)

Full changelog

  • #1768: Add support for GLEAN_PYTHON and GLEAN_PIP environment variables.
  • #1755: Add sync check to set function for the URL metric.
  • #1766: Update default maxEvents count to 1. This means an events ping will be sent after each recorded event unless the maxEvents count is explicitly set to a larger number.

v2.0.1 (2023-08-11)

Full changelog

  • #1751: Add a migration flag to initialize. If not explicitly set in the config object, the migration from IndexedDB to LocalStorage will not occur. The only projects that should ever set this flag are those that have used Glean.js in production with a version <v2.0.0 and have upgraded.

v2.0.0 (2023-08-03)

Full changelog

Important

This version of Glean.js migrates the browser implementation from using IndexedDB to using LocalStorage. The storage change means that all Glean.js actions run synchronously.

  • #1748: Update glean_parser version to the latest.
  • #1733: Add SSR support for Glean.js.
  • #1728: Migrate client_id and first_run_date.
  • #1695: Update Glean.js web to use LocalStorage.

v1.4.0 (2023-05-10)

Full changelog

  • #1671: Allow building on Windows machines.
  • #1674: Upgrade glean_parser version to 7.2.1.

v1.3.0 (2022-10-18)

Full changelog

  • #1544: Upgrade glean_parser version to 6.2.1.
  • #1516: Implement the Custom Distribution metric type.
  • #1514: Implement the Memory Distribution metric type.
  • #1475: Implement the Timing Distribution metric type.

v1.2.0 (2022-09-21)

Full changelog

  • #1513: Bump the URL metric character limit to 8k to support longer URLs. URLs that are too long are now truncated to URL_MAX_LENGTH and still recorded, along with an Overflow error.
  • #1500: BUGFIX: Update how we invoke CLI python script to fix npm run glean behavior on Windows.
  • #1457: Update ts-node to 10.8.0 to resolve ESM issues when running tests inside of webext sample project.
  • #1452: Remove glean.restarted trailing events from events list.
  • #1450: Update ts-node to 10.8.0 to resolve ESM issues when running tests.
  • #1449: BUGFIX: Add missing quotes to prepare-release script to fix issues with version numbers in Qt sample README & circle ci config.
  • #1449: Update Qt sample project docs to include note about problems with different version numbers of Qt commands.

v1.1.0 (2022-07-18)

Full changelog

  • #1318: Expose ErrorType through its own entry point.
  • #1271: BUGFIX: Fix pings validation function when scanning pings database on initialize.
    • This bug was preventing pings that contained custom headers from being successfully validated and enqueued on initialize.
  • #1335: BUGFIX: Fix uploading gzip-compressed pings in Node.
  • #1415: BUGFIX: Publish the TS type information for the web platform.

v1.0.0 (2022-03-17)

Full changelog

  • #1154: BUGFIX: Implemented initialize and set_upload_enabled reasons for deletion-request ping.
  • #1233: Add optional buildDate argument to initialize configuration. The build date can be generated by glean_parser.
  • #1233: Update glean_parser to version 5.1.0.
  • #1217: Record an InvalidType error when incorrectly typed values are passed to metric recording functions.
  • #1267: Implement the 'events' ping.

v0.32.0 (2022-03-01)

Full changelog

  • #1220: Refactor virtual environment behavior to support virtual environments that aren't in the project root.
  • #1130: BUGFIX: Guarantee event timestamps cannot be negative numbers.
    • Negative timestamps were observed on platforms that do not provide the performance.now API (namely QML), where we fall back to the Date.now API.
    • If event timestamps are negative, pings are rejected by the pipeline.
  • #1132: Retry ping request on network error with keepalive: false. This is sometimes an issue on Chrome browsers below v81.
  • #1170: Update glean_parser to version 5.0.0.
  • #1178: Enable running the glean command offline.
    • When offline Glean will not attempt to install glean_parser.
  • #1178: Enable running the glean command with as many or as few arguments as desired.
    • Previously the command could only be run with exactly three arguments, even though all glean_parser commands would have been valid commands for the glean CLI.
  • #1210: Show comprehensive error message when missing storage permissions for Glean on web extensions.
  • #1223: Add --glean-parser-version command to CLI to allow users to retrieve the glean_parser version without installing glean_parser.
  • #1228: BUGFIX: Apply debug features before sending pings at initialize.

v0.31.0 (2022-01-25)

Full changelog

  • #1065: Delete minimal amount of data when invalid data is found while collecting ping.
    • The previous behavior was to delete the whole ping when invalid data was found in the database; the new behavior deletes only the actually invalid data and leaves the rest of the ping intact.
  • #1065: Only import metric types into the library when they are used either by the user or Glean itself.
    • Previously the code required to deserialize metric data from the database was always imported by the library even if the metric type was never used by the client. This effort will decrease the size of the Glean.js bundles that don't import all the metric types.
  • #1046: Remove legacy X-Client-Type X-Client-Version from Glean pings.
  • #1071: BREAKING CHANGE: Move the testResetGlean API off the Glean singleton and into its own entry point, @mozilla/glean/testing.
    • In order to use this API one must import it through import { testResetGlean } from "@mozilla/glean/testing" instead of using it from the Glean singleton directly.
    • This lowers the size of the Glean library, because testing functionality is not imported unless in a testing environment.
    • This change does not apply to QML. In this environment the API remains the same.

v0.30.0 (2022-01-10)

Full changelog

  • #1045: BUGFIX: Provide informative error message when unable to access database in QML.
  • #1077: BUGFIX: Do not clear lifetime metrics before submitting deletion-request ping on initialize.
    • This bug caused malformed deletion-request pings when Glean is initialized with uploadEnabled=false.

v0.29.0 (2022-01-04)

Full changelog

  • #1006: Implement the rate metric.
  • #1066: BUGFIX: Guarantee reported timestamps are never floating point numbers.
    • Floating point timestamps are rejected by the pipeline.

v0.28.0 (2021-12-08)

Full changelog

  • #984: BUGFIX: Return correct upload result in case an error happens while building a ping request.
  • #988: BUGFIX: Enforce rate limitation at upload time, not at ping submission time.
    • Note: This change required a big refactoring of the internal uploading logic.
  • #994: Automatically restart ping upload once the rate limit window is ended.
    • Prior to this change, ping uploading would only be resumed once the .submit() API was called again, even if Glean was no longer throttled.
    • Note: this change does not apply to QML. We used the setTimeout/clearTimeout APIs in this feature and those are not available on the QML platform. Follow Bug 1743140 for updates.
  • #1015: BUGFIX: Make attempting to call the setUploadEnabled API before initializing Glean a no-op.
  • #1016: BUGFIX: Make shutdown a no-op in case Glean is not initialized.

v0.27.0 (2021-11-22)

Full changelog

  • #981: Update rate limits for ping submission from 15 pings/minute to 40 pings/minute.
  • #967: BREAKING CHANGE: Remove debug option from Glean configuration option.
    • The Glean.setDebugViewTag, Glean.setSourceTags and Glean.setLogPings should be used instead. Note that these APIs can safely be called prior to initialization.
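
The throttling rule above (at most 40 pings per minute) can be sketched as a fixed-window limiter. This is an illustration of the concept only; the class name, defaults, and clock injection are ours, not Glean.js internals.

```python
import time

class PingThrottler:
    """Hypothetical fixed-window rate limiter: at most
    max_pings uploads per window_seconds (e.g. 40 per 60s)."""
    def __init__(self, max_pings=40, window_seconds=60, clock=time.monotonic):
        self.max_pings = max_pings
        self.window_seconds = window_seconds
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def try_upload(self) -> bool:
        now = self.clock()
        if now - self.window_start >= self.window_seconds:
            # A new window has started: reset the counter.
            self.window_start = now
            self.count = 0
        if self.count >= self.max_pings:
            return False  # throttled until the window ends
        self.count += 1
        return True

# Drive the limiter with a fake clock to make the behavior visible.
fake_time = [0.0]
throttler = PingThrottler(clock=lambda: fake_time[0])
assert all(throttler.try_upload() for _ in range(40))
assert not throttler.try_upload()   # 41st ping in the window is throttled
fake_time[0] = 61.0                 # the window has elapsed
assert throttler.try_upload()       # uploading resumes
```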

v0.26.0 (2021-11-19)

Full changelog

  • #965: Attempt to infer the Python virtualenv folder from the environment before falling back to the default .venv.
    • Users may provide a folder name through the VIRTUAL_ENV environment variable.
    • If the user is inside an active virtualenv the VIRTUAL_ENV environment variable is already set by Python. See: https://docs.python.org/3/library/venv.html.
  • #968: Add runtime arguments type checking to Glean.setUploadEnabled API.
  • #970: BUGFIX: Guarantee uploading is immediately resumed if the uploader has been stopped due to any of the uploading limits being hit.
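
The virtualenv-inference rule from #965 amounts to a simple environment lookup with a fallback. A sketch (the helper name is ours):

```python
import os

def infer_virtualenv_folder(environ=os.environ, default=".venv"):
    """Prefer the VIRTUAL_ENV environment variable (set automatically
    by Python inside an active virtualenv, or provided by the user),
    falling back to the default folder name."""
    return environ.get("VIRTUAL_ENV", default)

assert infer_virtualenv_folder({}) == ".venv"
assert infer_virtualenv_folder({"VIRTUAL_ENV": "/tmp/my-env"}) == "/tmp/my-env"
```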

v0.25.0 (2021-11-15)

Full changelog

  • #924: Only allow HTTPS server endpoints outside of testing mode.
    • In testing mode HTTP may be used. No other protocols are allowed.
  • #951: Expose Uploader, UploadResult and UploadResultStatus.
    • These are necessary for creating custom uploaders. Especially from TypeScript.

v0.24.0 (2021-11-04)

Full changelog

  • #856: Expose the @mozilla/glean/web entry point for using Glean.js in websites.
  • #856: Implement the PlatformInfo module for the web platform.
    • Out of os, os_version, architecture and locale, on the web platform, we can only retrieve os and locale information. The other information will default to the known value Unknown for all pings coming from this platform.
  • #908: BUGFIX: Guarantee internal uploadEnabled state always has a value.
    • When uploadEnabled was set to false and then Glean was restarted with it still false, the internal uploadEnabled state was not being set. That should not cause particularly harmful behavior, since undefined is still a "falsy" value. However, this would create a stream of loud and annoying log messages.
  • #898: Implement the Storage module for the web platform.

v0.23.0 (2021-10-12)

Full changelog

  • #755: Only allow calling of test* functions in "test mode".
    • Glean is put in "test mode" once the Glean.testResetGlean API is called.
  • #811: Apply various fixes to the Qt entry point file.
    • Expose ErrorType. This is only useful for testing purposes;
    • Fix version of QtQuick.LocalStorage plugin;
    • Fix the way the lib is accessed from inside the shutdown method. Prior to this fix, it was not possible to use the shutdown method;
    • Expose the Glean.testResetGlean API.
  • #822: Fix API reference docs build step.
  • #825: Accept architecture and osVersion as initialization parameters in Qt. In Qt these values are not easily available from the environment.

v0.22.0 (2021-10-06)

Full changelog

  • #796: Support setting the app_channel metric.
  • #799: Make sure Glean does not do anything else in case initialization fails.
    • This may happen if there is an error creating the databases. This is mostly an issue on Qt/QML, where we use a SQLite database that can throw errors on initialization.
  • #799: Provide stack traces when logging errors.

v0.21.1 (2021-09-30)

Full changelog

  • #780: Fix the publishing step for releases. The Qt-specific build should now publish correctly.

v0.21.0 (2021-09-30)

Full changelog

  • #754: Change the ECMAScript target from 2015 to 2016 when building for Qt.

  • #779: Add a number of workarounds for the Qt JavaScript engine.

  • #775: Disallow calling test only methods outside of test mode.

    • NOTE: Test mode is set once the API Glean.testResetGlean is called.

v0.20.0 (2021-09-17)

Full changelog

  • #696: Expose Node.js entry point @mozilla/glean/node.
  • #695: Implement PlatformInfo module for the Node.js platform.
  • #695: Implement Uploader module for the Node.js platform.

v0.19.0 (2021-09-03)

Full changelog

  • #526: Implement mechanism to sort events reliably throughout restarts.
    • A new event (glean.restarted) will be included in the events payload of pings, in case there was a restart in the middle of the ping measurement window.
  • #534: Expose Uploader base class through @mozilla/glean/<platform>/uploader entry point.
  • #580: Limit size of pings database to 250 pings or 10MB.
  • #580: BUGFIX: Pending pings at startup are uploaded from oldest to newest.
  • #607: Record an error when incoherent timestamps are calculated for events after a restart.
  • #630: Accept booleans and numbers as event extras.
  • #647: Implement the Text metric type.
  • #658: Implement rate limiting for ping upload.
    • Only up to 15 ping submissions every 60 seconds are now allowed.
  • #658: BUGFIX: Unblock ping uploading jobs after the maximum number of upload failures is hit for a given uploading window.
  • #661: Include unminified version of library on Qt/QML builds.
  • #681: BUGFIX: Fix error in scanning events database upon initialization on Qt/QML.
    • This bug prevents the changes introduced in #526 from working properly in Qt/QML.
  • #692: BUGFIX: Ensure events database is initialized at a time Glean is already able to record metrics.
    • This bug also prevents the changes introduced in #526 from working properly in all platforms.

v0.18.1 (2021-07-22)

Full changelog

  • #552: BUGFIX: Do not clear deletion-request ping from upload queue when disabling upload.

v0.18.0 (2021-07-20)

Full changelog

  • #542: Implement shutdown API.

v0.17.0 (2021-07-16)

Full changelog

  • #529: Implement the URL metric type.

  • #526: Implement new events sorting logic, which allows for reliable sorting of events throughout restarts.

v0.16.0 (2021-07-06)

Full changelog

  • #346: Provide default HTTP client for Qt/QML platform.
  • #399: Check whether there is ping data before attempting to delete it.
    • This change lowers the amount of log messages related to attempting to delete nonexistent data.
  • #411: Tag all messages logged by Glean with the component they are coming from.
  • #415, #430: Gzip ping payloads before upload.
    • This changes the signature of Uploader.post to accept string | Uint8Array for the body parameter, instead of only string.
  • #431: BUGFIX: Record the timestamp for events before dispatching to the internal task queue.
  • #462: Implement persistent storage for Qt/QML platform.
  • #466: Expose ErrorType enum, for using with the testGetNumRecordedErrors API.
  • #497: Implement limit of 1MB for ping request payload. Limit is calculated after gzip compression.
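
Because the 1MB limit from #497 is calculated after gzip compression, a payload much larger than 1MB of raw JSON can still pass if it compresses well. A sketch of the size check (the function and constant names are ours, not Glean's):

```python
import gzip

PING_PAYLOAD_LIMIT = 1024 * 1024  # 1MB, applied after compression

def payload_within_limit(payload: str) -> bool:
    """Check the gzipped size of a ping payload against the limit."""
    compressed = gzip.compress(payload.encode("utf-8"))
    return len(compressed) <= PING_PAYLOAD_LIMIT

# Highly repetitive JSON compresses far below its raw size.
big_but_compressible = '{"events": [' + '{"name": "click"},' * 200_000 + "]}"
assert len(big_but_compressible) > PING_PAYLOAD_LIMIT  # > 1MB raw
assert payload_within_limit(big_but_compressible)      # well under 1MB gzipped
```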

v0.15.0 (2021-06-03)

Full changelog

  • #389: BUGFIX: Make sure to submit a deletion-request ping before clearing data when toggling upload.
  • #375: Release Glean.js for Qt as a QML module.

v0.14.1 (2021-05-21)

Full changelog

  • #342: BUGFIX: Fix timespan payload representation to match exactly the payload expected according to the Glean schema.
  • #343: BUGFIX: Report the correct failure exit code when the Glean command line tool fails.

v0.14.0 (2021-05-19)

Full changelog

  • #313: Send Glean.js version and platform information on X-Telemetry-Agent header instead of User-Agent header.

v0.13.0 (2021-05-18)

Full changelog

  • #313: Implement error recording mechanism and error checking testing API.
  • #319: BUGFIX: Do not allow recording floats with the quantity and counter metric types.

v0.12.0 (2021-05-11)

Full changelog

  • #279: BUGFIX: Ensure only empty pings trigger logging of "empty ping" messages.
  • #288: Support collecting PlatformInfo from Qt applications. Only OS name and locale are supported.
  • #281: Add the QuantityMetricType.
  • #303: Implement setRawNanos API for the TimespanMetricType.

v0.11.0 (2021-05-03)

Full changelog

  • #260: Set minimum node (>= 12.0.0) and npm (>= 7.0.0) versions.
  • #202: Add a testing API for the ping type.
  • #253:
    • Implement the timespan metric type.
    • BUGFIX: Report event timestamps in milliseconds.
  • #261: Show a spinner while setting up the Python virtual environment.
  • #273: BUGFIX: Expose the missing LabeledMetricType and TimespanMetricType in Qt.

v0.10.2 (2021-04-26)

Full changelog

  • #256: BUGFIX: Add the missing js extension to the dispatcher.

v0.10.1 (2021-04-26)

Full changelog

  • #254: BUGFIX: Allow the usage of the Glean specific metrics API before Glean is initialized.

v0.10.0 (2021-04-20)

Full changelog

  • #228: Provide a Qt build with every new release.
  • #227: BUGFIX: Fix a bug that prevented using labeled_string and labeled_boolean.
  • #226: BUGFIX: Fix Qt build configuration to target ES5.

v0.9.2 (2021-04-19)

Full changelog

  • #220: Update glean_parser to version 3.1.1.

v0.9.1 (2021-04-19)

Full changelog

  • #219: BUGFIX: Fix path to ping entry point in package.json.

v0.9.0 (2021-04-19)

Full changelog

  • #201: BUGFIX: Do not let the platform be changed after Glean is initialized.
  • #215: Update glean_parser to version 3.1.0.
  • #214: Improve error reporting of the Glean command.

v0.8.1 (2021-04-14)

Full changelog

  • #206: BUGFIX: Fix ping URL path.
    • The application ID was being reported as undefined.

v0.8.0 (2021-04-13)

Full changelog

  • #173: Drop Node.js support from webext entry points.
  • #155: Allow defining custom uploaders in the configuration.
  • #184: Correctly report appBuild and appDisplayVersion if provided by the user.
  • #198, #192, #184, #180, #174, #165: BUGFIX: Remove all circular dependencies.

v0.7.0 (2021-03-26)

Full changelog

  • #143: Provide a way to initialize and reset Glean.js in tests.

v0.6.1 (2021-03-22)

Full changelog

  • #130: BUGFIX: Fix destination path of the CommonJS build's package.json.

v0.6.0 (2021-03-22)

Full changelog

  • #123: BUGFIX: Fix support for ES6 environments.
    • Include .js extensions in all local import statements.
      • The ES6 module resolution algorithm does not currently support automatic resolution of file extensions and cannot import directories that have an index file. The extension and the name of the file being imported must always be specified. See: https://nodejs.org/api/esm.html#esm_customizing_esm_specifier_resolution_algorithm
    • Add a type: module declaration to the main package.json.
      • Without this statement, ES6 support is disabled. See: https://nodejs.org/docs/latest-v13.x/api/esm.html#esm_enabling
      • To keep support for CommonJS, our CommonJS build includes a package.json that overrides the main package.json's type: module with type: commonjs.
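As a sketch, the arrangement described above boils down to two small package.json files (the nested path is illustrative, not necessarily the one used by the actual build):

```jsonc
// package.json (package root): opt the whole package into ES modules.
{
  "type": "module"
}

// dist/commonjs/package.json (hypothetical path, shipped inside the
// CommonJS build): overrides the root declaration for all files below it.
{
  "type": "commonjs"
}
```

Node.js applies the nearest package.json when deciding how to interpret a .js file, so the nested file flips the CommonJS build back to CommonJS semantics without affecting the rest of the package.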

v0.5.0 (2021-03-18)

Full changelog

  • #96: Provide a ping encryption plugin.
    • This plugin listens to the afterPingCollection event. It receives the collected payload of a ping and returns an encrypted version of it using a JWK provided upon instantiation.
  • #95: Add a plugins property to the configuration options and create an event abstraction for triggering internal Glean events.
    • The only internal event triggered at this point is the afterPingCollection event, which is triggered after ping collection and logging, and before ping storing.
    • Plugins are built to listen to a specific Glean event. Each plugin must define an action, which is executed every time the event they are listening to is triggered.
  • #101: BUGFIX: Only validate Debug View Tag and Source Tags when they are present.
  • #102: BUGFIX: Include a Glean User-Agent header in all pings.
  • #97: Add support for labeled metric types (string, boolean and counter).
  • #105: Introduce and publish the glean command for using the glean-parser in a virtual environment.

v0.4.0 (2021-03-10)

Full changelog

  • #92: Remove web-ext-types from peerDependencies list.
  • #98: Add external APIs for setting the Debug View Tag and Source Tags.
  • #99: BUGFIX: Add a default ping value in the testing APIs.

v0.3.0 (2021-02-24)

Full changelog

  • #90: Provide exports for CommonJS and browser environments.
  • #90: BUGFIX: Accept lifetimes as strings when instantiating metric types.
  • #90: BUGFIX: Fix type declaration paths.
  • #90: BUGFIX: Make web-ext-types a peer dependency.

v0.2.0 (2021-02-23)

Full changelog

  • #85: Include type declarations in @mozilla/glean webext package bundle.

v0.1.1 (2021-02-17)

Full changelog

  • #77: Include README.md file in @mozilla/glean package bundle.

v0.1.0 (2021-02-17)

Full changelog

  • #73: Add this changelog file.
  • #42: Implement the deletion-request ping.
  • #41: Implement the logPings debug tool.
    • When logPings is enabled, pings are logged upon collection.
  • #40: Use the dispatcher in all Glean external API functions. Namely:
    • Metric recording functions;
    • Ping submission;
    • initialize and setUploadEnabled.
  • #36: Implement the event metric type.
  • #31: Implement a task Dispatcher to help in executing Promises in a deterministic order.
  • #26: Implement the setUploadEnabled API.
  • #25: Implement an adapter that leverages browser APIs to upload pings.
  • #24: Implement a ping upload manager.
  • #23: Implement the initialize API and glean internal metrics.
  • #22: Implement the PingType structure and a ping maker.
  • #20: Implement the datetime metric type.
  • #17: Implement the UUID metric type.
  • #14: Implement the counter metric type.
  • #13: Implement the string metric type.
  • #11: Implement the boolean metric type.
  • #9: Implement a metrics database module.
  • #8: Implement a web extension version of the underlying storage module.
  • #6: Implement an abstract underlying storage module.

This Week in Glean (TWiG)

“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.

Blog posts

Contribution Guidelines

There are two important questions to answer before adding new content to this book:

  • Where to include this content?
  • In which format to present it?

This guide aims to provide context for answering both questions.

Table of contents

Where to add new content?

This book is divided into five sections. Each section contains pages of a specific type, and new content should fit in one of these sections. The following is an explanation of what kind of content fits in each section.

Overview

Is the content you want to add an essay or high level explanation on some aspect of Glean?

The overview section is the place to provide important higher-level context for users of Glean. This section may include essays about Glean’s views, principles, data practices, etc. It also contains primers on information such as what the Glean SDKs are.

User Guides

Is the content you want to add a general-purpose guide on an aspect or feature of Glean?

This section is the place for Glean SDK user guides. Pages under this section contain prose format guides on how to include Glean in a project, how to add metrics, how to create pings, and so on.

Pages in this section may link to the API Reference pages, but should not themselves be API references.

Guides can be quite long, thus we should favor having one page per SDK instead of using tabs.

API Reference

Is the content you want to add a developer reference on a specific Glean API?

This section of the book contains reference pages for all of Glean’s user-facing APIs. Its pages follow a strict structure.

Each API description contains:

  • A title with the name of the API.
    • It’s important to use titles, because they automatically generate links to that API.
  • A brief description of what the API does.
  • Tabs with examples for using that API in each SDK.
    • Even if a certain SDK does not contain a given API, the tab will be included in the tabs bar in the disabled state.

The API Reference pages should not contain any guides in prose format; those should be linked from the User Guides section when convenient.

SDK Specific Information

Is the content you want to add an SDK-specific guide on a Glean feature?

Different SDKs may require dedicated pages; this section contains those pages. Each SDK has a top-level section under this section, and specific pages live under these titles.

Appendix

Is the content you want to add support content for the rest of this book?

The appendix contains support information related to the Glean SDKs or the content of this book.

In which format to present content?

General guidelines

Each page of the book should be written as if it were the first page a user ever visits. There should be links to other pages of the book wherever context is missing from the current page. This is important because documentation is first and foremost a reference book, a manual. It should not be expected to be read in order.

Prefer using headers whenever a new topic is introduced

mdbook (the tool used to build this book) turns all headers into links, which is useful for referring to specific documentation sections.
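For instance, a header can be linked to directly via the id mdbook generates for it, which is the standard lowercase-and-hyphenate conversion of the header text (the page path below is illustrative):

```md
## Custom block quotes
<!-- mdbook renders this header with the anchor: #custom-block-quotes -->

<!-- Elsewhere in the book, link straight to that section: -->
See [Custom block quotes](./contributing.md#custom-block-quotes).
```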

Favor creating new pages, instead of adding unrelated content to an already existing page

This makes it easier to find content through the Summary.

Custom elements

Tabs

Tabs are useful for providing small code examples of Glean's APIs for each SDK. A tabs section starts with the tab_header include and ends with the tab_footer include. Each tab is declared in between these include statements.

Each tab's content is placed inside an HTML div tag with the data-lang and class="tab" attributes. The data-lang attribute contains the title of the tab. Titles must match across tabs for the same SDK: whenever a user clicks on a tab with a specific title, all tabs with that same title are opened by default, until the user clicks on a tab with a different title.

Every tab section should contain tabs for all Glean SDKs, even if an SDK does not provide the API in question. In that case, the tab div should still be present, without any inner HTML, and the tab will be rendered in a disabled state.

These are the tabs every tab section is expected to contain, in order:

  • Kotlin
  • Java
  • Swift
  • Python
  • Rust
  • JavaScript
  • Firefox Desktop

Finally, here is example code for a tabs section:

{{#include ../../shared/tab_header.md}}
<div data-lang="Kotlin" class="tab">
  Kotlin information...
</div>
<div data-lang="Java" class="tab">
  Java information...
</div>
<div data-lang="Swift" class="tab">
  Swift information...
</div>
<div data-lang="Python" class="tab">
  Python information...
</div>
<div data-lang="Rust" class="tab">
  Rust information...
</div>
<!--
  In this example, JavaScript and Firefox Desktop
  would show up as disabled in the final page.
-->
<div data-lang="JavaScript" class="tab"></div>
<div data-lang="Firefox Desktop" class="tab"></div>
{{#include ../../shared/tab_footer.md}}

And this is how those tabs will look:

Kotlin information...
Java information...
Swift information...
Python information...
Rust information...

Tab tooltips

Tabs in a disabled state, i.e. tabs that do not have any content, will show a tooltip when hovered.

By default, this tooltip shows the message <lang> does not provide this API. The following data-* attributes can be used to modify this message.

data-bug

This attribute expects a Bugzilla bug number. When this attribute is added, a link to the provided bug is added to the tooltip text.

data-info

This attribute expects free-form text or valid HTML. Be careful when adding long text here. If the text would be too long, consider adding it as an actual section or paragraph on the page instead of as a tooltip.

This is how you can use the above attributes.

{{#include ../../shared/tab_header.md}}
...

<!-- No attribute, default text will show up. -->
<div data-lang="Rust" class="tab"></div>
<!-- data-bug attribute, default text will show up + link to bug. -->
<div data-lang="JavaScript" class="tab" data-bug="000000"></div>
<!-- data-info attribute, free form text will show up. -->
<div data-lang="Firefox Desktop" class="tab" data-info="Hello, Glean world!"></div>
{{#include ../../shared/tab_footer.md}}

And this is how each tooltip is rendered.

Custom block quotes

Sometimes it is necessary to bring attention to a special piece of information, or simply to provide extra context related to a given text.

In order to do that, there are three custom block quote formats available.

Info quote

An information block quote format, to provide useful extra context for a given text.

Warning quote

A warning block quote format, to provide useful warning related to a given text.

Stop quote

A stronger warning block quote format, to provide useful warning related to a given text in a more emphatic format. Use these sparingly.

To include such quotes, you can again use mdbook include statements.

For the above quotes, this is the corresponding code.

{{#include ../../shared/blockquote-info.html}}

##### Info quote

> An information blockquote format, to provide useful extra context for a given text.

{{#include ../../shared/blockquote-warning.html}}

##### Warning quote

> A warning blockquote format, to provide useful warning related to a given text.

{{#include ../../shared/blockquote-stop.html}}

##### Stop quote

> A stronger warning blockquote format, to provide useful warning related to a given text in a
> more emphatic format. Use these sparingly.

It is possible to use any header level with these special block quotes, or no header at all.