Introduction
The Glean SDKs are modern cross-platform telemetry client libraries and are a part of the Glean project.
The Glean SDKs are available for several programming languages and development environments. Each SDK aims to contain the same group of features with similar, but idiomatic APIs.
To learn more about each SDK, refer to the SDKs overview page.
To get started adding Glean to your project, choose one of the following guides:
- Kotlin: Get started adding Glean to an Android application or library.
- Swift: Get started adding Glean to an iOS application or library.
- Python: Get started adding Glean to any Python project.
- Rust: Get started adding Glean to any Rust project or library.
- JavaScript: Get started adding Glean to a website, web extension or Node.js project.
- QML: Get started adding Glean to a Qt/QML application or library.
- Server: Get started adding Glean to a server-side application.
For development documentation on the Glean SDK, refer to the Glean SDK development book.
Sections
User Guides
This section of the book contains step-by-step guides and essays detailing how to achieve specific tasks with each Glean SDK.
It contains guides on the first steps of integrating Glean into your project, choosing the right metric type for you, debugging products that use Glean and Glean's built-in error reporting mechanism.
If you want to start using Glean to report data, this is the section you should read.
API Reference
This section of the book contains reference pages for Glean's user-facing APIs.
If you are looking for information on a specific Glean API, this is the section you should check out.
SDK Specific Information
This section contains guides and essays regarding specific usage information and possibilities in each Glean SDK.
Check out this section for more information on the SDK you are using.
Appendix
Glossary
In this book we use a lot of Glean specific terminology. In the glossary, we go through many of the terms used throughout this book and describe exactly what we mean when we use them.
Changelog
This section contains detailed notes about changes in Glean, per release.
This Week in Glean
“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.
Contribution Guidelines
This section contains detailed information on where and how to include new content to this book.
Contact
To contact the Glean team you can:
- Find us in the #glean channel on chat.mozilla.org.
- To report issues or request changes, file a bug in Bugzilla in Data Platform & Tools :: Glean: SDK.
- Send an email to glean-team@mozilla.com.
- The Glean SDKs team is: :janerik, :dexter, :travis, :chutten, :perrymcmanis.
License
The Glean SDKs Source Code is subject to the terms of the Mozilla Public License v2.0. You can obtain a copy of the MPL at https://mozilla.org/MPL/2.0/.
Adding Glean to your project
This page describes the steps for adding Glean to a project. This does not include the steps for adding new metrics or pings to an existing Glean integration. If that is what you are looking for, refer to the Adding new metrics or the Adding new pings guide.
Glean integration checklist
The Glean integration checklist can help to ensure your Glean SDK-using product is meeting all of the recommended guidelines.
Products (applications or libraries) using a Glean SDK to collect telemetry must:
- Integrate the Glean SDK into the build system. Since the Glean SDK does some code generation for your metrics at build time, this requires a few more steps than just adding a library.
- Go through the data review process for all newly collected data.
- Ensure that telemetry coming from automated testing or continuous integration is either not sent to the telemetry server or tagged with the automation tag using the sourceTag feature.
- At least one week before releasing your product, enable your product's application id and metrics to be ingested by the data platform (and, as a consequence, indexed by the Glean Dictionary).
Important consideration for libraries: For libraries that are adding Glean, you will need to indicate which applications use the library as a dependency so that the library metrics get correctly indexed and added to the products that consume the library. If the library is added to a new product later, it is necessary to file a new bug to add it as a dependency to that product in order for the library metrics to be collected along with the data from the new product.
Additionally, applications (but not libraries) must:
- Request a data review to add Glean to your application (since it can send data out of the box).
- Initialize Glean as early as possible at application startup.
- Provide a way for users to turn data collection off (e.g. providing settings to control Glean.setUploadEnabled()). The exact method used is application-specific.
Looking for an integration guide?
Step-by-step tutorials for each supported language/platform can be found in the specific integration guides:
Adding Glean to your Kotlin project
This page provides a step-by-step guide on how to integrate the Glean library into a Kotlin project.
Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved.
Currently, this SDK only supports the Android platform.
Setting up the dependency
The Glean Kotlin SDK is published on maven.mozilla.org.
To use it, you need to add the following to your project's top-level build file, in the allprojects block (see e.g. the Glean SDK's own build.gradle):
repositories {
maven {
url "https://maven.mozilla.org/maven2"
}
}
Each module that uses the Glean Kotlin SDK needs to specify it in its build file, in the dependencies block.
Add this to your Gradle configuration:
implementation "org.mozilla.components:service-glean:{latest-version}"
Pick the correct version
The {latest-version} placeholder in the snippet above should be replaced with the version of Android Components used by the project.
The Glean Kotlin SDK is released as part of android-components. Therefore, it follows android-components' versions. The android-components release page can be used to determine the latest version.
For example, if version 33.0.0 is used, then the include directive becomes:
implementation "org.mozilla.components:service-glean:33.0.0"
Size impact on the application APK
The Glean Kotlin SDK APK ships binary libraries for all the supported platforms. Each library file measures about 600KB. If the final APK size of the consuming project is a concern, please enable ABI splits.
Dependency for local testing
Due to its use of a native library, you will need additional setup to allow local testing.
First add a new configuration to your build.gradle, just before your dependencies:
configurations {
jnaForTest
}
Then add the following lines to your dependencies block:
jnaForTest "net.java.dev.jna:jna:5.6.0@jar"
testImplementation files(configurations.jnaForTest.copyRecursive().files)
testImplementation "org.mozilla.telemetry:glean-forUnitTests:${project.ext.glean_version}"
Note: Always use org.mozilla.telemetry:glean-forUnitTests.
This package is standalone and its version will be exported from the main Glean package automatically.
Setting up metrics and pings code generation
In order for the Glean Kotlin SDK to generate an API for your metrics, two Gradle plugins must be included in your build:
- The Glean Gradle plugin
- JetBrains' Python envs plugin
The Glean Gradle plugin is distributed through Mozilla's Maven, so we need to tell your build where to look for it by adding the following to the top of your build.gradle:
buildscript {
    repositories {
        // Include the next clause if you are tracking snapshots of android components
        maven {
            url "https://snapshots.maven.mozilla.org/maven2"
        }
        maven {
            url "https://maven.mozilla.org/maven2"
        }
    }
    dependencies {
        classpath "org.mozilla.components:tooling-glean-gradle:{android-components-version}"
    }
}
Important
As above, the {android-components-version} placeholder in the snippet above should be replaced with the version number of Android Components used in your project.
The JetBrains Python plugin is distributed in the Gradle plugin repository, so it can be included with:
plugins {
id "com.jetbrains.python.envs" version "0.0.26"
}
Right before the end of the same file, we need to apply the Glean Gradle plugin.
Set any additional parameters to control the behavior of the Glean Gradle plugin before calling apply plugin.
// Optionally, set any parameters to send to the plugin.
ext.gleanGenerateMarkdownDocs = true
apply plugin: "org.mozilla.telemetry.glean-gradle-plugin"
Rosetta 2 required on Apple Silicon
On Apple Silicon machines (M1/M2/M3 MacBooks and iMacs) Rosetta 2 is required for the bundled Python. See the Apple documentation about Rosetta 2 and Bug 1775420 for details.
You can install it with softwareupdate --install-rosetta
Offline builds
The Glean Gradle plugin has limited support for offline builds of applications that use the Glean SDK.
Adding Glean to your Swift project
This page provides a step-by-step guide on how to integrate the Glean library into a Swift project.
Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved.
Currently, this SDK only supports the iOS platform.
Requirements
- Python >= 3.8.
Setting up the dependency
The Glean Swift SDK can be consumed as a Swift Package. In your Xcode project add a new package dependency:
https://github.com/mozilla/glean-swift
Use the dependency rule "Up to Next Major Version". Xcode will automatically fetch the latest version for you.
The Glean library will be automatically available to your code when you import it:
import Glean
Setting up metrics and pings code generation
The metrics.yaml file is parsed at build time and Swift code is generated.
Add a new metrics.yaml file to your Xcode project.
Follow these steps to automatically run the parser at build time:
- Download the sdk_generator.sh script from the Glean repository:
  https://raw.githubusercontent.com/mozilla/glean/{latest-release}/glean-core/ios/sdk_generator.sh
  Pick the correct version: as above, the {latest-release} placeholder should be replaced with the version number of the Glean Swift SDK release used in this project.
- Add the sdk_generator.sh file to your Xcode project.
- On your application targets' Build Phases settings tab, click the + icon and choose New Run Script Phase. Create a Run Script in which you specify your shell (e.g. /bin/sh), then add the following contents to the script area below the shell:
  bash $PWD/sdk_generator.sh
  Set additional options to control the behavior of the script.
- Add the paths to your metrics.yaml and (optionally) pings.yaml and tags.yaml under "Input Files":
  $(SRCROOT)/{project-name}/metrics.yaml
  $(SRCROOT)/{project-name}/pings.yaml
  $(SRCROOT)/{project-name}/tags.yaml
- Add the path to the generated code file under "Output Files":
  $(SRCROOT)/{project-name}/Generated/Metrics.swift
- If you are using Git, add the following lines to your .gitignore file:
  .venv/
  {project-name}/Generated
  This will ignore files that are generated at build time by the sdk_generator.sh script. They don't need to be kept in version control, as they can be re-generated from your metrics.yaml and pings.yaml files.
Glean and embedded extensions
Metric collection is a no-op in application extensions and Glean will not run. Since extensions run in a separate sandbox and process from the application, Glean would run in an extension as if it were a completely separate application with different client ids and storage. This complicates things because Glean doesn’t know or care about other processes. Because of this, Glean is purposefully prevented from running in an application extension and if metrics need to be collected from extensions, it's up to the integrating application to pass the information to the base application to record in Glean.
Adding Glean to your Python project
This page provides a step-by-step guide on how to integrate the Glean library into a Python project.
Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved.
Setting up the dependency
We recommend using a virtual environment for your work to isolate the dependencies for your project. There are many popular abstractions on top of virtual environments in the Python ecosystem which can help manage your project dependencies.
The Glean Python SDK currently has prebuilt wheels on PyPI for Windows (i686 and x86_64), Linux/glibc (x86_64) and macOS (x86_64).
For other platforms, including BSDs or Linux distributions that don't use glibc (such as Alpine Linux), the glean_sdk package will be built from source on your machine.
This requires that Cargo and Rust are already installed.
The easiest way to do this is through rustup.
Once you have your virtual environment set up and activated, you can install the Glean Python SDK into it using:
$ python -m pip install glean_sdk
Important
Installing Python wheels is still a rapidly evolving feature of the Python package ecosystem. If the above command fails, try upgrading pip first:
python -m pip install --upgrade pip
Important
The Glean Python SDK makes extensive use of type annotations to catch type-related errors at build time. We highly recommend adding mypy to your continuous integration workflow to catch errors related to type mismatches early.
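As a hypothetical illustration (the helper name below is invented, not part of the Glean API), an annotated function lets mypy flag a bad call at check time instead of at runtime:

```python
# Illustrative only: an annotated helper that mypy can check.
# `record_search_count` is a hypothetical name, not a Glean API.
def record_search_count(count: int) -> str:
    """Return a description of what would be recorded."""
    return f"recorded {count} searches"

# mypy accepts this call...
ok = record_search_count(3)
# ...and would reject record_search_count("3") as an
# incompatible argument type before the code ever runs.
```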
Consuming YAML registry files
For Python, the metrics.yaml file must be available and loaded at runtime.
If your project is a script (i.e. just Python files in a directory), you can load the metrics.yaml file using:
from glean import load_metrics
metrics = load_metrics("metrics.yaml")
# Use a metric on the returned object
metrics.your_category.your_metric.set("value")
If your project is a distributable Python package, you need to include the metrics.yaml file using one of the myriad ways to include data in a Python package, and then use pkg_resources.resource_filename() to get the filename at runtime.
from glean import load_metrics
from pkg_resources import resource_filename
metrics = load_metrics(resource_filename(__name__, "metrics.yaml"))
# Use a metric on the returned object
metrics.your_category.your_metric.set("value")
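On Python 3.9 and newer, the standard library's importlib.resources can serve the same purpose as pkg_resources, which newer setuptools releases deprecate. A minimal sketch, assuming metrics.yaml ships as package data inside your package:

```python
# Sketch: resolve a packaged metrics.yaml with importlib.resources
# instead of pkg_resources. Assumes the file ships as package data.
from importlib.resources import files

def metrics_path(package: str) -> str:
    # files() returns a Traversable rooted at the package; for packages
    # installed as plain directories, str() yields a usable filesystem path.
    return str(files(package) / "metrics.yaml")

# Usage (hypothetical package name):
#   metrics = load_metrics(metrics_path("your_package"))
```

For packages installed as zip archives, importlib.resources.as_file() can materialize the resource to a real path first.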
Automation steps
Documentation
The documentation for your application or library's metrics and pings is written in metrics.yaml and pings.yaml.
For Mozilla projects, this SDK documentation is automatically published on the Glean Dictionary. For non-Mozilla products, it is recommended to generate markdown-based documentation of your metrics and pings into the repository. For most languages and platforms, this transformation can be done automatically as part of the build. However, for some SDKs the integration to automatically generate docs is an additional step.
The Glean Python SDK provides a command-line tool for automatically generating markdown documentation from your metrics.yaml and pings.yaml files. To perform that translation, run glean_parser's translate command:
python3 -m glean_parser translate -f markdown -o docs metrics.yaml pings.yaml
To get more help about the commandline options:
python3 -m glean_parser translate --help
We recommend integrating this step into your project's documentation build. The details of that integration are left to you, since they depend on the documentation tool being used and how your project is set up.
Metrics linting
Glean includes a "linter" for metrics.yaml and pings.yaml files, called glinter, that catches a number of common mistakes in these files.
As part of your continuous integration, you should run the following on your metrics.yaml and pings.yaml files:
python3 -m glean_parser glinter metrics.yaml pings.yaml
Parallelism
Most Glean SDKs use a separate worker thread to do most of their work, including any I/O. This thread is fully managed by the SDK as an implementation detail. Therefore, users should feel free to use the Glean SDKs wherever they are most convenient, without worrying about the performance impact of updating metrics and sending pings.
Since the Glean SDKs perform disk and networking I/O, they try to do as much of their work as possible on separate threads and processes. Since there are complex trade-offs and corner cases to support Python parallelism, it is hard to design a one-size-fits-all approach.
Default behavior
When using the Python SDK, most of Glean's work is done on a separate thread, managed by the SDK itself. The SDK releases the Global Interpreter Lock (GIL) for most of its operations, so your application's threads should not be in contention with Glean's worker thread.
The Glean Python SDK installs an atexit handler so that its worker thread can cleanly finish when your application exits. This handler will wait up to 30 seconds for any pending work to complete.
By default, ping uploading is performed in a separate child process. This process will continue to upload any pending pings even after the main process shuts down. This is important for commandline tools where you want to return control to the shell as soon as possible and not be delayed by network connectivity.
Cases where subprocesses aren't possible
The default approach may not work with applications built using PyInstaller or similar tools which bundle an application together with a Python interpreter, making it impossible to spawn new subprocesses of that interpreter. For these cases, there is an option to ensure that ping uploading occurs in the main process. To do this, set the allow_multiprocessing parameter on the glean.Configuration object to False.
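The paragraph above can be expressed as a tiny configuration helper. allow_multiprocessing is the Configuration parameter named above, but the sys.frozen check is an assumption (PyInstaller sets it on bundled apps), so treat this as a sketch rather than official guidance:

```python
# Sketch: choose Glean configuration kwargs based on whether the app
# is "frozen" (bundled by PyInstaller, which sets sys.frozen). The
# detection heuristic is an assumption, not Glean's own logic.
import sys

def glean_config_kwargs() -> dict:
    frozen = bool(getattr(sys, "frozen", False))
    # In a frozen app, spawning a subprocess of the interpreter may be
    # impossible, so keep ping uploading in the main process.
    return {"allow_multiprocessing": not frozen}

# Usage (hypothetical): glean.Configuration(**glean_config_kwargs())
```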
Using the multiprocessing module
Additionally, the default approach does not work if your application uses the multiprocessing module for parallelism. The Glean Python SDK cannot wait to finish its work in a multiprocessing subprocess, since atexit handlers are not supported in that context. Therefore, if the Glean Python SDK detects that it is running in a multiprocessing subprocess, all of the work that would normally run on a worker thread will run on the main thread.
In practice, this should not be a performance issue: since the work is already in a subprocess, it will not block the main process of your application.
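A standalone sketch of that fallback logic (the process-name check here is an assumption for illustration; Glean's real detection is internal):

```python
# Sketch: run a task inline when inside a multiprocessing child process
# (where atexit handlers will not fire), otherwise hand it to a worker
# thread. Not Glean's actual code.
import multiprocessing
import threading

def dispatch(task) -> None:
    if multiprocessing.current_process().name != "MainProcess":
        task()  # inline: we are in a multiprocessing subprocess
    else:
        threading.Thread(target=task, daemon=True).start()
```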
Adding Glean to your Rust project
This page provides a step-by-step guide on how to integrate the Glean library into a Rust project.
Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved.
Setting up the dependency
The Glean Rust SDK is published on crates.io.
Add it to your dependencies in Cargo.toml:
[dependencies]
glean = "50.0.0"
Setting up metrics and pings code generation
glean-build is in beta
The glean-build crate is new and currently in beta. It can be used as a git dependency. Please file a bug if it does not work for you.
At build time you need to generate the metrics and ping API from your definition files.
Add the glean-build crate as a build dependency in your Cargo.toml:
[build-dependencies]
glean-build = { git = "https://github.com/mozilla/glean" }
Then add a build.rs file next to your Cargo.toml and call the builder:
use glean_build::Builder;
fn main() {
Builder::default()
.file("metrics.yaml")
.file("pings.yaml")
.generate()
.expect("Error generating Glean Rust bindings");
}
Ensure your metrics.yaml and pings.yaml files are placed next to your Cargo.toml, or adjust the path in the code above. You can also leave out either of the files.
Include the generated code
glean-build will generate a glean_metrics.rs file that needs to be included in your source code. To do so, add the following lines to your src/lib.rs file:
mod metrics {
include!(concat!(env!("OUT_DIR"), "/glean_metrics.rs"));
}
Alternatively, create src/metrics.rs (or a file with a different name) containing only the include line:
include!(concat!(env!("OUT_DIR"), "/glean_metrics.rs"));
Then add mod metrics; to your src/lib.rs file.
Use the metrics
In your code you can then access generated metrics nested within their category under the metrics module (or your chosen name):
metrics::your_category::metric_name.set(true);
See the metric API reference for details.
Adding Glean to your JavaScript project
This page provides a step-by-step guide on how to integrate the Glean JavaScript SDK into a JavaScript project.
Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved.
The Glean JavaScript SDK allows integration with three distinct JavaScript environments: websites, web extensions and Node.js.
Requirements
- Node.js >= 12.20.0
- npm >= 7.0.0
- Webpack >= 5.34.0
- Python >= 3.8
  - The glean command requires Python to download glean_parser, which is a Python library.
Browser extension specific requirements
- webextension-polyfill >= 0.8.0
  - Glean.js assumes a Promise-based browser API. Firefox provides such an API by default; other browsers may require a polyfill library such as webextension-polyfill when using Glean in browser extensions.
- Host permissions for the telemetry server
  - Only necessary if the defined server endpoint denies cross-origin requests.
  - Not necessary if using the default https://incoming.telemetry.mozilla.org.
- "storage" API permissions
Browser extension example configuration
The manifest.json file of the sample browser extension available in the mozilla/glean.js repository provides an example of how to define the above permissions, as well as how and where to load the webextension-polyfill script.
Setting up the dependency
The Glean JavaScript SDK is distributed as an npm package: @mozilla/glean. This package has different entry points to access the different SDKs.
- @mozilla/glean/web gives access to the websites SDK
- @mozilla/glean/webext gives access to the web extension SDK
- @mozilla/glean/node gives access to the Node.js SDK [1]
[1] The Node.js SDK does not have persistent storage yet. This means Glean does not persist state across application runs. For updates on the implementation of this feature in Node.js, follow Bug 1728807.
Install Glean in your JavaScript project by running:
npm install @mozilla/glean
Then import Glean into your project:
// Importing the Glean JavaScript SDK for use in **web extensions**
//
// esm
import Glean from "@mozilla/glean/webext";
// cjs
const { default: Glean } = require("@mozilla/glean/webext");
// Importing the Glean JavaScript SDK for use in **websites**
//
// esm
import Glean from "@mozilla/glean/web";
// cjs
const { default: Glean } = require("@mozilla/glean/web");
// Importing the Glean JavaScript SDK for use in **Node.js**
//
// esm
import Glean from "@mozilla/glean/node";
// cjs
const { default: Glean } = require("@mozilla/glean/node");
Browser extension security considerations
In case of a privilege-escalation attack into the context of a web extension using Glean, malicious scripts would be able to call Glean APIs or use the browser.storage.local APIs directly. That would be a risk to Glean data, but not one caused by Glean. Glean-using extensions should be careful not to relax the default Content-Security-Policy, which generally prevents these attacks.
Common import errors
"Cannot find module '@mozilla/glean'"
Glean.js does not have a main package entry point. Instead, it relies on a series of entry points depending on the platform you are targeting. In order to import Glean use:
import Glean from '@mozilla/glean/{your-platform}'
"Module not found: Error: Can't resolve '@mozilla/glean/webext' in '...'"
Glean.js relies on Node.js' subpath exports feature to define multiple package entry points.
Please make sure that you are using a supported Node.js runtime and also make sure the tools you are using support this Node.js feature.
Setting up metrics and pings code generation
In JavaScript, the metrics and pings definitions must be parsed at build time.
The @mozilla/glean package exposes glean_parser through the glean script. To parse your YAML registry files using this script, define a new script in your package.json file:
{
// ...
"scripts": {
// ...
"build:glean": "glean translate path/to/metrics.yaml path/to/pings.yaml -f javascript -o path/to/generated",
// Or, if you are building for a Typescript project
"build:glean": "glean translate path/to/metrics.yaml path/to/pings.yaml -f typescript -o path/to/generated"
}
}
Then run this script by calling:
npm run build:glean
Automation steps
Documentation
Prefer using the Glean Dictionary
While it is still possible to generate Markdown documentation, public Mozilla projects should rely on the Glean Dictionary for documentation. Your product will be automatically indexed by the Glean Dictionary after it is enabled in the pipeline.
One of the commands provided by glean_parser allows users to generate Markdown documentation based on the contents of their YAML registry files. To perform that translation, use the translate command with a different output format, as shown below.
In your package.json, define the following script:
{
// ...
"scripts": {
// ...
"docs:glean": "glean translate path/to/metrics.yaml path/to/pings.yaml -f markdown -o path/to/docs",
}
}
Then run this script by calling:
npm run docs:glean
YAML registry files linting
Glean includes a "linter" for the YAML registry files, called glinter, that catches a number of common mistakes in these files. To run the linter use the glinter command.
In your package.json, define the following script:
{
// ...
"scripts": {
// ...
"lint:glean": "glean glinter path/to/metrics.yaml path/to/pings.yaml",
}
}
Then run this script by calling:
npm run lint:glean
Adding Glean to your Qt/QML project
This page provides a step-by-step guide on how to integrate the Glean.js library into a Qt/QML project.
Nevertheless, this is just one of the required steps for integrating Glean successfully into a project. Check out the full Glean integration checklist for a comprehensive list of all the steps involved.
Requirements
- Python >= 3.7
- Qt >= 5.15.2
Setting up the dependency
Glean.js' Qt/QML build is distributed as an asset with every Glean.js release. In order to download the latest version visit https://github.com/mozilla/glean.js/releases/latest.
Glean.js is a QML module, so extract the contents of the downloaded file wherever you keep your other modules. Make sure that the directory the module is placed in is part of the QML Import Path.
After doing that, import Glean like so:
import org.mozilla.Glean <version>
Picking the correct version
The <version> number is the version of the release you downloaded minus its patch version. For example, if you downloaded Glean.js version 0.15.0, your import statement will be:
import org.mozilla.Glean 0.15
Consuming YAML registry files
Qt/QML projects need to setup metrics and pings code generation manually.
First, install the glean_parser CLI tool:
pip install glean_parser
Make sure you have the correct glean_parser version!
Qt/QML support was added to glean_parser in version 3.5.0.
Then call glean_parser from the command line:
glean_parser translate path/to/metrics.yaml path/to/pings.yaml \
-f javascript \
-o path/to/generated/files \
--option platform=qt \
--option version=0.15
The translate command takes a list of YAML registry file paths and an output path, and parses the given YAML registry files into QML JavaScript files.
The generated folder will be a QML module. Make sure wherever the generated module is placed is also part of the QML Import Path.
Notice that when building for Qt/QML it is mandatory to give the translate command two extra options.
--option platform=qt
This option is what changes the output file from standard JavaScript to QML JavaScript.
--option version=<version>
The version passed to this option will be the version of the generated QML module.
Automation steps
Documentation
Prefer using the Glean Dictionary
While it is still possible to generate Markdown documentation, public Mozilla projects should rely on the Glean Dictionary for documentation. Your product will be automatically indexed by the Glean Dictionary after it is enabled in the pipeline.
One of the commands provided by glean_parser allows users to generate Markdown documentation based on the contents of their YAML registry files. To perform that translation, use the translate command with a different output format, as shown below.
glean_parser translate path/to/metrics.yaml path/to/pings.yaml \
-f markdown \
-o path/to/docs
YAML registry files linting
glean_parser includes a "linter" for the YAML registry files, called glinter, that catches a number of common mistakes in these files. To run the linter use the glinter command.
glean_parser glinter path/to/metrics.yaml path/to/pings.yaml
Debugging
By default, the Glean.js QML module uses a minified version of the Glean.js library. It may be useful to use the unminified version of the library in order to get proper line numbers and function names when debugging crashes.
The bundle provided contains the unminified version of the library.
In order to use it, open the glean.js file inside the included module and change the line:
.import "glean.lib.js" as Glean
to
.import "glean.dev.js" as Glean
Troubleshooting
submitPing may cause crashes when debugging on iOS devices
The submitPing function hits a known bug in the Qt JavaScript interpreter.
This bug only reproduces on physical iOS devices; it does not happen in emulators. It also only happens when using the Qt debug library for iOS.
There is no way around this bug other than avoiding the Qt debug library for iOS altogether until it is fixed. Refer to the Qt debugging documentation on how to do that.
Adding Glean to your Server Application
Glean enables the collection of behavioral metrics through events in server environments. This method does not rely on the Glean SDK but utilizes the Glean parser to generate native code for logging events in a standard format compatible with the ingestion pipeline.
Differences from using the Glean SDK
This implementation of telemetry collection in server environments has some differences compared to using Glean SDK in client applications and Glean.js in the frontend of web applications. Primarily, in server environments the focus is exclusively on event-based metrics, diverging from the broader range of metric types supported by Glean. Additionally, there is no need to incorporate Glean SDK as a dependency in server applications. Instead, the Glean parser is used to generate native code for logging events.
When to use server-side collection
This method is intended for collecting user-level behavioral events in server environments. It is not suitable for collecting system-level metrics or performance data, which should be collected using cloud monitoring tools.
How to add Glean server side collection to your service
- Integrate glean_parser into your build system. Follow the instructions for other SDK-enabled platforms, e.g. JavaScript, and use a server outputter to generate logging code. glean_parser currently supports Go, JavaScript/TypeScript, Python, and Ruby.
- Define your metrics in metrics.yaml.
- Request a data review for the collected data.
- Add your product to probe-scraper.
How to add a new event to your server side collection
Follow the standard Glean SDK guide for adding metrics to the metrics.yaml
file.
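Conceptually, the generated logging code wraps each event in a ping-like JSON envelope and emits it as a single structured log line for the ingestion pipeline to pick up. The following Python sketch illustrates that shape only; the function name and envelope fields here are illustrative assumptions, not the exact schema glean_parser generates — always use the generated code in a real service.

```python
import json
import uuid
from datetime import datetime, timezone

def log_server_event(category, name, extra, application_id):
    """Illustrative sketch of server-side Glean event logging.

    Real projects must use the code generated by glean_parser;
    the field names below are illustrative, not the exact schema.
    """
    now = datetime.now(timezone.utc).isoformat()
    envelope = {
        "document_namespace": application_id,
        "document_type": "events",
        "document_id": str(uuid.uuid4()),  # unique per ping, for deduplication
        "payload": {
            "events": [
                {"category": category, "name": name, "extra": extra}
            ],
            "ping_info": {"seq": 0, "start_time": now, "end_time": now},
        },
    }
    # Emit as one structured log line for the pipeline to scrape.
    print(json.dumps(envelope))
    return envelope

event = log_server_event(
    "backend", "object_uploaded", {"object_type": "image"}, "my-server-app"
)
```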
Technical details - ingestion
For more technical details on how ingestion works, see the Confluence page.
Enabling data to be ingested by the data platform
This page provides a step-by-step guide on how to enable data from your product to be ingested by the data platform.
This is just one of the required steps for integrating Glean successfully into a product. Check the full Glean integration checklist for a comprehensive list of all the steps involved in doing so.
Requirements
- GitHub Workflows
Add your product to probe scraper
At least one week before releasing your product, file a data engineering bug to enable your product's application id.
This will result in your product being added to probe scraper's
repositories.yaml
.
Validate and publish metrics
After your product has been enabled, you must submit commits to probe scraper to validate and publish metrics.
Metrics will only be published from branches defined in probe scraper's repositories.yaml
, or the Git default branch if not explicitly configured.
This should happen on every CI run for the specified branches.
Nightly jobs will then automatically add published metrics to the Glean Dictionary and other data platform tools.
Enable the GitHub Workflow by creating a new file .github/workflows/glean-probe-scraper.yml
with the following content:
---
name: Glean probe-scraper
on: [push, pull_request]
jobs:
  glean-probe-scraper:
    uses: mozilla/probe-scraper/.github/workflows/glean.yaml@main
Add your library to probe scraper
At least one week before releasing your product, file a data engineering bug to add your library to probe scraper
and be scraped for metrics as a dependency of another product.
This will result in your library being added to probe scraper's
repositories.yaml
.
Integrating Glean for product managers
This chapter provides guidance for planning the work involved in integrating Glean into your product, for internal Mozilla customers. For a technical coding perspective, see adding Glean to your project.
Glean is the standard telemetry platform required for all new Mozilla products. While there are some upfront costs to integrating Glean in your product, this pays off in easier long-term maintenance and a rich set of self-serve analysis tools. The Glean team is happy to support your telemetry integration and make it successful. Find us in #glean or email glean-team@mozilla.com.
Building a telemetry plan
The Glean SDKs provide support for answering basic product questions out-of-the-box, such as daily active users, product version and platform information.
However, it is also a good idea to have a sense of any additional product-specific questions you are trying to answer with telemetry, and, when possible, in collaboration with a data scientist.
This of course helps your own planning, but it is also invaluable for the Glean team: understanding the ultimate goals of your product's telemetry lets us ensure the design will meet those goals and identify any new features that may be required.
It is best to frame this document in the form of questions and use cases rather than as specific data points and schemas.
Integrating Glean into your product
The technical steps for integrating Glean in your product are documented in its own chapter for supported platforms. We recommend having a member of the Glean team review this integration to catch any potential pitfalls.
(Optional) Adapting Glean to your platform
The Glean SDKs are a collection of cross-platform libraries and tools that facilitate the collection of Glean-conforming telemetry from applications.
Consult the list of the currently supported platforms and languages.
If your product's tech stack isn't currently supported, please reach out to the Glean team: significant work will be required to create a new integration.
In previous efforts, this has ranged from 1 to 3 months FTE of work, so it is important to plan for this work well in advance.
While the first phase of this work generally requires the specialized expertise of the Glean team, the second half can benefit from outside developers to move faster.
(Optional) Designing ping submission
The Glean SDKs periodically send telemetry to our servers in a bundle known as a "ping".
For mobile applications with common interaction models, such as web browsers, the Glean SDKs provide basic pings out-of-the-box.
For other kinds of products, it may be necessary to carefully design what triggers the submission of a ping.
It is important to have a solid telemetry plan (see above) so we can make sure the ping submission will be able to answer the telemetry questions required of the product.
(Optional) New metric types
The Glean SDKs support a number of different metric types.
Metric types provide "guardrails" to make sure that telemetry is being collected correctly, and to present the data at analysis time more automatically.
Occasionally, products need to collect data that doesn't fit neatly into one of the available metric types.
Glean has a process to request and introduce more metric types and we will work with you to design something appropriate.
This design and implementation work is at least 4 weeks, though we are working on the foundation to accelerate that.
Having a telemetry plan (see above) will help to identify this work early.
Integrating Glean into GLAM
To use GLAM for analysis of your application's data, file a ticket in the GLAM repository.
A data engineer from the GLAM team will reach out to you if further information is required.
Adding new metrics
Table of Contents
- Process overview
- Choosing a metric type
- For how long do you need to collect this data?
- When should the Glean SDK automatically clear the measurement?
- What should this new metric be called?
- What if none of these metric types is the right fit?
- How do I make sure my metric is working?
- Adding the metric to the metrics.yaml file
- Using the metric from your code
Process overview
When adding a new metric, the process is:
- Consider the question you are trying to answer with this data, and choose the metric type and parameters to use.
- Add a new entry to metrics.yaml.
- Add code to your project to record into the metric by calling the Glean SDK.
Important: Any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.
Choosing a metric type
The following is a set of questions to ask about the data being collected to help better determine which metric type to use.
Is it a single measurement?
If the value is true or false, use a boolean metric.
If the value is a string, use a string metric. For example, to record the name of the default search engine.
Beware: string metrics are exceedingly general, and you are probably best served by selecting the most specific metric for the job, since you'll get better error checking and richer analysis tools for free. For example, avoid storing a number in a string metric; you probably want a counter metric instead.
If you need to store multiple string values in a metric, use a string list metric. For example, you may want to record the list of other Mozilla products installed on the device.
For all of the metric types in this section that measure single values, it is especially important to consider how the lifetime of the value relates to the ping it is being sent in. Since these metrics don't perform any aggregation on the client side, when a ping containing the metric is submitted, it will contain only the "last known" value for the metric, potentially resulting in data loss. There is further discussion of metric lifetimes below.
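To make the "last known value" behavior concrete, here is a small, hypothetical model (not Glean SDK code) of a non-aggregating, ping-lifetime metric: it is set three times between ping submissions, and only the final value reaches the pipeline.

```python
class SingleValueMetric:
    """Hypothetical model of a non-aggregating metric (e.g. a string)
    with ping lifetime: each set() overwrites, submit() clears."""

    def __init__(self):
        self._value = None

    def set(self, value):
        self._value = value  # overwrites any earlier value

    def submit(self):
        value, self._value = self._value, None  # cleared after submission
        return value

default_engine = SingleValueMetric()
default_engine.set("engine-a")
default_engine.set("engine-b")
default_engine.set("engine-c")

# Only the last known value makes it into the ping; the first
# two values are silently lost.
assert default_engine.submit() == "engine-c"
# The next ping contains no value at all until set() is called again.
assert default_engine.submit() is None
```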
Are you measuring user behavior?
For tracking user behavior, it is usually meaningful to know the order of events that lead to the use of a feature. Therefore, for user behavior, an event metric is usually the best choice.
Be aware, however, that events can be particularly expensive to transmit, store and analyze, so they should not be used for high-frequency measurements, though this is less of a concern in server environments.
Are you counting things?
If you want to know how many times something happened, use a counter metric. If you are counting a group of related things, or you don't know what all of the things to count are at build time, use a labeled counter metric.
If you need to know how many times something happened relative to the number of times something else happened, use a rate metric.
If you need to know when the things being counted happened relative to other things, consider using an event.
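A rate is essentially two counters that are always sent together, so the ratio can be computed at analysis time. The following toy model is a sketch of that idea only (the method names mirror the rate API, but the class itself is hypothetical; real code uses the generated metric):

```python
class RateMetric:
    """Toy model of a rate metric: a numerator and denominator
    counted together so their ratio can be computed at analysis time."""

    def __init__(self):
        self.numerator = 0
        self.denominator = 0

    def add_to_numerator(self, amount=1):
        self.numerator += amount

    def add_to_denominator(self, amount=1):
        self.denominator += amount

# e.g. how often checkout fails relative to checkout attempts
checkout_failures = RateMetric()
for attempt_failed in [False, False, True, False]:
    checkout_failures.add_to_denominator()
    if attempt_failed:
        checkout_failures.add_to_numerator()

assert (checkout_failures.numerator, checkout_failures.denominator) == (1, 4)
```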
Are you measuring time?
If you need to record an absolute time, use a datetime metric. Datetimes are recorded in the user's local time, according to their device's real time clock, along with a timezone offset from UTC. Datetime metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question.
If you need to record how long something takes you have a few options.
If you need to measure the total time spent doing a particular task, look to the timespan metric. Timespan metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question. Note that this metric should only be used to measure time on a single thread. If multiple overlapping timespans are measured for the same metric, an invalid state error is recorded.
If you need to measure the relative occurrences of many timings, use a timing distribution. It builds a histogram of timing measurements, and is safe to record multiple concurrent timespans on different threads.
If you need to know the time between multiple distinct actions that aren't a simple "begin" and "end" pair, consider using an event.
For how long do you need to collect this data?
Think carefully about how long the metric will be needed, and set the expires
parameter to disable the metric at the earliest possible time.
This is an important component of Mozilla's lean data practices.
When the metric passes its expiration date (determined at build time), it will automatically stop collecting data.
When a metric's expiration is within 14 days, emails will be sent from telemetry-alerts@mozilla.com
to the notification_emails
addresses associated with the metric.
At that time, the metric should be removed, which involves removing it from the metrics.yaml
file and removing uses of it in the source code.
Removing a metric does not affect the availability of data already collected by the pipeline.
If the metric is still needed after its expiration date, it should go back for another round of data review to have its expiration date extended.
Important: Ensure that telemetry alerts are received and are reviewed in a timely manner. Expired metrics don't record any data, so extending or removing a metric should be done in time. Consider adding both a group email address and an individual who is responsible for this metric to the
notification_emails
list.
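Conceptually, the expiry check is a simple build-time date comparison. The sketch below is illustrative Python, not glean_parser's actual implementation; it assumes `expires` is either an ISO date or the literal string "never", and treats a metric as expired on its expiry date.

```python
from datetime import date

def metric_is_expired(expires, build_date=None):
    """Illustrative build-time expiry check: a metric whose
    `expires` date has passed stops collecting data.
    `expires` may also be the literal string "never"."""
    if expires == "never":
        return False
    if build_date is None:
        build_date = date.today()
    # Treated here as expiring on the given date (inclusive).
    return date.fromisoformat(expires) <= build_date

# A metric that expired before this (hypothetical) build date
# no longer records anything:
assert metric_is_expired("2019-06-01", build_date=date(2020, 1, 1))
assert not metric_is_expired("never")
```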
When should the Glean SDK automatically clear the measurement?
The lifetime
parameter of a metric defines when its value will be cleared. There are three lifetime options available:
ping
(default)
The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.
application
The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake is using the ping lifetime for these types of metrics: they will then only be included in the first ping sent during a particular run of the application.
user
NOTE: Reach out to the Glean team before using this.
The metric is part of the user's profile and will live as long as the profile lives.
This is often not the best choice unless the metric records a value that really needs
to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id
or the day the product was first executed. It is rare to use this lifetime outside of some metrics
that are built in to the Glean SDK.
While lifetimes are important to understand for all metric types, they are particularly important for the metric types that record single values and don't aggregate on the client (boolean
, string
, labeled_string
, string_list
, datetime
and uuid
), since these metrics will send the "last known" value and missing the earlier values could be a form of unintended data loss.
A lifetime example
Let's work through an example to see how these lifetimes play out in practice. Suppose we have a user preference, "turbo mode", which defaults to false, but the user can turn it to true at any time. We want to know when this flag is true so we can measure its effect on other metrics in the same ping. In the following diagram, we look at a time period that sends five pings across two separate runs of the application. We assume here that, like the Glean SDK's built-in metrics ping, the developer writing the metric isn't in control of when the ping is submitted.
In this diagram, the ping measurement windows are represented as rectangles, but the moment the ping is "submitted" is represented by its right edge. The user changes the "turbo mode" setting from false
to true
in the first run, and then toggles it again twice in the second run.
- A. Ping lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. It is included in the first ping after being recorded (Ping 2), which causes it to be cleared.
- B. Ping lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Ping 2, which causes it to be cleared. It therefore misses Ping 3, but when the application is started, it is recorded again and it is included in Ping 4. However, this causes it to be cleared again and it is not in Ping 5.
- C. Application lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. After the value is changed, it is included in Pings 2 and 3, but then due to application restart it is cleared, so it is not included until the value is manually toggled again.
- D. Application lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Pings 2 and 3. Even though the application startup causes it to be cleared, it is set again, and all subsequent pings also have the value.
- E. User lifetime, set on change: The default value is missing from Ping 1, but since user lifetime metrics aren't cleared unless the user profile is reset (e.g. on Android, when the product is uninstalled), it is included in all subsequent pings.
- F. User lifetime, set on init and change: Since user lifetime metrics aren't cleared unless the user profile is reset, it is included in all subsequent pings. This would be true even if the "turbo mode" preference were never changed again.
Note that for all of the metric configurations, the toggle of the preference off and on during Ping 4 is completely missed. If you need to create a ping containing one, and only one, value for this metric, consider using a custom ping to create a ping whose lifetime matches the lifetime of the value.
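The clearing behavior behind scenarios A through F can be summarized with a small, hypothetical model (illustrative only; in the real SDK, ping-lifetime values are persisted to disk and survive restarts until submitted):

```python
class Metric:
    """Toy model of lifetime-based clearing: ping-lifetime values are
    cleared on every submission, application-lifetime values on
    restart, and user-lifetime values only on profile reset."""

    def __init__(self, lifetime):
        assert lifetime in ("ping", "application", "user")
        self.lifetime = lifetime
        self.value = None

    def set(self, value):
        self.value = value

    def on_ping_submitted(self):
        if self.lifetime == "ping":
            self.value = None

    def on_application_restart(self):
        if self.lifetime == "application":
            self.value = None

turbo_ping = Metric("ping")
turbo_app = Metric("application")
turbo_user = Metric("user")
for m in (turbo_ping, turbo_app, turbo_user):
    m.set(True)

for m in (turbo_ping, turbo_app, turbo_user):
    m.on_ping_submitted()
# Only the ping-lifetime value was cleared by submission:
assert (turbo_ping.value, turbo_app.value, turbo_user.value) == (None, True, True)

for m in (turbo_ping, turbo_app, turbo_user):
    m.on_application_restart()
# After a restart, only the user-lifetime value survives:
assert (turbo_ping.value, turbo_app.value, turbo_user.value) == (None, None, True)
```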
What if none of these lifetimes are appropriate?
If the timing at which the metric is sent in the ping needs to closely match the timing of the metric's value, the best option is to use a custom ping to manually control when pings are sent.
This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times when a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, it is much harder to do that kind of reasoning with any certainty.
What should this new metric be called?
Metric names have a maximum length of 30 characters.
Reuse names from other applications
There's a lot of value in using the same name for analogous metrics collected across different products. For example, BigQuery makes it simple to join columns with the same name across multiple tables. Therefore, we encourage you to investigate if a similar metric is already being collected by another product. If it is, there may be an opportunity for code reuse across these products, and if all the projects are using the Glean SDK, it's easy for libraries to send their own metrics. If sharing the code doesn't make sense, at a minimum we recommend using the same metric name for similar actions and concepts whenever possible.
Make names unique within an application
Metric identifiers (the combination of a metric's category and name) must be unique across all metrics that are sent by a single application.
This includes not only the metrics defined in the app's metrics.yaml
, but the metrics.yaml
of any Glean SDK-using library that the application uses, including the Glean SDK itself.
Therefore, care should be taken to name things specifically enough so as to avoid namespace collisions.
In practice, this generally involves thinking carefully about the category
of the metric, more than the name
.
Note: Duplicate metric identifiers are not currently detected at build time. See bug 1578383 for progress on that. However, the probe_scraper process, which runs nightly, will detect duplicate metrics and e-mail the
notification_emails
associated with the given metrics.
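A uniqueness check of this kind is straightforward to sketch. The function below is illustrative only (it is not probe_scraper's actual code): it flags any metric identifier defined more than once across an application's metrics.yaml files and those of its libraries, modeled here as plain dictionaries mapping categories to metric names.

```python
from collections import Counter

def find_duplicate_identifiers(*metrics_files):
    """Illustrative check (not probe_scraper's actual code): flag any
    metric identifier (category.name) defined more than once across
    the application's metrics.yaml files and those of its libraries."""
    identifiers = [
        f"{category}.{name}"
        for metrics in metrics_files
        for category, names in metrics.items()
        for name in names
    ]
    return sorted(
        ident for ident, count in Counter(identifiers).items() if count > 1
    )

app_metrics = {"search": ["default_engine"], "toolbar": ["click"]}
library_metrics = {"search": ["default_engine"]}  # collides with the app

assert find_duplicate_identifiers(app_metrics, library_metrics) == [
    "search.default_engine"
]
```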
Be as specific as possible
More broadly, you should choose the names of metrics to be as specific as possible. It is not necessary to put the type of the metric in the category or name, since this information is retained in other ways through the entire end-to-end system.
For example, if defining a set of events related to search, put them in a category called search
, rather than just events
or search_events
. The events
word here would be redundant.
What if none of these metric types is the right fit?
The current set of metrics the Glean SDKs support is based on known common use cases, but new use cases are discovered all the time.
Please reach out to us on #glean:mozilla.org. If you think you need a new metric type, we have a process for that.
How do I make sure my metric is working?
The Glean SDK has rich support for writing unit tests involving metrics. Writing a good unit test is a large topic, but in general, you should write unit tests for all new telemetry that do the following:
- Perform the operation being measured.
- Assert that metrics contain the expected data, using the testGetValue API on the metric.
- Where applicable, assert that no errors are recorded, such as when values are out of range, using the testGetNumRecordedErrors API.
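The shape of such a test can be sketched as follows. To keep the example self-contained, a hand-written stand-in counter is used instead of a real generated Glean metric, so the class and the `open_tab` helper are hypothetical; in real tests you call the SDK's actual testing API on the generated metric.

```python
class FakeCounter:
    """Stand-in for a generated Glean counter, so the test pattern
    can be shown without the SDK; real tests use the generated metric."""

    def __init__(self):
        self._value = 0
        self._errors = 0

    def add(self, amount=1):
        if amount < 0:
            self._errors += 1  # out-of-range values record an error
            return
        self._value += amount

    def test_get_value(self):
        return self._value

    def test_get_num_recorded_errors(self):
        return self._errors

def open_tab(counter):
    # 1. Perform the operation being measured.
    counter.add(1)

tabs_opened = FakeCounter()
open_tab(tabs_opened)

# 2. Assert that the metric contains the expected data.
assert tabs_opened.test_get_value() == 1
# 3. Assert that no errors were recorded.
assert tabs_opened.test_get_num_recorded_errors() == 0
```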
In addition to unit tests, it is good practice to validate the incoming data for the new metric on a pre-release channel to make sure things are working as expected.
Adding the metric to the metrics.yaml
file
The metrics.yaml
file defines the metrics your application or library will send.
They are organized into categories.
The overall organization is:
# Required to indicate this is a `metrics.yaml` file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0
toolbar:
click:
type: event
description: |
Event to record toolbar clicks.
metadata:
tags:
- Interaction
notification_emails:
- CHANGE-ME@example.com
bugs:
- https://bugzilla.mozilla.org/123456789/
data_reviews:
- http://example.com/path/to/data-review
expires: 2019-06-01 # <-- Update to a date in the future
double_click:
...
Refer to the metrics YAML registry format for a full reference
on the metrics.yaml
file structure.
Using the metric from your code
The reference documentation for each metric type goes into detail about using each metric type from your code.
Note that all Glean metrics are write-only. Outside of unit tests, it is impossible to retrieve a value from the Glean SDK's database. While this may seem limiting, this is required to:
- enforce the semantics of certain metric types (e.g. that Counters can only be incremented).
- ensure the lifetime of the metric (when it is cleared or reset) is correctly handled.
Capitalization
One thing to note is that we try to adhere to the coding conventions of each language wherever possible, so the metric name and category in the metrics.yaml
(which is in snake_case
) may be changed to some other case convention, such as camelCase
, when used from code.
Event extras and labels are never capitalized, no matter the target language.
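The transformation is mechanical. Here is an illustrative sketch of the snake_case-to-camelCase conversion applied for camelCase targets (not glean_parser's own implementation):

```python
def snake_to_camel(identifier):
    """Convert a snake_case metrics.yaml identifier to the camelCase
    form used in Kotlin, Swift, and JavaScript bindings."""
    first, *rest = identifier.split("_")
    return first + "".join(part.capitalize() for part in rest)

assert snake_to_camel("login_opened") == "loginOpened"
assert snake_to_camel("click") == "click"  # single words are unchanged
```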
Category and metric names in the metrics.yaml
are in snake_case
,
but given the Kotlin coding standards defined by ktlint,
these identifiers must be camelCase
in Kotlin.
For example, the metric defined in the metrics.yaml
as:
views:
login_opened:
...
is accessible in Kotlin as:
import org.mozilla.yourApplication.GleanMetrics.Views
Views.loginOpened...
Category and metric names in the metrics.yaml
are in snake_case
,
but given the Swift coding standards defined by swiftlint,
these identifiers must be camelCase
in Swift.
For example, the metric defined in the metrics.yaml
as:
views:
login_opened:
...
is accessible in Swift as:
GleanMetrics.Views.loginOpened...
Category and metric names in the metrics.yaml
are in snake_case
, which matches the PEP8 standard, so no translation is needed for Python.
Given the Rust coding standards defined by
clippy,
identifiers should all be snake_case
.
This includes category names which in the metrics.yaml
are dotted.snake_case
:
compound.category:
metric_name:
...
In Rust this becomes:
use firefox_on_glean::metrics;
metrics::compound_category::metric_name...
JavaScript identifiers are customarily camelCase
.
This requires transforming a metric defined in the metrics.yaml
as:
compound.category:
metric_name:
...
to a form useful in JS as:
import * as compoundCategory from "./path/to/generated/files/compoundCategory.js";
compoundCategory.metricName...
Firefox Desktop has
Coding Style Guidelines
for both C++ and JS.
This results in, for a metric defined in the metrics.yaml
as:
compound.category:
metric_name:
...
an identifier that looks like:
C++
#include "mozilla/glean/GleanMetrics.h"
mozilla::glean::compound_category::metric_name...
JavaScript
Glean.compoundCategory.metricName...
Adding new metrics
Table of Contents
- Process overview
- Choosing a metric type
- For how long do you need to collect this data?
- When should the Glean SDK automatically clear the measurement?
- What should this new metric be called?
- What if none of these metric types is the right fit?
- How do I make sure my metric is working?
- Adding the metric to the
metrics.yaml
file - Using the metric from your code
Process overview
When adding a new metric, the process is:
- Consider the question you are trying to answer with this data, and choose the metric type and parameters to use.
- Add a new entry to
metrics.yaml
. - Add code to your project to record into the metric by calling the Glean SDK.
Important: Any new data collection requires documentation and data-review. This is also required for any new metric automatically collected by the Glean SDK.
Choosing a metric type
The following is a set of questions to ask about the data being collected to help better determine which metric type to use.
Is it a single measurement?
If the value is true or false, use a boolean metric.
If the value is a string, use a string metric. For example, to record the name of the default search engine.
Beware: string metrics are exceedingly general, and you are probably best served by selecting the most specific metric for the job, since you'll get better error checking and richer analysis tools for free. For example, avoid storing a number in a string metric --- you probably want a counter metric instead.
If you need to store multiple string values in a metric, use a string list metric. For example, you may want to record the list of other Mozilla products installed on the device.
For all of the metric types in this section that measure single values, it is especially important to consider how the lifetime of the value relates to the ping it is being sent in. Since these metrics don't perform any aggregation on the client side, when a ping containing the metric is submitted, it will contain only the "last known" value for the metric, potentially resulting in data loss. There is further discussion of metric lifetimes below.
Are you measuring user behavior?
For tracking user behavior, it is usually meaningful to know the over of events that lead to the use of a feature. Therefore, for user behavior, an event metric is usually the best choice.
Be aware, however, that events can be particularly expensive to transmit, store and analyze, so should not be used for higher-frequency measurements - though this is less of a concern in server environments.
Are you counting things?
If you want to know how many times something happened, use a counter metric. If you are counting a group of related things, or you don't know what all of the things to count are at build time, use a labeled counter metric.
If you need to know how many times something happened relative to the number of times something else happened, use a rate metric.
If you need to know when the things being counted happened relative to other things, consider using an event.
Are you measuring time?
If you need to record an absolute time, use a datetime metric. Datetimes are recorded in the user's local time, according to their device's real time clock, along with a timezone offset from UTC. Datetime metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question.
If you need to record how long something takes you have a few options.
If you need to measure the total time spent doing a particular task, look to the timespan metric. Timespan metrics allow specifying the resolution they are collected at, and to stay lean, they should only be collected at the minimum resolution required to answer your question. Note that this metric should only be used to measure time on a single thread. If multiple overlapping timespans are measured for the same metric, an invalid state error is recorded.
If you need to measure the relative occurrences of many timings, use a timing distribution. It builds a histogram of timing measurements, and is safe to record multiple concurrent timespans on different threads.
If you need to know the time between multiple distinct actions that aren't a simple "begin" and "end" pair, consider using an event.
For how long do you need to collect this data?
Think carefully about how long the metric will be needed, and set the expires
parameter to disable the metric at the earliest possible time.
This is an important component of Mozilla's lean data practices.
When the metric passes its expiration date (determined at build time), it will automatically stop collecting data.
When a metric's expiration is within in 14 days, emails will be sent from telemetry-alerts@mozilla.com
to the notification_emails
addresses associated with the metric.
At that time, the metric should be removed, which involves removing it from the metrics.yaml
file and removing uses of it in the source code.
Removing a metric does not affect the availability of data already collected by the pipeline.
If the metric is still needed after its expiration date, it should go back for another round of data review to have its expiration date extended.
Important: Ensure that telemetry alerts are received and are reviewed in a timely manner. Expired metrics don't record any data, extending or removing a metric should be done in time. Consider adding both a group email address and an individual who is responsible for this metric to the
notification_emails
list.
When should the Glean SDK automatically clear the measurement?
The lifetime
parameter of a metric defines when its value will be cleared. There are three lifetime options available:
ping
(default)
The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.
application
The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake--- using the ping lifetime for these type of metrics---means that they will only be included in the first ping sent during a particular run of the application.
user
NOTE: Reach out to the Glean team before using this.
The metric is part of the user's profile and will live as long as the profile lives.
This is often not the best choice unless the metric records a value that really needs
to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id
,
the day the product was first executed. It is rare to use this lifetime outside of some metrics
that are built in to the Glean SDK.
While lifetimes are important to understand for all metric types, they are particularly important for the metric types that record single values and don't aggregate on the client (boolean
, string
, labeled_string
, string_list
, datetime
and uuid
), since these metrics will send the "last known" value and missing the earlier values could be a form of unintended data loss.
A lifetime example
Let's work through an example to see how these lifetimes play out in practice. Let's suppose we have a user preference, "turbo mode", which defaults to false
, but the user can turn it to true
at any time. We want to know when this flag is true
so we can measure its affect on other metrics in the same ping. In the following diagram, we look at a time period that sends 4 pings across two separate runs of the application. We assume here, that like the Glean SDK's built-in metrics ping, the developer writing the metric isn't in control of when the ping is submitted.
In this diagram, the ping measurement windows are represented as rectangles, but the moment the ping is "submitted" is represented by its right edge. The user changes the "turbo mode" setting from false
to true
in the first run, and then toggles it again twice in the second run.
- A. Ping lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. It is included in the first ping after being recorded (Ping 2), which causes it to be cleared.
- B. Ping lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Ping 2, which causes it to be cleared. It therefore misses Ping 3, but when the application is started, it is recorded again and it is included in Ping 4. However, this causes it to be cleared again and it is not in Ping 5.
- C. Application lifetime, set on change: The value isn't included in Ping 1, because Glean doesn't know about it yet. After the value is changed, it is included in Pings 2 and 3, but then due to application restart it is cleared, so it is not included until the value is manually toggled again.
- D. Application lifetime, set on init and change: The default value is included in Ping 1, and the changed value is included in Pings 2 and 3. Even though the application startup causes it to be cleared, it is set again, and all subsequent pings also have the value.
- E. User lifetime, set on change: The default value is missing from Ping 1, but since `user` lifetime metrics aren't cleared unless the user profile is reset (e.g. on Android, when the product is uninstalled), it is included in all subsequent pings.
- F. User lifetime, set on init and change: Since `user` lifetime metrics aren't cleared unless the user profile is reset, it is included in all subsequent pings. This would be true even if the "turbo mode" preference were never changed again.
Note that for all of the metric configurations, the toggle of the preference off and on during Ping 4 is completely missed. If you need to create a ping containing one, and only one, value for this metric, consider using a custom ping to create a ping whose lifetime matches the lifetime of the value.
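The clearing rules walked through above can be sketched in a few lines. This is an illustrative model, not the Glean API: the `Metric` class and its hooks are invented here to show when each lifetime drops its stored value.

```python
# Illustrative sketch (not the Glean API) of how a metric's lifetime
# determines whether its stored value survives ping submission and restarts.
class Metric:
    def __init__(self, lifetime):
        self.lifetime = lifetime  # "ping", "application", or "user"
        self.value = None

    def set(self, value):
        self.value = value

    def on_ping_submitted(self):
        # "ping" lifetime values are cleared once they are sent in a ping.
        if self.lifetime == "ping":
            self.value = None

    def on_restart(self):
        # "application" lifetime values don't survive a restart;
        # "user" lifetime values persist until the profile is reset.
        if self.lifetime == "application":
            self.value = None

turbo = Metric("ping")
turbo.set(True)
turbo.on_ping_submitted()
print(turbo.value)   # ping lifetime: cleared after submission -> None

turbo = Metric("user")
turbo.set(True)
turbo.on_ping_submitted()
turbo.on_restart()
print(turbo.value)   # user lifetime: still True
```

In this model, "set on init" corresponds to calling `set` again at every startup, which is why the application and ping lifetimes in cases B and D recover their value after a restart.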
What if none of these lifetimes are appropriate?
If the timing of the ping needs to closely match the timing of the metric's value, the best option is to use a custom ping to manually control when pings are sent.
This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times when a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, it would be much harder to do that kind of reasoning with any certainty.
What should this new metric be called?
Metric names have a maximum length of 30 characters.
Reuse names from other applications
There's a lot of value in using the same name for analogous metrics collected across different products. For example, BigQuery makes it simple to join columns with the same name across multiple tables. Therefore, we encourage you to investigate whether a similar metric is already being collected by another product. If it is, there may be an opportunity for code reuse across these products, and if all the projects are using the Glean SDK, it's easy for libraries to send their own metrics. If sharing the code doesn't make sense, at a minimum we recommend using the same metric name for similar actions and concepts whenever possible.
Make names unique within an application
Metric identifiers (the combination of a metric's category and name) must be unique across all metrics that are sent by a single application.
This includes not only the metrics defined in the app's `metrics.yaml`, but also the `metrics.yaml` of any Glean SDK-using library that the application uses, including the Glean SDK itself.
Therefore, care should be taken to name things specifically enough so as to avoid namespace collisions.
In practice, this generally involves thinking carefully about the `category` of the metric, more than the `name`.
Note: Duplicate metric identifiers are not currently detected at build time. See bug 1578383 for progress on that. However, the probe_scraper process, which runs nightly, will detect duplicate metrics and e-mail the `notification_emails` addresses associated with the given metrics.
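Since duplicates aren't caught at build time, a project can run its own check. The sketch below finds `category.name` identifiers defined in more than one registry; the registries are approximated as plain Python dicts, and the `find_duplicates` helper is hypothetical, not part of any Glean tooling.

```python
# Hypothetical sketch: detect duplicate metric identifiers (category.name)
# across several metrics.yaml registries, approximated here as dicts.
from collections import Counter

def find_duplicates(registries):
    """Return identifiers defined in more than one registry."""
    counts = Counter(
        f"{category}.{name}"
        for registry in registries
        for category, metrics in registry.items()
        for name in metrics
    )
    return sorted(ident for ident, n in counts.items() if n > 1)

# An app and a library that both define search.query_count:
app = {"search": {"query_count": {}}}
library = {"search": {"query_count": {}}, "sync": {"failures": {}}}
print(find_duplicates([app, library]))  # ['search.query_count']
```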
Be as specific as possible
More broadly, you should choose the names of metrics to be as specific as possible. It is not necessary to put the type of the metric in the category or name, since this information is retained in other ways through the entire end-to-end system.
For example, if defining a set of events related to search, put them in a category called `search`, rather than just `events` or `search_events`. The `events` word here would be redundant.
What if none of these metric types is the right fit?
The current set of metric types the Glean SDKs support is based on known common use cases, but new use cases are discovered all the time.
Please reach out to us on #glean:mozilla.org. If you think you need a new metric type, we have a process for that.
How do I make sure my metric is working?
The Glean SDK has rich support for writing unit tests involving metrics. Writing a good unit test is a large topic, but in general, you should write unit tests for all new telemetry that do the following:

- Perform the operation being measured.
- Assert that metrics contain the expected data, using the `testGetValue` API on the metric.
- Where applicable, assert that no errors are recorded, such as when values are out of range, using the `testGetNumRecordedErrors` API.
In addition to unit tests, it is good practice to validate the incoming data for the new metric on a pre-release channel to make sure things are working as expected.
Adding the metric to the `metrics.yaml` file

The `metrics.yaml` file defines the metrics your application or library will send. Metrics are organized into categories. The overall organization is:
```yaml
# Required to indicate this is a `metrics.yaml` file
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

toolbar:
  click:
    type: event
    description: |
      Event to record toolbar clicks.
    metadata:
      tags:
        - Interaction
    notification_emails:
      - CHANGE-ME@example.com
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    expires: 2019-06-01  # <-- Update to a date in the future

  double_click:
    ...
```
Refer to the metrics YAML registry format for a full reference on the `metrics.yaml` file structure.
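As a taste of the kind of checks the registry format implies, the sketch below validates a single metric definition represented as a plain Python dict (to stay dependency-free, no YAML parser is assumed). The `check_metric` helper and its exact rules are illustrative only; the real, complete validation is performed by `glean_parser`.

```python
# Hedged sketch: a minimal check of a metrics.yaml-style metric definition,
# represented as a plain dict. glean_parser performs the real validation.
from datetime import date

REQUIRED = {"type", "description", "notification_emails", "bugs",
            "data_reviews", "expires"}

def check_metric(category, name, definition, today=date(2024, 1, 1)):
    """Return a list of problems found in one metric definition."""
    problems = []
    missing = REQUIRED - definition.keys()
    if missing:
        problems.append(f"{category}.{name}: missing fields {sorted(missing)}")
    if len(name) > 30:
        # Metric names have a maximum length of 30 characters.
        problems.append(f"{category}.{name}: name longer than 30 characters")
    expires = definition.get("expires")
    if isinstance(expires, str) and expires != "never":
        if date.fromisoformat(expires) <= today:
            problems.append(f"{category}.{name}: already expired")
    return problems

metric = {
    "type": "event",
    "description": "Event to record toolbar clicks.",
    "notification_emails": ["CHANGE-ME@example.com"],
    "bugs": ["https://bugzilla.mozilla.org/123456789/"],
    "data_reviews": ["http://example.com/path/to/data-review"],
    "expires": "2019-06-01",
}
print(check_metric("toolbar", "click", metric))  # ['toolbar.click: already expired']
```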
Using the metric from your code
The reference documentation for each metric type goes into detail about using each metric type from your code.
Note that all Glean metrics are write-only. Outside of unit tests, it is impossible to retrieve a value from the Glean SDK's database. While this may seem limiting, this is required to:
- enforce the semantics of certain metric types (e.g. that Counters can only be incremented).
- ensure the lifetime of the metric (when it is cleared or reset) is correctly handled.
Capitalization
One thing to note is that we try to adhere to the coding conventions of each language wherever possible, so the metric name and category in the `metrics.yaml` (which are in `snake_case`) may be changed to some other case convention, such as `camelCase`, when used from code.
Event extras and labels are never capitalized, no matter the target language.
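The `snake_case` to `camelCase` translation the generated code applies for languages like Kotlin and Swift can be sketched in one function. The `camelize` helper below is illustrative; the real transformation lives in `glean_parser`'s code generation.

```python
# Sketch of the snake_case -> camelCase translation applied by code
# generation for camelCase languages (illustrative, not glean_parser's code).
def camelize(identifier):
    first, *rest = identifier.split("_")
    return first + "".join(part.capitalize() for part in rest)

print(camelize("login_opened"))  # loginOpened
print(camelize("click"))         # click (single words are unchanged)
```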
Category and metric names in the `metrics.yaml` are in `snake_case`, but given the Kotlin coding standards defined by ktlint, these identifiers must be `camelCase` in Kotlin. For example, the metric defined in the `metrics.yaml` as:

```yaml
views:
  login_opened:
    ...
```

is accessible in Kotlin as:

```kotlin
import org.mozilla.yourApplication.GleanMetrics.Views

Views.loginOpened...
```
Category and metric names in the `metrics.yaml` are in `snake_case`, but given the Swift coding standards defined by swiftlint, these identifiers must be `camelCase` in Swift. For example, the metric defined in the `metrics.yaml` as:

```yaml
views:
  login_opened:
    ...
```

is accessible in Swift as:

```swift
GleanMetrics.Views.loginOpened...
```
Category and metric names in the `metrics.yaml` are in `snake_case`, which matches the PEP 8 standard, so no translation is needed for Python.
Given the Rust coding standards defined by clippy, identifiers should all be `snake_case`. This includes category names, which in the `metrics.yaml` are in `dotted.snake_case`:

```yaml
compound.category:
  metric_name:
    ...
```

In Rust this becomes:

```rust
use firefox_on_glean::metrics;

metrics::compound_category::metric_name...
```
JavaScript identifiers are customarily `camelCase`. This requires transforming a metric defined in the `metrics.yaml` as:

```yaml
compound.category:
  metric_name:
    ...
```

to a form useful in JS as:

```js
import * as compoundCategory from "./path/to/generated/files/compoundCategory.js";

compoundCategory.metricName...
```
Firefox Desktop has Coding Style Guidelines for both C++ and JS. This results in, for a metric defined in the `metrics.yaml` as:

```yaml
compound.category:
  metric_name:
    ...
```

an identifier that looks like:

C++:

```cpp
#include "mozilla/glean/GleanMetrics.h"

mozilla::glean::compound_category::metric_name...
```

JavaScript:

```js
Glean.compoundCategory.metricName...
```
Unit testing Glean metrics
In order to support unit testing inside of client applications using the Glean SDK, a set of testing API functions has been included. The intent is to make the Glean SDKs easy to test 'out of the box' in any client application. These functions expose a way to inspect and validate recorded metric values within the client application, but are restricted to test code only. (Outside of a testing context, Glean APIs are write-only so that the SDK can enforce semantics and constraints about data.)
To encourage using the testing APIs, it is also possible to generate testing coverage reports to show which metrics in your project are tested.
Example of using the test API
In order to enable metrics testing APIs in each SDK, Glean must be reset and put in testing mode. For documentation on how to do that, refer to Initializing - Testing API.
Check out full examples of using the metric testing API on each Glean SDK. All examples omit the step of resetting Glean for tests to focus solely on metrics unit testing.
```kotlin
// Record a metric value with an extra to validate against
GleanMetrics.BrowserEngagement.click.record(
    BrowserEngagementExtras(font = "Courier")
)

// Record more events without extras attached
BrowserEngagement.click.record()
BrowserEngagement.click.record()

// Retrieve a snapshot of the recorded events
val events = BrowserEngagement.click.testGetValue()!!

// Check that we collected all 3 events in the snapshot
assertEquals(3, events.size)

// Check the extra key/value for the first event in the list
assertEquals("Courier", events.elementAt(0).extra["font"])
```
```swift
// Record a metric value with an extra to validate against
GleanMetrics.BrowserEngagement.click.record([.font: "Courier"])

// Record more events without extras attached
BrowserEngagement.click.record()
BrowserEngagement.click.record()

// Retrieve a snapshot of the recorded events
let events = BrowserEngagement.click.testGetValue()!

// Check that we collected all 3 events in the snapshot
XCTAssertEqual(3, events.count)

// Check the extra key/value for the first event in the list
XCTAssertEqual("Courier", events[0].extra?["font"])
```
```python
from glean import load_metrics

metrics = load_metrics("metrics.yaml")

# Record a metric value to validate against
metrics.url.visit.add(1)

# Check that the 'visit' metric recorded a value
assert metrics.url.visit.test_get_value() is not None

# Retrieve the recorded value
assert 1 == metrics.url.visit.test_get_value()
```
Generating testing coverage reports
Glean can generate coverage reports to track which metrics are tested in your unit test suite.
There are three steps to integrate it into your continuous integration workflow: recording coverage, post-processing the results, and uploading the results.
Recording coverage
Glean testing coverage is enabled by setting the `GLEAN_TEST_COVERAGE` environment variable to the name of a file in which to store results.
It is good practice to set it to the absolute path of a file, since some testing harnesses (such as `cargo test`) may change the current working directory.

```sh
GLEAN_TEST_COVERAGE=$(realpath glean_coverage.txt) make test
```
Post-processing the results
A post-processing step is required to convert the raw output in the file specified by `GLEAN_TEST_COVERAGE` into usable output for coverage reporting tools. Currently, the only coverage reporting tool supported is codecov.io.
This post-processor is available in the `coverage` subcommand of the `glean_parser` tool.
For some build systems, `glean_parser` is already installed for you by the build system integration at the following locations:

- On Android/Gradle: `$GRADLE_HOME/glean/bootstrap-4.5.11/Miniconda3/bin/glean_parser`
- On iOS: `$PROJECT_ROOT/.venv/bin/glean_parser`
- For other systems, install `glean_parser` using `pip install glean_parser`
The `glean_parser coverage` command requires the following parameters:

- `-f`: The output format to produce, for example `codecovio` to produce codecov.io's custom format.
- `-o`: The path to the output file, for example `codecov.json`.
- `-c`: The input raw coverage file, `glean_coverage.txt` in the example above.
- A list of the `metrics.yaml` files in your repository.

For example, to produce output for codecov.io:

```sh
glean_parser coverage -f codecovio -o glean_coverage.json -c glean_coverage.txt app/metrics.yaml
```

In this example, the `glean_coverage.json` file is now ready for uploading to codecov.io.
Uploading coverage
If using codecov.io, the uploader doesn't send coverage results for YAML files by default. Pass the `-X yaml` option to the uploader to make sure they are included:

```sh
bash <(curl -s https://codecov.io/bash) -X yaml
```
Validating the collected data
It is worth investing time, when instrumentation is added to the product, in understanding whether the data looks reasonable and expected, and in taking action if it does not. It is important to highlight that a rigorous automated test suite for metrics is an important precondition for building confidence in newly collected data (especially business-critical data).
The following checklist could help guide this validation effort.
- Before releasing the product with the new data collection, make sure the data looks as expected by generating sample data on a local machine and inspecting it in the Glean Debug View (see the debugging facilities):

  a. Is the data showing up in the correct ping(s)?
  b. Does the metric report the expected data?
  c. If exercising the same path again, is it expected for the data to be submitted again? And does it?

- As users start adopting the version of the product with the new data collection (usually within a few days of release), the initial data coming in should be checked to understand how the measurements are behaving in the wild:

  a. Does this organically-sent data satisfy the same quality expectations the manually-sent data did in Step 1?
  b. Is the metric showing up correctly in the Glean Dictionary?
  c. Are any new errors being reported for the new data points? If so, do they point to an edge case that should be documented and/or fixed in the code?
  d. As the first three or four days pass, distributions will converge towards their final shapes. Consider extreme values: are there a very high number of zero/minimum values when there shouldn't be, or values near what you would realistically expect to be the maximum (e.g. a timespan for a single day that is reporting close to 86,400 seconds)? In case of oddities in the data, how much of the product population is affected? Does this require changing the instrumentation or documenting?

  How to annotate metrics without changing the source code?
  Data practitioners who lack familiarity with YAML or product-specific development workflows can still document any discovered edge cases and anomalies by identifying the metric in the Glean Dictionary and adding commentary from the metric page.

- After enough data is collected from the product population, are the expectations from the previous points still met?
Does the product support multiple release channels?
In case of multiple distinct product populations, the above checklist should be ideally run against all of them. For example, in case of Firefox, the checklist should be run for the Nightly population first, then on the other channels as the collection moves across the release trains.
Error reporting
The Glean SDKs record the number of errors that occur when metrics are passed invalid data or are otherwise used incorrectly.
This information is reported back in special labeled counter metrics in the `glean.error` category.
Error metrics are included in the same pings as the metric that caused the error. Additionally, error metrics are always sent in the `metrics` ping.
The following categories of errors are recorded:
- `invalid_value`: The metric value was invalid.
- `invalid_label`: The label on a labeled metric was invalid.
- `invalid_state`: The metric caught an invalid state while recording.
- `invalid_overflow`: The metric value to be recorded overflows the metric-specific upper range.
- `invalid_type`: The metric value is not of the expected type. This error type is only recorded by the Glean JavaScript SDK, as it may only happen in dynamically typed languages.
For example, if you had a string metric and passed it a string that was too long:
```kotlin
MyMetrics.stringMetric.set("this_string_is_longer_than_the_limit_for_string_metrics")
```

The following error metric counter would be incremented:

```kotlin
Glean.error.invalidOverflow["my_metrics.string_metric"].add(1)
```
Resulting in the following keys in the ping:
```json
{
  "metrics": {
    "labeled_counter": {
      "glean.error.invalid_overflow": {
        "my_metrics.string_metric": 1
      }
    }
  }
}
```
If you have a debug build of the Glean SDK, details about the errors being recorded are included in the logs. This detailed information is not included in Glean pings.
The Glean JavaScript SDK provides a slightly different set of metrics and pings. If you are looking for the metrics collected by Glean.js, refer to the documentation in the `@mozilla/glean.js` repository.
Metrics
This document enumerates the metrics collected by this project using the Glean SDK. This project may depend on other projects which also collect metrics. This means you might have to go searching through the dependency tree to get a full picture of everything collected by this project.
Pings
all-pings
These metrics are sent in every ping.
All Glean pings contain built-in metrics in the `ping_info` and `client_info` sections.
In addition to those built-in metrics, the following metrics are added to the ping:
| Name | Type | Description | Data reviews | Extras | Expiration | Data Sensitivity |
| --- | --- | --- | --- | --- | --- | --- |
| glean.client.annotation.experimentation_id | string | An experimentation identifier derived and provided by the application for the purpose of experimentation enrollment. | Bug 1848201 | | never | 1 |
| glean.error.invalid_label | labeled_counter | Counts the number of times a metric was set with an invalid label. The labels are the category.name identifier of the metric. | Bug 1499761 | | never | 1 |
| glean.error.invalid_overflow | labeled_counter | Counts the number of times a metric was set a value that overflowed. The labels are the category.name identifier of the metric. | Bug 1591912 | | never | 1 |
| glean.error.invalid_state | labeled_counter | Counts the number of times a timing metric was used incorrectly. The labels are the category.name identifier of the metric. | Bug 1499761 | | never | 1 |
| glean.error.invalid_value | labeled_counter | Counts the number of times a metric was set to an invalid value. The labels are the category.name identifier of the metric. | Bug 1499761 | | never | 1 |
| glean.restarted | event | Recorded when the Glean SDK is restarted. Only included in custom pings that record events. For more information, please consult the Custom Ping documentation. | Bug 1716725 | | never | 1 |
baseline
This is a built-in ping that is assembled out of the box by the Glean SDK.
See the Glean SDK documentation for the `baseline` ping.
This ping is sent if empty.
This ping includes the client id.
Data reviews for this ping:
- https://bugzilla.mozilla.org/show_bug.cgi?id=1512938#c3
- https://bugzilla.mozilla.org/show_bug.cgi?id=1599877#c25
Bugs related to this ping:
Reasons this ping may be sent:
- `active`: The ping was submitted when the application became active again, which includes when the application starts. In earlier versions, this was called `foreground`.
  *Note*: this ping will not contain the `glean.baseline.duration` metric.
- `dirty_startup`: The ping was submitted at startup, because the application process was killed before the Glean SDK had the chance to generate this ping (before becoming inactive) in the last session.
  *Note*: this ping will not contain the `glean.baseline.duration` metric.
- `inactive`: The ping was submitted when the application became inactive. In earlier versions, this was called `background`.
All Glean pings contain built-in metrics in the `ping_info` and `client_info` sections.
In addition to those built-in metrics, the following metrics are added to the ping:
| Name | Type | Description | Data reviews | Extras | Expiration | Data Sensitivity |
| --- | --- | --- | --- | --- | --- | --- |
| glean.baseline.duration | timespan | The duration of the last foreground session. | Bug 1512938 | | never | 1, 2 |
| glean.validation.pings_submitted | labeled_counter | A count of the pings submitted, by ping type. This metric appears in both the metrics and baseline pings. On the metrics ping, the counts include the number of pings sent since the last metrics ping (including the last metrics ping). On the baseline ping, the counts include the number of pings sent since the last baseline ping (including the last baseline ping). | Bug 1586764 | | never | 1 |
deletion-request
This is a built-in ping that is assembled out of the box by the Glean SDK.
See the Glean SDK documentation for the `deletion-request` ping.
This ping is sent if empty.
This ping includes the client id.
Data reviews for this ping:
- https://bugzilla.mozilla.org/show_bug.cgi?id=1587095#c6
- https://bugzilla.mozilla.org/show_bug.cgi?id=1702622#c2
Bugs related to this ping:
Reasons this ping may be sent:
- `at_init`: The ping was submitted at startup. Glean discovered that between the last time it was run and this time, upload of data had been disabled.
- `set_upload_enabled`: The ping was submitted between Glean init and Glean shutdown. Glean was told after init, but before shutdown, that upload had changed from enabled to disabled.
All Glean pings contain built-in metrics in the `ping_info` and `client_info` sections.
This ping contains no metrics.
metrics
This is a built-in ping that is assembled out of the box by the Glean SDK.
See the Glean SDK documentation for the `metrics` ping.
This ping includes the client id.
Data reviews for this ping:
- https://bugzilla.mozilla.org/show_bug.cgi?id=1512938#c3
- https://bugzilla.mozilla.org/show_bug.cgi?id=1557048#c13
Bugs related to this ping:
Reasons this ping may be sent:
- `overdue`: The last ping wasn't submitted on the current calendar day, but it's after 4am, so this ping is submitted immediately.
- `reschedule`: A ping was just submitted. This ping was rescheduled for the next calendar day at 4am.
- `today`: The last ping wasn't submitted on the current calendar day, but it is still before 4am, so this ping is scheduled to be sent on the current calendar day at 4am.
- `tomorrow`: The last ping was already submitted on the current calendar day, so this ping is scheduled for the next calendar day at 4am.
- `upgrade`: This ping was submitted at startup because the application was just upgraded.
All Glean pings contain built-in metrics in the `ping_info` and `client_info` sections.
In addition to those built-in metrics, the following metrics are added to the ping:
| Name | Type | Description | Data reviews | Extras | Expiration | Data Sensitivity |
| --- | --- | --- | --- | --- | --- | --- |
| glean.database.size | memory_distribution | The size of the database file at startup. | Bug 1656589 | | never | 1 |
| glean.error.io | counter | The number of times we encountered an IO error when writing a pending ping to disk. | Bug 1686233 | | never | 1 |
| glean.error.preinit_tasks_overflow | counter | The number of tasks that overflowed the pre-initialization buffer. Only sent if the buffer ever overflows. In Version 0 this reported the total number of tasks enqueued. | Bug 1609482 | | never | 1 |
| glean.upload.deleted_pings_after_quota_hit | counter | The number of pings deleted after the quota for the size of the pending pings directory or number of files was hit. Since quota is only calculated for the pending pings directory, and deletion-request pings live in a different directory, deletion-request pings are never deleted. | Bug 1601550 | | never | 1 |
| glean.upload.discarded_exceeding_pings_size | memory_distribution | The size of pings that exceeded the maximum ping size allowed for upload. | Bug 1597761 | | never | 1 |
| glean.upload.in_flight_pings_dropped | counter | How many pings were dropped because we found them already in-flight. | Bug 1816401 | | never | 1 |
| glean.upload.missing_send_ids | counter | How many ping upload responses we did not record as a success or failure (in glean.upload.send_success or glean.upload.send_failure, respectively) due to an inconsistency in our internal bookkeeping. | Bug 1816400 | | never | 1 |
| glean.upload.pending_pings | counter | The total number of pending pings at startup. This does not include deletion-request pings. | Bug 1665041 | | never | 1 |
| glean.upload.pending_pings_directory_size | memory_distribution | The size of the pending pings directory upon initialization of Glean. This does not include the size of the deletion-request pings directory. | Bug 1601550 | | never | 1 |
| glean.upload.ping_upload_failure | labeled_counter | Counts the number of ping upload failures, by type of failure. This includes failures for all ping types, though the counts appear in the next successfully sent metrics ping. | Bug 1589124 | | never | 1 |
| glean.upload.send_failure | timing_distribution | Time needed for a failed send of a ping to the servers and getting a reply back. | Bug 1814592 | | never | 1 |
| glean.upload.send_success | timing_distribution | Time needed for a successful send of a ping to the servers and getting a reply back. | Bug 1814592 | | never | 1 |
| glean.validation.foreground_count | counter | On mobile, the number of times the application went to foreground. | Bug 1683707 | | never | 1 |
| glean.validation.pings_submitted | labeled_counter | A count of the pings submitted, by ping type. This metric appears in both the metrics and baseline pings. On the metrics ping, the counts include the number of pings sent since the last metrics ping (including the last metrics ping). On the baseline ping, the counts include the number of pings sent since the last baseline ping (including the last baseline ping). | Bug 1586764 | | never | 1 |
| glean.validation.shutdown_dispatcher_wait | timing_distribution | Time waited for the dispatcher to unblock during shutdown. Most samples are expected to be below the 10s timeout used. | Bug 1828066 | | never | 1 |
| glean.validation.shutdown_wait | timing_distribution | Time waited for the uploader at shutdown. | Bug 1814592 | | never | 1 |
Data categories are defined here.
Pings
A ping is a bundle of related metrics, gathered in a payload to be transmitted. The ping payload is encoded in JSON format and contains one or more of the common sections with shared information data.
If data collection is enabled, the chosen Glean SDK may provide a set of built-in pings that are assembled out of the box without any developer intervention.
Table of contents
Payload structure
Every ping payload has the following keys at the top level:

- The `ping_info` section contains core metadata that is included in every ping that doesn't set the `metadata.include_info_sections` property to `false`.
- The `client_info` section contains information that identifies the client. It is included in every ping that doesn't set the `metadata.include_info_sections` property to `false`. When included, it contains a persistent client identifier, `client_id`, except when the `include_client_id` property is set to `false`.
The following keys are only present if any metrics or events were recorded for the given ping:

- The `metrics` section contains the submitted values for all metric types except events. It has keys for each of the metric types, under which is the data for each metric.
- The `events` section contains the events recorded in the ping.

See the payload documentation for more details on each metric type in the `metrics` and `events` sections.
The `ping_info` section

Metadata about the ping itself.
This section is included in every ping that doesn't set the `metadata.include_info_sections` property to `false`.
The following fields are included in the `ping_info` section.
Optional fields are marked accordingly.
seq
A running counter of the number of times pings of this type have been sent.
start_time
Type: Datetime, Lifetime: User
The time of the start of collection of the data in the ping, in local time and with millisecond precision (by default), including timezone information. (Note: Custom pings can opt-out of precise timestamps and use minute precision.)
end_time
Type: Datetime, Lifetime: Ping
The time of the end of collection of the data in the ping, in local time and with millisecond precision (by default), including timezone information. This is also the time this ping was generated and is likely well before ping transmission time. (Note: Custom pings can opt-out of precise timestamps and use minute precision.)
reason
(optional)
The reason the ping was submitted. The specific set of values and their meanings are defined for each ping type in the `reasons` field in the `pings.yaml` file.
experiments
(optional)
A dictionary of active experiments.
This object contains experiment annotations keyed by the experiment `id`. Each annotation contains the experiment `branch` the client is enrolled in and may contain a string-to-string map with additional data in the `extra` key. Both the `id` and `branch` are truncated to 30 characters.
See Using the Experiments API on how to record experiments data.
```json
{
  "<id>": {
    "branch": "branch-id",
    "extra": {
      "some-key": "a-value"
    }
  }
}
```
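Building such an annotation, including the documented 30-character truncation of `id` and `branch`, can be sketched as follows. The `experiment_annotation` helper is hypothetical, not part of the Glean API.

```python
# Hypothetical sketch of building the experiments annotation described
# above; id and branch are truncated to 30 characters per the docs.
def experiment_annotation(experiment_id, branch, extra=None):
    annotation = {"branch": branch[:30]}
    if extra:
        annotation["extra"] = extra
    return {experiment_id[:30]: annotation}

# A 40-character experiment id gets truncated to 30 characters:
ann = experiment_annotation("x" * 40, "branch-id", {"some-key": "a-value"})
print(list(ann)[0] == "x" * 30)  # True
```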
The `client_info` section
A limited set of metrics that are generally useful across products. The data is provided by the embedding application or automatically fetched by the Glean SDK. It is collected at initialization time and sent in every ping afterwards. For historical reasons it contains metrics that are only useful on certain platforms.
This section is included in every ping that doesn't set the `metadata.include_info_sections` property to `false`.
Additional metrics require a proposal
Adding new metrics maintained by the Glean SDKs team will require a full proposal and details on why that value is useful across multiple platforms and products and needs Glean SDKs team ownership.
The Glean SDKs are not taking ownership of new metrics that are platform- or product-specific.
The following fields are included in the `client_info` section.
Optional fields are marked accordingly.
app_build
Type: String, Lifetime: Application
The build identifier generated by the CI system (e.g. "1234/A").
If the value was not provided through configuration, this metric gets set to `Unknown`.
app_channel
(optional)
Type: String, Lifetime: Application
The product-provided release channel (e.g. "beta").
app_display_version
Type: String, Lifetime: Application
The user-visible version string (e.g. "1.0.3").
The meaning of the string (e.g. whether semver or a git hash) is application-specific.
If the value was not provided through configuration, this metric gets set to `Unknown`.
build_date
(optional)
Type: Datetime, Lifetime: Application
architecture
Type: String, Lifetime: Application
The architecture of the device (e.g. "arm", "x86").
client_id
(optional)
A UUID identifying a profile and allowing user-oriented correlation of data.
device_manufacturer
(optional)
Type: String, Lifetime: Application
The manufacturer of the device the application is running on. Not set if the device manufacturer can't be determined (e.g. on Desktop).
device_model
(optional)
Type: String, Lifetime: Application
The model of the device the application is running on.
On Android, this is `Build.MODEL`, the user-visible marketing name, like "Pixel 2 XL".
Not set if the device model can't be determined (e.g. on Desktop).
first_run_date
Type: Datetime, Lifetime: User
The date of the first run of the application, in local time and with day precision, including timezone information.
os
Type: String, Lifetime: Application
The name of the operating system (e.g. "Linux", "Android", "iOS").
os_version
Type: String, Lifetime: Application
The user-visible version of the operating system (e.g. "1.2.3").
If the version detection fails, this metric gets set to Unknown
.
android_sdk_version
(optional)
Type: String, Lifetime: Application
The Android specific SDK version of the software running on this hardware device (e.g. "23").
windows_build_number
(optional)
Type: Quantity, Lifetime: Application
The optional Windows build number, reported by Windows (e.g. 22000) and not set for other platforms.
telemetry_sdk_build
Type: String, Lifetime: Application
The version of the Glean SDK.
locale
(optional)
Type: String, Lifetime: Application
The locale of the application during initialization (e.g. "es-ES"). If the locale can't be determined on the system, the value is "und", to indicate "undetermined".
Ping submission
The pings that the Glean SDKs generate are submitted to the Mozilla servers at specific paths, in order to provide additional metadata without the need to unpack the ping payload.
URL
A typical submission URL looks like
"<server-address>/submit/<application-id>/<doc-type>/<glean-schema-version>/<document-id>"
where:
- <server-address>: the address of the server that receives the pings;
- <application-id>: a unique application id, automatically detected by the Glean SDK; this is the value returned by Context.getPackageName();
- <doc-type>: the name of the ping; this can be one of the pings available out of the box with the Glean SDK, or a custom ping;
- <glean-schema-version>: the version of the Glean ping schema;
- <document-id>: a unique identifier for this ping.
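Under that scheme, the submission path can be assembled mechanically. The sketch below is illustrative only: the application id is a made-up example, the schema version of "1" is an assumption, and the real path is built inside the Glean SDK.

```python
import uuid

def submission_path(application_id: str, doc_type: str, document_id: str,
                    schema_version: str = "1") -> str:
    """Build the URL path a ping would be POSTed to, following the
    documented "/submit/..." layout. Illustrative sketch only."""
    return f"/submit/{application_id}/{doc_type}/{schema_version}/{document_id}"

# A fresh UUID serves as the <document-id> component.
path = submission_path("org.mozilla.example", "baseline", str(uuid.uuid4()))
```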
Limitations
To keep resource usage in check, the Glean SDK enforces some limitations on ping uploading and ping storage.
Rate limiting
Only up to 15 ping submissions every 60 seconds are allowed.
For the JavaScript SDK the limit is higher: up to 40 ping submissions every 60 seconds are allowed.
Request body size limiting
The body of a ping request may be up to 1MB (after compression). Pings that exceed this size are discarded and don't get uploaded. The size and number of discarded pings are recorded on the internal Glean metric glean.upload.discarded_exceeding_pings_size.
Storage quota
Pending pings are stored on disk. Storage is scanned every time Glean is initialized, and upon scanning Glean checks its size. If it exceeds 10MB or 250 pending pings, pings are deleted, oldest first, until the storage size is back under the quota.
The number of pings deleted due to exceeding the storage quota is recorded on the metric glean.upload.deleted_pings_after_quota_hit, and the size of the pending pings directory is recorded (regardless of whether the quota has been reached) on the metric glean.upload.pending_pings_directory_size.
Deletion request pings are not subject to this limitation and never get deleted.
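The quota check described above can be sketched in a few lines. This is a hypothetical helper, not Glean's code: it models pending pings as (doc_type, size_bytes, created_at) tuples, deletes oldest first, and leaves deletion-request pings untouched.

```python
def enforce_quota(pending, max_bytes=10 * 1024**2, max_count=250):
    """Return (kept_pings, deleted_count) after applying the documented
    10MB / 250-ping quota, deleting oldest first. Illustrative sketch."""
    ordered = sorted(pending, key=lambda p: p[2])  # oldest first
    exempt = [p for p in ordered if p[0] == "deletion-request"]
    others = [p for p in ordered if p[0] != "deletion-request"]

    def total(ps):
        return sum(p[1] for p in ps)

    deleted = 0
    while others and (total(others) + total(exempt) > max_bytes
                      or len(others) + len(exempt) > max_count):
        others.pop(0)  # drop the oldest non-exempt pending ping
        deleted += 1
    return sorted(others + exempt, key=lambda p: p[2]), deleted
```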
Submitted headers
A pre-defined set of headers is additionally sent along with the submitted ping.
Content-Type
Describes the data sent to the server. Value is always application/json; charset=utf-8.
Date
Submission date/time in GMT/UTC+0 offset, e.g. Mon, 23 Jan 2019 10:10:10 GMT+00:00.
User-Agent
(deprecated)
Up to Glean v44.0.0 and Glean.js v0.13.0 this contained the Glean SDK version and platform information. Newer Glean SDKs do not overwrite this header. See X-Telemetry-Agent for details.
Clients might still send it; for example, when sending pings from browsers it will contain the characteristic browser UA string.
This header is parsed by the Glean pipeline and can be queried at analysis time through the metadata.user_agent.* fields in the ping tables.
X-Telemetry-Agent
The Glean SDK version and platform this ping is sent from. Useful for debugging purposes when pings are sent to the error stream, as it describes the application and the Glean SDK used for sending the ping.
It looks like Glean/40.0.0 (Kotlin on Android), where 40.0.0 is the Glean Kotlin SDK version number and Kotlin on Android is the name of the language used by the SDK that sent the request, plus the name of the platform it is running on.
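Assembling that header value from its parts is straightforward; a small sketch of the documented format (the helper name is made up for illustration):

```python
def telemetry_agent(sdk_version: str, language: str, platform: str) -> str:
    """Format an X-Telemetry-Agent value following the documented
    "Glean/<version> (<language> on <platform>)" shape."""
    return f"Glean/{sdk_version} ({language} on {platform})"
```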
X-Debug-Id
(optional)
Debug header attached to Glean pings by using the debug APIs, e.g. test-tag.
When this header is present, the ping is redirected to the Glean Debug View.
X-Source-Tags
(optional)
A list of tags to associate with the ping, useful for clustering pings at analysis time, for example to distinguish data generated in CI from other data, e.g. automation, perf.
This header is attached to Glean pings by using the debug APIs.
Custom pings
Applications can define metrics that are sent in custom pings. Unlike the built-in pings, custom pings are sent explicitly by the application.
This is useful when the scheduling of the built-in pings (metrics, baseline and events) is not appropriate for your data. Since the timing of the submission of custom pings is handled by the application, the measurement window is under the application's control.
This is especially useful when metrics need to be tightly related to one another, for example when you need to measure the distribution of frame paint times while a particular rendering backend is in use. If these metrics were in different pings, with different measurement windows, it would be much harder to reason about them with any certainty.
Defining a custom ping
Custom pings must be defined in a pings.yaml file, placed in the same directory as your app's metrics.yaml file.
For example, to define a custom ping called search specifically for search information:
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0

search:
  description: >
    A ping to record search data.
  metadata:
    tags:
      - Search
  include_client_id: false
  notification_emails:
    - CHANGE-ME@example.com
  bugs:
    - http://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review
Tags are an optional feature you can use to provide an additional layer of categorization to pings.
Any tags specified in the metadata section of a ping must have a corresponding entry in a tags YAML registry for your project.
Refer to the pings YAML registry format for a full reference on the pings.yaml file structure.
Sending metrics in a custom ping
To send a metric in a custom ping, you add the custom ping's name to the send_in_pings parameter in the metrics.yaml file.
Ping metadata must be loaded before sending!
After defining a custom ping, before it can be used for sending data, its metadata must be loaded into your application or library.
For example, to define a new metric to record the default search engine, which is sent in a custom ping called search, put search in the send_in_pings parameter. Note that it is an error to specify a ping in send_in_pings that does not also have an entry in pings.yaml.
search.default:
  name:
    type: string
    description: >
      The name of the default search engine.
    send_in_pings:
      - search
If this metric should also be sent in the default ping for the given metric type, you can add the special value default to send_in_pings:

send_in_pings:
  - search
  - default
The glean.restarted event
For custom pings that contain event metrics, the glean.restarted event is injected by Glean on every application restart that may happen during the ping's measurement window.
Note: All leading and trailing glean.restarted events are omitted from each ping.
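The trimming rule can be sketched as follows. Glean performs this internally when the ping is assembled; the helper below is purely illustrative, operating on event dicts shaped like the payload examples in this chapter.

```python
def trim_restarted(events):
    """Drop glean.restarted events from the start and end of an event
    list, keeping only those that sit between real events. Sketch only."""
    def is_restarted(event):
        return event["category"] == "glean" and event["name"] == "restarted"

    start = 0
    while start < len(events) and is_restarted(events[start]):
        start += 1
    end = len(events)
    while end > start and is_restarted(events[end - 1]):
        end -= 1
    return events[start:end]
```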
Event timestamps throughout application restarts
Event timestamps are always calculated relative to the first event in a ping. The first event will always have timestamp 0 and subsequent events will have timestamps corresponding to the number of milliseconds elapsed since that first event.
That is also the case for events recorded across restarts.
Example
In the below example payload, there were two events recorded on the first application run.
The first event has timestamp 0 and the second event happens one second after the first one, so it has timestamp 1000.
The application is restarted one hour after the first event and a glean.restarted event is recorded, timestamp 3600000. Finally, an event is recorded during the second application run two seconds after restart, timestamp 3602000.
{
  ...
  "events": [
    {
      "timestamp": 0,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 1000,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 3600000,
      "category": "glean",
      "name": "restarted"
    },
    {
      "timestamp": 3602000,
      "category": "examples",
      "name": "event_example"
    }
  ]
}
Caveat: Handling decreasing time offsets
For events recorded in a single application run, Glean relies on a monotonically increasing timer to calculate event timestamps, while for calculating the time elapsed between application runs Glean has to rely on the computer clock, which is not necessarily monotonically increasing.
In the case that timestamps in between application runs are not monotonically increasing, Glean will take the value of the previous timestamp and add one millisecond, thus guaranteeing that timestamps are always increasing.
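A minimal sketch of that fallback, assuming the cross-restart offset has already been converted to milliseconds relative to the ping's first event (the function name is hypothetical):

```python
def next_restart_timestamp(last_timestamp_ms: int, wall_clock_offset_ms: int) -> int:
    """Pick the timestamp for a glean.restarted event. Normally the wall
    clock offset between runs is used, but if the clock did not move
    forward, fall back to the previous timestamp plus one millisecond so
    that timestamps stay strictly increasing. Illustrative sketch."""
    if wall_clock_offset_ms <= last_timestamp_ms:
        return last_timestamp_ms + 1  # clock went backwards or stalled
    return wall_clock_offset_ms
```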
Checking for decreasing time offsets between restarts
When this edge case is hit, Glean records an InvalidValue error for the glean.restarted metric. This metric may be consulted at analysis time. It is sent in the same ping where the error happened.
In the below example payload, the first and second application runs go exactly like in the example above.
The only difference is that when the restart happens, the offset between the absolute time of the first event and the absolute time of the restart is not enough to keep the timestamps increasing. That may happen for many reasons, such as a change in timezones or simply a manual change in the clock by the user.
In this case, Glean will ignore the incorrect timestamp and add one millisecond to the last timestamp of the previous run, in order to keep the monotonically increasing nature of the timestamps.
{
  ...
  "events": [
    {
      "timestamp": 0,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 1000,
      "category": "examples",
      "name": "event_example"
    },
    {
      "timestamp": 1001,
      "category": "glean",
      "name": "restarted"
    },
    {
      "timestamp": 3001,
      "category": "examples",
      "name": "event_example"
    }
  ]
}
Testing custom pings
Applications defining custom pings can use the ping testing API to test these pings in unit tests.
General testing strategy
The schedule of custom pings depends on the specific application implementation, since it is up to the SDK user to define the ping semantics. This makes the testing strategy a bit more complex, but it usually boils down to:
- Triggering the code path that accumulates/records the data.
- Defining a callback validation function using the ping testing API.
- Finally triggering the code path that submits the custom ping, or submitting the ping using the submit API.
Pings sent by Glean
If data collection is enabled, the Glean SDKs provide a set of built-in pings that are assembled out of the box without any developer intervention. The following is a list of these built-in pings:
- baseline ping: A small ping sent every time the application goes to foreground and background. Going to foreground also includes when the application starts.
- deletion-request ping: Sent when the user disables telemetry in order to request a deletion of their data.
- events ping: The default ping for events. Sent every time the application goes to background or a certain number of events is reached. It is not sent when there are no events recorded, even if there are other metrics with values.
- metrics ping: The default ping for metrics. Sent approximately daily.
Applications can also define and send their own custom pings when the schedules of these pings are not suitable.
There is also a high-level overview of how the metrics and baseline pings relate and the timings they record.
Available pings per platform
| SDK | baseline | deletion-request | events | metrics |
|---|---|---|---|---|
| Kotlin | ✅ | ✅ | ✅ | ✅ |
| Swift | ✅ | ✅ | ✅ | ✅ |
| Python | ✅ ¹ | ✅ | ✅ ² | ❌ |
| Rust | ✅ | ✅ | ✅ | ✅ |
| JavaScript | ❌ | ✅ | ✅ | ❌ |
| Firefox Desktop | ✅ | ✅ | ✅ | ✅ |
¹ Not sent automatically. Use the handle_client_active and handle_client_inactive API.
² Sent on startup when pending events are stored. Additionally sent when handle_client_inactive is called.
Defining foreground and background state
These docs refer to application 'foreground' and 'background' state in several places.
Foreground
For Android, this specifically means the activity becomes visible to the user, it has entered the Started state, and the system invokes the onStart() callback.
Background
This specifically means when the activity is no longer visible to the user, it has entered the Stopped state, and the system invokes the onStop() callback.
This may occur if the user uses the Overview button to change to another app, presses the Back button and navigates to a previous application or the home screen, or presses the Home button to return to the home screen. This can also occur if the user navigates away from the application through some notification or other means.
The system may also call onStop() when the activity has finished running and is about to be terminated.
Foreground
For iOS, the Glean Swift SDK attaches to the willEnterForegroundNotification. This notification is posted by the OS shortly before an app leaves the background state on its way to becoming the active app.
Background
For iOS, this specifically means when the app is no longer visible to the user, or when the UIApplicationDelegate receives the applicationDidEnterBackground event.
This may occur if the user opens the task switcher to change to another app, or if the user presses the Home button to show the home screen. This can also occur if the user navigates away from the app through a notification or other means.
Note: Glean does not currently support Scene based lifecycle events that were introduced in iOS 13.
The baseline ping
Description
This ping is intended to provide metrics that are managed by the Glean SDKs themselves, and not explicitly set by the application or included in the application's metrics.yaml file.
Platform availability
| SDK | Kotlin | Swift | Python | Rust | JavaScript | Firefox Desktop |
|---|---|---|---|---|---|---|
| baseline ping | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ |
Scheduling
The baseline ping is automatically submitted with a reason: active when the application becomes active (on mobile, this means coming to the foreground). These baseline pings do not contain duration.
The baseline ping is automatically submitted with a reason: inactive when the application becomes inactive (on mobile, this means going to the background).
If no baseline ping is triggered when becoming inactive (e.g. the process is abruptly killed), a baseline ping with reason dirty_startup will be submitted on the next application startup. This only happens from the second application start onward.
See also the ping schedules and timing overview.
Contents
The baseline ping includes the common ping sections found in all pings, as well as a number of metrics defined in the Glean SDKs themselves.
Querying ping contents
A quick note about querying ping contents (i.e. for sql.telemetry.mozilla.org): each metric in the baseline ping is organized by its metric type, and uses a namespace of glean.baseline. For instance, in order to select duration you would use metrics.timespan['glean.baseline.duration']. If you were trying to select a String-based metric such as os, then you would use metrics.string['glean.baseline.os'].
Example baseline ping
{
"ping_info": {
"experiments": {
"third_party_library": {
"branch": "enabled"
}
},
"seq": 0,
"start_time": "2019-03-29T09:50-04:00",
"end_time": "2019-03-29T09:53-04:00",
"reason": "foreground"
},
"client_info": {
"telemetry_sdk_build": "0.49.0",
"first_run_date": "2019-03-29-04:00",
"os": "Android",
"android_sdk_version": "27",
"os_version": "8.1.0",
"device_manufacturer": "Google",
"device_model": "Android SDK built for x86",
"architecture": "x86",
"app_build": "1",
"app_display_version": "1.0",
"client_id": "35dab852-74db-43f4-8aa0-88884211e545"
},
"metrics": {
"timespan": {
"glean.baseline.duration": {
"value": 52,
"time_unit": "second"
}
}
}
}
The deletion-request ping
Description
This ping is submitted when a user opts out of sending technical and interaction data.
This ping contains the client id.
This ping is intended to communicate to the Data Pipeline that the user wishes to have their reported Telemetry data deleted. As such, it attempts to send itself at the moment the user opts out of data collection, and continues to try to send itself if that fails.
Adding secondary ids
It is possible to send secondary ids in the deletion-request ping. For instance, if the application is migrating from legacy telemetry to Glean, the legacy client ids can be added to the deletion-request ping by creating a metrics.yaml entry for the id to be added, with a send_in_pings value of deletion_request.
An example metrics.yaml entry might look like this:

legacy_client_id:
  type: uuid
  description: A UUID uniquely identifying the legacy client.
  send_in_pings:
    - deletion_request
  ...
Platform availability
| SDK | Kotlin | Swift | Python | Rust | JavaScript | Firefox Desktop |
|---|---|---|---|---|---|---|
| deletion-request ping | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Scheduling
The deletion-request ping is automatically submitted when upload is disabled in Glean.
If upload fails, it is retried after Glean is initialized.
Contents
The deletion-request ping does not contain additional metrics aside from secondary ids that have been added.
Example deletion-request ping
{
"ping_info": {
"seq": 0,
"start_time": "2019-12-06T09:50-04:00",
"end_time": "2019-12-06T09:53-04:00"
},
"client_info": {
"telemetry_sdk_build": "22.0.0",
"first_run_date": "2019-03-29-04:00",
"os": "Android",
"android_sdk_version": "28",
"os_version": "9",
"device_manufacturer": "Google",
"device_model": "Android SDK built for x86",
"architecture": "x86",
"app_build": "1",
"app_display_version": "1.0",
"client_id": "35dab852-74db-43f4-8aa0-88884211e545"
},
"metrics": {
"uuid": {
"legacy_client_id": "5faffa6d-6147-4d22-a93e-c1dbd6e06171"
}
}
}
The events ping
Description
The events ping's purpose is to transport event metric information.
If the application crashes, an events ping is generated next time the application starts, with events that were not sent before the crash.
Platform availability
| SDK | Kotlin | Swift | Python | Rust | JavaScript | Firefox Desktop |
|---|---|---|---|---|---|---|
| events ping | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Scheduling
The events ping is automatically submitted under the following circumstances:
1. If there are any recorded events to send when the application becomes inactive (on mobile, this means going to background).
2. When the queue of events exceeds Glean.configuration.maxEvents (default 1 for Glean.js, 500 for all other SDKs). This configuration option can be changed at initialization or through Server Knobs.
3. If there are any unsent events found on disk when starting the application. (This results in this ping never containing the glean.restarted event.)
Python and JavaScript caveats
Since the Glean Python and JavaScript SDKs don't have a generic concept of "inactivity", case (1) above cannot be handled automatically.
On Python, users can call the handle_client_inactive API to let Glean know the app is inactive, which will trigger submission of the events ping.
On JavaScript there is no such API and only cases (2) and (3) apply.
Contents
At the top-level, this ping contains the following keys:
- client_info: The information common to all pings.
- ping_info: The information common to all pings.
- events: An array of all of the events that have occurred since the last time the events ping was sent.
Each entry in the events array is an object with the following properties:
- "timestamp": The time in milliseconds relative to the first event in the ping.
- "category": The category of the event, as defined by its location in the metrics.yaml file.
- "name": The name of the event, as defined in the metrics.yaml file.
- "extra" (optional): A mapping of strings to strings providing additional data about the event. Keys are restricted to 40 UTF-8 bytes, while values in the extra object are limited to a maximum length of 500 UTF-8 bytes.
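Those limits are byte limits on the UTF-8 encoding, not character counts. The hypothetical helper below only reports which keys or values exceed them; the real SDKs handle over-long entries themselves and record an error metric rather than just reporting:

```python
MAX_EXTRA_KEY_BYTES = 40
MAX_EXTRA_VALUE_BYTES = 500

def check_extras(extra: dict) -> list:
    """Return ("key", k) / ("value", k) pairs for entries whose UTF-8
    encoding exceeds the documented limits. Illustrative sketch."""
    too_long = []
    for key, value in extra.items():
        if len(key.encode("utf-8")) > MAX_EXTRA_KEY_BYTES:
            too_long.append(("key", key))
        if len(value.encode("utf-8")) > MAX_EXTRA_VALUE_BYTES:
            too_long.append(("value", key))
    return too_long
```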
Example event JSON
{
"ping_info": {
"experiments": {
"third_party_library": {
"branch": "enabled"
}
},
"seq": 0,
"start_time": "2019-03-29T09:50-04:00",
"end_time": "2019-03-29T10:02-04:00"
},
"client_info": {
"telemetry_sdk_build": "0.49.0",
"first_run_date": "2019-03-29-04:00",
"os": "Android",
"android_sdk_version": "27",
"os_version": "8.1.0",
"device_manufacturer": "Google",
"device_model": "Android SDK built for x86",
"architecture": "x86",
"app_build": "1",
"app_display_version": "1.0",
"client_id": "35dab852-74db-43f4-8aa0-88884211e545"
},
"events": [
{
"timestamp": 0,
"category": "examples",
"name": "event_example",
"extra": {
"metadata1": "extra",
"metadata2": "more_extra"
}
},
{
"timestamp": 1000,
"category": "examples",
"name": "event_example"
}
]
}
The metrics ping
Description
The metrics ping is intended for all of the metrics that are explicitly set by the application or are included in the application's metrics.yaml file (except events).
The reported data is tied to the ping's measurement window, which is the time between the collection of two metrics pings.
Ideally, this window is expected to be about 24 hours, given that the collection is scheduled daily at 04:00. However, the metrics ping is only submitted while the application is actually running, so in practice it may not meet the 04:00 target very frequently.
Data in the ping_info section of the ping can be used to infer the length of this window and the reason that triggered the ping to be submitted.
If the application crashes, unsent recorded metrics are sent along with the next metrics ping.
Additionally, it is undesirable to mix metric recording from different versions of the application. Therefore, if a version upgrade is detected, the metrics ping is collected immediately, before further metrics from the new version are recorded.
Platform availability
| SDK | Kotlin | Swift | Python | Rust | JavaScript | Firefox Desktop |
|---|---|---|---|---|---|---|
| metrics ping | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ |
Scheduling
The desired behavior is to collect the ping at the first available opportunity after 04:00 local time on a new calendar day, but given the constraints of the platform, it can only be submitted while the application is running. This breaks down into four scenarios:
- the application was just installed;
- the application was just upgraded (the version of the app is different from the last time the app was run);
- the application was just started (after a crash or a long inactivity period);
- the application was running at 04:00.
In the first case, since the application was just installed, if the due time for the current calendar day has passed, a metrics ping is immediately generated and scheduled for sending (reason code overdue). Otherwise, if the due time for the current calendar day has not passed, a ping collection is scheduled for that time (reason code today).
In the second case, if a version change is detected at startup, the metrics ping is immediately submitted so that metrics from one version are not aggregated with metrics from another version (reason code upgrade).
In the third case, if the metrics ping was not already collected on the current calendar day, and it is before 04:00, a collection is scheduled for 04:00 on the current calendar day (reason code today). If it is after 04:00, a new collection is scheduled immediately (reason code overdue). Lastly, if a ping was already collected on the current calendar day, the next one is scheduled for collection at 04:00 on the next calendar day (reason code tomorrow).
In the fourth and last case, the application is running during a scheduled ping collection time. The next ping is scheduled for 04:00 on the next calendar day (reason code reschedule).
More scheduling examples are included below.
See also the ping schedules and timing overview.
Contents
The metrics ping contains all of the metrics defined in metrics.yaml (except events) that don't specify a ping or where default is specified in their send_in_pings property.
Additionally, error metrics in the glean.error category are included in the metrics ping.
The metrics ping shall also include the common ping_info and client_info sections. It also includes a number of metrics defined in the Glean SDKs themselves.
Querying ping contents
Information about querying ping contents is available in Accessing Glean data in the Firefox data docs.
Scheduling Examples
Crossing due time with the application closed
- The application is opened on Feb 7 at 15:00, closed at 15:05.
  - Glean records one metric A (say, startup time in ms) during this measurement window MW1.
- The application is opened again on Feb 8 at 17:00.
  - Glean notes that we passed local 04:00 since MW1.
  - Glean closes MW1, with: start_time=Feb7/15:00; end_time=Feb8/17:00.
  - Glean records metric A again, into MW2, which has a start_time of Feb8/17:00.
Crossing due time and changing timezones
- The application is opened on Feb 7 at 15:00 in timezone UTC, closed at 15:05.
  - Glean records one metric A (say, startup time in ms) during this measurement window MW1.
- The application is opened again on Feb 8 at 17:00 in timezone UTC+1.
  - Glean notes that we passed local 04:00 UTC+1 since MW1.
  - Glean closes MW1, with: start_time=Feb7/15:00/UTC; end_time=Feb8/17:00/UTC+1.
  - Glean records metric A again, into MW2.
The application doesn’t run in a week
- The application is opened on Feb 7 at 15:00 in timezone UTC, closed at 15:05.
  - Glean records one metric A (say, startup time in ms) during this measurement window MW1.
- The application is opened again on Feb 16 at 17:00 in timezone UTC.
  - Glean notes that we passed local 04:00 UTC since MW1.
  - Glean closes MW1, with: start_time=Feb7/15:00/UTC; end_time=Feb16/17:00/UTC.
  - Glean records metric A again, into MW2.
The application doesn’t run for a week, and when it’s finally re-opened the timezone has changed
- The application is opened on Feb 7 at 15:00 in timezone UTC, closed at 15:05.
  - Glean records one metric A (say, startup time in ms) during this measurement window MW1.
- The application is opened again on Feb 16 at 17:00 in timezone UTC+1.
  - Glean notes that we passed local 04:00 UTC+1 since MW1.
  - Glean closes MW1, with: start_time=Feb7/15:00/UTC; end_time=Feb16/17:00/UTC+1.
  - Glean records metric A again, into MW2.
The user changes timezone in an extreme enough fashion that they cross 04:00 twice on the same date
- The application is opened on Feb 7 at 15:00 in timezone UTC+11, closed at 15:05.
  - Glean records one metric A (say, startup time in ms) during this measurement window MW1.
- The application is opened again on Feb 8 at 04:30 in timezone UTC+11.
  - Glean notes that we passed local 04:00 UTC+11.
  - Glean closes MW1, with: start_time=Feb7/15:00/UTC+11; end_time=Feb8/04:30/UTC+11.
  - Glean records metric A again, into MW2.
- The user changes to timezone UTC-10 and opens the application on Feb 7 at 22:00 in timezone UTC-10.
  - Glean records metric A again, into MW2 (not MW1, which was already sent).
- The user opens the application on Feb 8 at 05:00 in timezone UTC-10.
  - Glean notes that we have not yet passed local 04:00 on Feb 9.
  - Measurement window MW2 remains the current measurement window.
- The user opens the application on Feb 9 at 07:00 in timezone UTC-10.
  - Glean notes that we have passed local 04:00 on Feb 9.
  - Glean closes MW2, with: start_time=Feb8/04:30/UTC+11; end_time=Feb9/19:00/UTC-10.
  - Glean records metric A again, into MW3.
Ping schedules and timings overview
Full reference details about the metrics and baseline ping schedules are documented elsewhere.
The following diagram shows a typical timeline of a mobile application, when pings are sent and what timing-related information is included.
There are two distinct runs of the application, where the OS shut down the application at the end of Run 1, and the user started it up again at the beginning of Run 2.
There are three distinct foreground sessions, where the application was visible on the screen and the user was able to interact with it.
The rectangles for the baseline and metrics pings represent the measurement windows of those pings, which always start exactly at the end of the preceding ping. The ping_info.start_time and ping_info.end_time metrics included in these pings correspond to the beginning and the end of their measurement windows.
The baseline.duration metric (included only in baseline pings) corresponds to the amount of time the application spent in the foreground, which, since measurement windows always extend to the next ping, is not always the same as the baseline ping's measurement window.
The submission_timestamp is the time the ping was received at the telemetry endpoint, added by the ingestion pipeline. It is not exactly the same as ping_info.end_time, since there may be various networking and system latencies both on the client and in the ingestion pipeline (represented by the dotted horizontal line, not to scale). Also of note is that start_time/end_time are measured using the client's real-time clock in its local timezone, which is not a fully reliable source of time.
The "Baseline 4" ping illustrates an important corner case. When "Session 2" ended, the OS also shut down the entire process, and the Glean SDK did not have an opportunity to send a baseline ping immediately. In this case, it is sent at the next available opportunity, when the application starts up again in "Run 2". This baseline ping is annotated with the reason code dirty_startup.
The "Metrics 2" ping likewise illustrates another important corner case. "Metrics 1" was able to be sent at the target time of 04:00 (local device time) because the application was running at that time. However, the next time 04:00 came around, the application was not active, so the Glean SDK was unable to send a metrics ping. It is sent at the next available opportunity, when the application starts up again in "Run 2". This metrics ping is annotated with the reason code overdue.
NOTE: Ping scheduling and other application lifecycle dependent activities are not set up when Glean is used in a non-main process. See initializing.
Server Knobs: Glean Data Control Plane
Glean provides Server Knobs, a Data Control Plane through which Glean runtime settings can be changed remotely, including the ability to enable, disable or throttle metrics and pings through a Nimbus rollout or experiment.
Products can use this capability to control "data traffic", similar to how a network control plane controls "network traffic".
Server Knobs provides the ability to do the following:
- Allow runtime changes to data collection without needing to land code and ride release trains.
- Eliminate the need for manual creation and maintenance of feature flags specific to data collection.
- Sampling of measurements from a subset of the population so that we do not collect or ingest more data than is necessary from high traffic areas of an application instrumented with Glean metrics.
- Operational safety through being able to react to high-volume or unwanted data.
- Visibility into sampling and sampling rates for remotely configured metrics.
Contents
Data Control Plane (a.k.a. Server Knobs)
Glean provides a Data Control Plane through which metrics can be enabled, disabled or throttled through a Nimbus rollout or experiment.
Products can use this capability to control "data-traffic", similar to how a network control plane controls "network-traffic".
This provides the ability to do the following:
- Allow runtime changes to data collection without needing to land code and ride release trains.
- Eliminate the need for manual creation and maintenance of feature flags specific to data collection.
- Sampling of measurements from a subset of the population so that we do not collect or ingest more data than is necessary from high traffic areas of an application instrumented with Glean metrics.
- Operational safety through being able to react to high-volume or unwanted data.
- Visibility into sampling and sampling rates for remotely configured metrics.
For information on controlling pings with Server Knobs, see the metrics documentation for Server Knobs - Pings.
Contents
- Example Scenarios
- Product Integration
- Experimenter Configuration
- Advanced Topics
- Frequently Asked Questions
Example Scenarios
Scenario 1
Landing a metric that is disabled by default and then enabling it for some segment of the population
This scenario can be expected in cases such as when instrumenting high-traffic areas of the browser. These are instrumentations that would normally generate a lot of data because they are recorded frequently for every user.
In this case, the telemetry which has the potential to be high-volume would land with the “disabled” property of the metric set to “true”. This will ensure that it does not record data by default.
An example metric definition with this property set would look something like this:
urlbar:
impression:
disabled: true
type: event
description: Recorded when urlbar results are shown to the user.
...
Once the instrumentation is landed, it can now be enabled for a subset of the population through a Nimbus rollout or experiment without further code changes.
Through Nimbus, we have the ability to sample the population by setting the audience size to a certain percentage of the eligible population. Nimbus also provides the ability to target clients based on the available targeting parameters for a particular application (for instance, Firefox Desktop’s available targeting parameters).
This can be used to slowly roll out instrumentations to the population in order to validate the data we are collecting before measuring the entire population and potentially avoiding costs and overhead by collecting data that isn’t useful.
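For instance, assuming the `glean` Nimbus feature and its `gleanMetricConfiguration` variable described under Product Integration, a rollout branch that enables the event from the example above could be configured like this:

```json
{
  "gleanMetricConfiguration": {
    "metrics_enabled": {
      "urlbar.impression": true
    }
  }
}
```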
Note: if planning to use this feature for permanently keeping a data collection on for the whole population, please consider enabling the metrics by default by setting `disabled: false` in the metrics.yaml. Then you can "down-sample" if necessary (see Scenario 2 below).
Scenario 2
Landing a metric that is enabled by default and then disabling it for a segment of the population
This is effectively the inverse of Scenario 1: instead of landing the metrics disabled by default, they are landed as enabled so that they normally collect data from the entire population.
Similar to the first scenario, a Nimbus rollout or experiment can then be launched to configure the metrics as disabled for a subset of the population.
This provides a mechanism by which we can disable the sending of telemetry from an audience that we do not wish to collect telemetry data from.
For instance, this could be useful in tuning out telemetry data coming from automation sources or bad actors. In addition, it provides a way to disable broken, incorrect, or unexpectedly noisy instrumentations as an operational safety mechanism to directly control the volume of the data we collect and ingest.
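Again assuming the `glean` feature's `gleanMetricConfiguration` variable, a branch configuration that disables an unexpectedly noisy event could look like this:

```json
{
  "gleanMetricConfiguration": {
    "metrics_enabled": {
      "urlbar.impression": false
    }
  }
}
```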
Product Integration
To allow multiple Nimbus Features to share this functionality, the implementation is not defined as part of a stand-alone Glean feature in the Nimbus Feature Manifest. Instead, it is intended to be added as a feature variable to other Nimbus Feature definitions for them to make use of.
Desktop Feature Integration
In order to make use of the remote metric configuration in a Firefox Desktop component, there are two available options.
Integration Option 1:
Glean provides a general Nimbus feature named `glean` that can be used for configuration of metrics. Simply select the `glean` feature along with your Nimbus feature from the Experimenter UI when configuring the experiment or rollout (see the Nimbus documentation for more information on multi-feature experiments).
The `glean` Nimbus feature requires the `gleanMetricConfiguration` variable to be used to provide the required metric configuration.
The format of the configuration is a map from the fully qualified metric name (`category.name`) to a boolean value representing whether the metric is enabled.
If a metric is omitted from this map, it will default to the value found in the metrics.yaml.
An example configuration for the `glean` feature can be found on the Experimenter Configuration page.
Integration Option 2 (Advanced use):
A second option, which can give you more control over the metric configuration (especially if more than one experiment or rollout is currently using the `glean` feature from Option 1 above), is to add a Feature Variable to represent the Glean metric configuration in your own feature. This can be accomplished by modifying the FeatureManifest.yaml file, adding a variable through which to pass metric configurations. Glean will handle merging this configuration with other metric configurations for you (see Advanced Topics for more info on this).
An example feature manifest entry would look like the following:
variables:
... // Definitions of other feature variables
gleanMetricConfiguration:
type: json
description: >-
"Glean metric configuration"
This definition allows a configuration to be set in a Nimbus rollout or experiment and fetched by the client to be applied based on the enrollment.
Once the Feature Variable has been defined, the final step is to fetch the configuration from Nimbus and supply it to the Glean API. This can be done during initialization and again any time afterwards, such as in response to receiving an updated configuration from Nimbus. Glean will merge this configuration with any other active configurations and enable or disable the metrics accordingly.
An example call to set a configuration through your Nimbus Feature could look like this:
// Fetch the Glean metric configuration from your feature's Nimbus variable
let cfg = lazy.NimbusFeatures.yourNimbusFeatureName.getVariable(
"gleanMetricConfiguration"
);
// Apply the configuration through the Glean API
Services.fog.setMetricsFeatureConfig(JSON.stringify(cfg));
It is also recommended to register to listen for updates for the Nimbus Feature and apply new configurations as soon as possible. The following example illustrates how a Nimbus Feature might register and update the metric configuration whenever there is a change to the Nimbus configuration:
// Register to listen for the `onUpdate` event from Nimbus
lazy.NimbusFeatures.yourNimbusFeatureName.onUpdate(() => {
// Fetch the Glean metric configuration from your feature's Nimbus variable
let cfg = lazy.NimbusFeatures.yourNimbusFeatureName.getVariable(
"gleanMetricConfiguration"
);
// Apply the configuration through the Glean API
Services.fog.setMetricsFeatureConfig(JSON.stringify(cfg));
});
Mobile Feature Integration
Integration Option 1:
Glean provides a general Nimbus feature named `glean` that can be used for configuration of metrics. Simply select the `glean` feature along with your Nimbus feature from the Experimenter UI when configuring the experiment or rollout (see the Nimbus documentation for more information on multi-feature experiments).
The `glean` Nimbus feature requires the `gleanMetricConfiguration` variable to be used to provide the required metric configuration.
The format of the configuration is a map from the fully qualified metric name (`category.name`) to a boolean value representing whether the metric is enabled.
If a metric is omitted from this map, it will default to the value found in the metrics.yaml.
An example configuration for the `glean` feature can be found on the Experimenter Configuration page.
Integration Option 2 (Advanced use):
A second option, which can give you more control over the metric configuration (especially if more than one experiment or rollout is currently using the `glean` feature from Option 1 above), is to add a Feature Variable to represent the Glean metric configuration in your own feature.
This can be accomplished by modifying the Nimbus Feature Manifest file, adding a variable through which to pass metric configurations. Glean will handle merging this configuration with other metric configurations for you (see Advanced Topics for more info on this).
An example feature manifest entry would look like the following:
features:
homescreen:
description: |
The homescreen that the user goes to when they press home or new
tab.
variables:
... // Other homescreen variables
gleanMetricConfiguration:
description: Glean metric configuration
type: String
default: "{}"
Once the Feature Variable has been defined, the final step is to fetch the configuration from Nimbus and supply it to the Glean API. This can be done during initialization and again any time afterwards, such as in response to receiving an updated configuration from Nimbus. Only the latest configuration provided will be applied and any previously configured metrics that are omitted from the new configuration will not be changed. An example call to set a configuration from the “homescreen” Nimbus Feature could look like this:
Glean.applyServerKnobsConfig(FxNimbus.features.homescreen.value().gleanMetricConfiguration)
Since mobile experiments only update on initialization of the application, it isn't necessary to register to listen for notifications for experiment updates.
Experimenter Configuration
The structure of this configuration is a key-value collection with the full metric identification of the Glean metric serving as the key, in the format `<metric_category.metric_name>`. The values are booleans representing whether the metric is enabled (`true`) or not (`false`).
In the example below, `gleanMetricConfiguration` is the name of the variable defined in the Nimbus feature. This configuration is what would be entered into the branch configuration setup in Experimenter when defining an experiment or rollout.
Example Configuration:
{
"gleanMetricConfiguration": {
"metrics_enabled": {
"urlbar.abandonment": true,
"urlbar.engagement": true,
"urlbar.impression": true
}
}
}
Advanced Topics
Merging of Configurations from Multiple Features
Since each feature defined as a Nimbus Feature can independently provide a Glean configuration, these must be merged together into a cohesive configuration for the entire set of metrics collected by Glean.
Configurations will be merged together along with the default values in the metrics.yaml file and applied to the appropriate metrics. Only the latest configuration provided for a given metric will be applied and any previously configured metrics that are omitted from the new configuration will not be changed.
Example
Imagine a situation where we have 3 features (`A`, `B`, `C`). Each of these features has an event (`A.event`, `B.event`, `C.event`) and these events all default to disabled from their definition in the metrics.yaml file.
Let’s walk through an example of changing configurations for these features that illustrates how the merging will work:
Initial State
This is what the initial state of the events looks like with no configurations applied. All of the events are falling back to the defaults from the metrics.yaml file. This is the starting point for Scenario 1 in the Example Scenarios.
- Feature A
  - No config, default used
  - A.event is disabled
- Feature B
  - No config, default used
  - B.event is disabled
- Feature C
  - No config, default used
  - C.event is disabled
Second State
In this state, let’s create two rollouts which will provide configurations for features A and B that will enable the events associated with each. The first rollout selects Feature A in experimenter and provides the indicated configuration in the Branch setup page. The second rollout does the same thing, only for Feature B.
- Feature A
  - Configuration:

        {
          // Other variable configs
          // Glean metric config
          "gleanMetricConfiguration": {
            "metrics_enabled": {
              "A.event": true
            }
          }
        }

  - A.event is enabled
- Feature B
  - Configuration:

        {
          // Other variable configs
          // Glean metric config
          "gleanMetricConfiguration": {
            "metrics_enabled": {
              "B.event": true
            }
          }
        }

  - B.event is enabled
- Feature C
  - No config, default used
  - C.event is disabled
As you can see, the A.event and B.event are enabled by the configurations while C.event remains disabled because there is no rollout for it.
Third State
In this state, let’s end the rollout for Feature B, start a rollout for Feature C, and launch an experiment for Feature A. Because experiments take precedence over rollouts, this should supersede our configuration from the rollout for Feature A.
- Feature A
  - Configuration:

        {
          // Other variable configs
          // Glean metric config
          "gleanMetricConfiguration": {
            "metrics_enabled": {
              "A.event": false
            }
          }
        }

  - A.event is disabled
- Feature B
  - No config, default used
  - B.event is disabled
- Feature C
  - Configuration:

        {
          // Other variable configs
          // Glean metric config
          "gleanMetricConfiguration": {
            "metrics_enabled": {
              "C.event": true
            }
          }
        }

  - C.event is enabled
After the new changes to the currently running rollouts and experiments, this client is now enrolled in the experiment for Feature A and the configuration is suppressing the A.event. Feature B is no longer sending B.event because it is reverting back to the defaults from the metrics.yaml file. And finally, Feature C is sending C.event with the rollout configuration applied.
Fourth State
Finally, in this state, let’s end the rollout for Feature C along with the experiment for Feature A. This should stop the sending of the C.event and resume sending of the A.event, as the rollout configuration will again be applied once the experiment configuration is no longer available.
- Feature A
  - Configuration:

        {
          // Other variable configs
          // Glean metric config
          "gleanMetricConfiguration": {
            "metrics_enabled": {
              "A.event": true
            }
          }
        }

  - A.event is enabled
- Feature B
  - No config, default used
  - B.event is disabled
- Feature C
  - No config, default used
  - C.event is disabled
After these changes, the client is no longer enrolled in the experiment for Feature A, so the rollout configuration applies again and A.event is enabled. Features B and C have no active configurations, so B.event and C.event fall back to their metrics.yaml defaults and are disabled.
In each case, Glean only updates the configuration associated with the feature that provided it. Nimbus’ feature exclusion would prevent a client from being enrolled in multiple rollouts or experiments for a given feature, so no more than one configuration would be applied per feature for a given client.
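The merging behavior walked through above can be sketched as a small model. This is a hypothetical illustration of the described semantics (feature and metric names come from the example), not Glean's actual implementation:

```javascript
// Hypothetical model of Server Knobs configuration merging (illustration
// only; not Glean's actual code). Each Nimbus feature supplies its own
// metric -> enabled map; Glean keeps only the latest configuration per
// feature and merges them over the metrics.yaml defaults.
function mergeServerKnobs(defaults, featureConfigs) {
  const merged = { ...defaults };
  for (const config of Object.values(featureConfigs)) {
    // Metrics omitted from a feature's configuration are left unchanged.
    Object.assign(merged, config);
  }
  return merged;
}

// metrics.yaml defaults: all three events disabled (the Initial State).
const defaults = { "A.event": false, "B.event": false, "C.event": false };

// Second State: rollouts for Features A and B enable their events.
console.log(mergeServerKnobs(defaults, {
  A: { "A.event": true },
  B: { "B.event": true },
}));
// -> A.event and B.event enabled, C.event still disabled

// Third State: Feature B's rollout ended, an experiment disables A.event,
// and a rollout enables C.event.
console.log(mergeServerKnobs(defaults, {
  A: { "A.event": false },
  C: { "C.event": true },
}));
// -> only C.event enabled
```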
Merging Caveats
Because there is currently nothing that ties a particular Nimbus Feature to a set of metrics, care must be taken to avoid feature overlap over a particular metric. If two different features supply conflicting configurations for the same metric, then whether or not the metric is enabled will likely come down to a race condition of whoever set the configuration last.
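The conflict can be illustrated with a short sketch (hypothetical feature and metric names; not Glean's actual implementation): when two features configure the same metric, the final state depends solely on which configuration was applied last.

```javascript
// Hypothetical sketch of the caveat described above. Two Nimbus features
// supply opposite settings for the same metric, so the outcome depends on
// application order: later configurations simply overwrite earlier ones.
function finalStateFor(order) {
  const configs = {
    featureA: { "shared.event": true },
    featureB: { "shared.event": false },
  };
  const state = {};
  for (const feature of order) {
    Object.assign(state, configs[feature]);
  }
  return state["shared.event"];
}

console.log(finalStateFor(["featureA", "featureB"])); // -> false
console.log(finalStateFor(["featureB", "featureA"])); // -> true
```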
Frequently Asked Questions
- How can I tell if a given client id has the metric X on?
Once we have established the functionality behind the data control plane, a dashboard for monitoring this will be provided. Details are to be determined.
- Why isn't some client id reporting the metric that should be enabled for all the clients for that channel? (e.g. Some fraction of population may get stuck on “default” config)
Nimbus must be able to both reach the client and supply it a valid configuration. Some outlier clients may be "unreachable" at times and so remain stuck on the default configuration.
Data Control Plane (a.k.a. Server Knobs)
Glean provides a Data Control Plane through which pings can be enabled or disabled through a Nimbus rollout or experiment.
For information on controlling metrics with Server Knobs, see the metrics documentation for Server Knobs - Metrics.
Product Integration
Glean provides a general Nimbus feature named `glean` that can be used for configuration of pings. Simply select the `glean` feature along with your Nimbus feature from the Experimenter UI when configuring the experiment or rollout (see the Nimbus documentation for more information on multi-feature experiments).
The `glean` Nimbus feature requires the `gleanMetricConfiguration` variable to be used to provide the required configuration. The format of the configuration is defined in the Experimenter Configuration section.
If a ping is not included, it will default to the value found in the pings.yaml.
Note that this can also serve as an override for Glean builtin pings disabled using the Configuration property `enable_internal_pings=false` during initialization.
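For example, if a product initialized Glean with `enable_internal_pings=false`, a configuration like the following (a sketch using the `pings_enabled` format from the Experimenter Configuration section) could re-enable just the `baseline` ping:

```json
{
  "gleanMetricConfiguration": {
    "pings_enabled": {
      "baseline": true
    }
  }
}
```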
Experimenter Configuration
The structure of this configuration is a key-value collection with the names of Glean pings serving as the keys and booleans representing whether the ping is enabled (`true`) or not (`false`) as the values.
In the example below, `gleanMetricConfiguration` is the name of the variable defined in the Nimbus feature. This configuration is what would be entered into the branch configuration setup in Experimenter when defining an experiment or rollout.
Example Configuration:
{
"gleanMetricConfiguration": {
"pings_enabled": {
"baseline": false,
"events": false,
"metrics": false
}
}
}
Other Server Knobs
Below are additional Glean parameters and settings that are exposed via Server Knobs for use in a Nimbus experiment or rollout.
Additional Glean settings will be added to Server Knobs as needed or by request.
For information on controlling metrics and pings via Server Knobs, please refer to Controlling Metrics with Server Knobs and Controlling Pings with Server Knobs.
Max Events
By default, Glean batches events together to submit on a single events ping.
The `event_threshold` Server Knob controls how many events Glean will collect before submitting an events ping.
For instance, if you wanted to disable batching in order to transmit an events ping after every event is recorded, you could set `event_threshold: 1`.
Example Configuration:
{
"gleanMetricConfiguration": {
"event_threshold": 1
}
}
Debugging products using the Glean SDK
Glean provides a few debugging features to assist with debugging a product using Glean.
Features
Log Pings
Print the ping payload upon sending a ping.
Debug View Tag
Tags all outgoing pings as debug pings to make them available for real-time validation, on the Glean Debug View.
Glean Debug View
The Glean Debug View enables you to easily see in real-time what data your application is sending.
This data is what actually arrives in our data pipeline, shown in a web interface that is automatically updated when new data arrives. Any data sent from a Glean-instrumented application usually shows up within 10 seconds, updating the pages automatically. Pings are retained for 3 weeks.
Troubleshooting
If nothing is showing up on the dashboard after you set a `debugViewTag` and you see `Glean must be enabled before sending pings.` in the logs, Glean is disabled. Check with the application author on how to re-enable it.
Source Tags
Tags outgoing pings with a maximum of 5 comma-separated tags.
Send Ping
Sends a ping on demand.
Debugging methods
Each Glean SDK may expose one or more of the following methods to interact with and enable these debugging functionalities.
- Enable debugging features through APIs exposed through the Glean singleton;
- Enable debugging features through environment variables set at runtime;
- Enable debugging features through platform specific tooling.
For methods 1. and 2., refer to the API reference section "Debugging" for detailed information on how to use them.
For method 3. please refer to the platform specific pages on how to debug products using Glean.
Platform Specific Information
- Debugging Android applications using the Glean SDK
- Debugging iOS applications using the Glean SDK
- Debugging Python applications using the Glean SDK
- Debugging JavaScript applications using Glean.js
Available debugging methods per platform
| | Glean API | Environment Variables | Platform Specific Tooling |
|---|---|---|---|
| Kotlin | | | ✅ 1 |
| Swift | ✅ | ✅ | ✅ 2 |
| Python | | ✅ | |
| Rust | ✅ | ✅ | |
| JavaScript | ✅ | | |
| Firefox Desktop | | ✅ | ✅ 3 |
1. The Glean Kotlin SDK exposes the `GleanDebugActivity` for interacting with debug features. Although it is technically possible to also use environment variables on Android, the Glean team is not aware of a proper way to set environment variables on Android devices or emulators.
2. The Glean Swift SDK exposes a custom URL format for interacting with debug features.
3. In Firefox Desktop, developers may use the interface exposed through `about:glean` to log, tag or send pings.
Debugging Android applications using the Glean SDK
The Glean Kotlin SDK exports the `GleanDebugActivity` that can be used to toggle debugging features on or off. Users can invoke this special activity, at run-time, using the following `adb` command:
adb shell am start -n [applicationId]/mozilla.telemetry.glean.debug.GleanDebugActivity [extra keys]
In the above:
- `[applicationId]` is the product's application id as defined in the manifest file and/or build script. For the Glean sample application, this is `org.mozilla.samples.gleancore` for a release build and `org.mozilla.samples.gleancore.debug` for a debug build.
- `[extra keys]` is a list of extra keys to be passed to the debug activity. See the documentation for the command line switches used to pass the extra keys. These are the currently supported keys:
| key | type | description |
|---|---|---|
| `logPings` | boolean (`--ez`) | If set to `true`, pings are dumped to logcat; defaults to `false` |
| `debugViewTag` | string (`--es`) | Tags all outgoing pings as debug pings to make them available for real-time validation, on the Glean Debug View. The value must match the pattern `[a-zA-Z0-9-]{1,20}`. Important: in older versions of the Glean SDK, this was named `tagPings` |
| `sourceTags` | string array (`--esa`) | Tags outgoing pings with a maximum of 5 comma-separated tags. The tags must match the pattern `[a-zA-Z0-9-]{1,20}`. The `automation` tag is meant for tagging pings generated on automation: such pings will be specially handled on the pipeline (i.e. discarded from non-live views). Tags starting with `glean` are reserved for future use. Subsequent calls of this overwrite any previously stored tag |
| `sendPing` | string (`--es`) | Sends the ping with the given name immediately |
| `startNext` | string (`--es`) | The name of an exported Android `Activity`, as defined in the product manifest file, to start right after the `GleanDebugActivity` completes. All the options provided are propagated to this next activity as well. When omitted, the default launcher activity for the product is started instead. |
All the options provided to start the activity are passed over to the main activity for the application to process. This is useful if SDK users wants to debug telemetry while providing additional options to the product to enable specific behaviors.
Note: Due to limitations on Android logcat message size, pings larger than 4KB are broken into multiple log messages when using `logPings`.
For example, to direct a release build of the Glean sample application to (1) dump pings to logcat, (2) tag the ping with the `test-metrics-ping` tag, and (3) send the "metrics" ping immediately, the following command can be used:
adb shell am start -n org.mozilla.samples.gleancore/mozilla.telemetry.glean.debug.GleanDebugActivity \
--ez logPings true \
--es sendPing metrics \
--es debugViewTag test-metrics-ping
The `logPings` command doesn't trigger ping submission and you won't see any output until a ping has been sent. You can use the `sendPing` command to force a ping to be sent, but it could be more desirable to trigger ping submission on the normal schedule. For instance, the `baseline` and `events` pings can be triggered by moving the app out of the foreground, and the `metrics` ping can be triggered normally if it is overdue for the current calendar day.
Note: The device or emulator must be connected to the internet for this to work. Otherwise the job that sends the pings won't be triggered.
If no metrics have been collected, no pings will be sent unless `send_if_empty` is set on your ping. See the ping documentation for more information on ping scheduling to learn when pings are sent.
Options that are set using the `adb` flags are not immediately reset and will persist until the application is closed or manually reset.
Glean Kotlin SDK Log messages
When running a Glean SDK-powered app in the Android emulator or on a device connected to your computer via cable, there are several ways to read the log output.
Android Studio
Android Studio can show the logs of a connected emulator or device. To display the log messages for an app:
- Run an app on your device.
- Click View > Tool Windows > Logcat (or click Logcat in the tool window bar).
The Logcat window will show all log messages and allows filtering them by application ID. Select the application ID of the product you're debugging. You can also filter for `Glean` only.
More information can be found in the View Logs with Logcat help article.
Command line
On the command line you can show all of the log output using:
adb logcat
This is the unfiltered output of all log messages.
You can match for `glean` using grep:
adb logcat | grep -i glean
A simple way to filter for only the application that is being debugged is by using pidcat, a wrapper around `adb` which adds colors and proper filtering by application ID and log level.
Run it like this to filter for an application:
pidcat [applicationId]
In the above, `[applicationId]` is the product's application id as defined in the manifest file and/or build script. For the Glean sample application, this is `org.mozilla.samples.gleancore` for a release build and `org.mozilla.samples.gleancore.debug` for a debug build.
Debugging iOS applications using the Glean SDK
Enabling debugging features in iOS through environment variables
Debugging features in iOS can be enabled using environment variables. For more information on the available features accessible through this method and how to enable them, see Debugging API reference.
These environment variables must be set on the device that is running the application.
Enabling debugging features in iOS through a custom URL scheme
For debugging and validation purposes on iOS, the Glean Swift SDK makes use of a custom URL scheme which is implemented within the application. The Glean Swift SDK provides some convenience functions to facilitate this, but it's up to the consuming application to enable this functionality. Applications that enable this feature will be able to launch the application from a URL with the Glean debug commands embedded in the URL itself.
Available commands and query format
All 4 Glean debugging features are available through the custom URL scheme tool.
- `logPings`: This is either true or false and will cause pings that are submitted to also be echoed to the device's log.
- `debugViewTag`: This command will tag outgoing pings with the provided value, in order to identify them in the Glean Debug View.
- `sourceTags`: This command tags outgoing pings with a maximum of 5 comma-separated tags.
- `sendPing`: This command expects a string name of a ping to force immediate collection and submission of.
The structure of the custom URL uses the following format:
<protocol>://glean?<command 1>=<parameter 1>&<command 2>=<parameter 2> ...
Where:
- `<protocol>` is the "URL Scheme" that has been added for your app (see Instrumenting the application below), such as `glean-sample-app`.
- This is followed by `://` and then `glean`, which is required for the Glean Swift SDK to recognize that the command is meant for it to process.
- Following standard URL query format, the next character after `glean` is the `?` indicating the beginning of the query.
- This is followed by one or more queries in the form of `<command>=<parameter>`, where the command is one of the commands listed above, followed by an `=` and then the value or parameter to be used with the command.
There are a few things to consider when creating the custom URL:
- Invalid commands will log an error and cause the entire URL to be ignored.
- Not all commands are required to be encoded in the URL; you can mix and match the commands that you need.
- Multiple instances of commands are not allowed in the same URL and, if present, will cause the entire URL to be ignored.
- The `logPings` command doesn't trigger ping submission and you won't see any output until a ping has been submitted. You can use the `sendPing` command to force a ping to be sent, but it could be more desirable to trigger ping submission on the normal schedule. For instance, the `baseline` and `events` pings can be triggered by moving the app out of the foreground, and the `metrics` ping can be triggered normally if it is overdue for the current calendar day. See the ping documentation for more information on ping scheduling to learn when pings are sent.
- Enabling debugging features through custom URLs overrides any debugging features set through environment variables.
Instrumenting the application for Glean Swift SDK debug functionality
In order to enable the debugging features in an iOS application, it is necessary to add some information to the application's `Info.plist`, and to add a line and possibly an override for a function in `AppDelegate.swift`.
Register custom URL scheme in Info.plist
Note: If your application already has a custom URL scheme implemented, there is no need to implement a second scheme; you can simply use that and skip to the next section about adding the convenience method. If the app doesn't have a custom URL scheme implemented, then you will need to perform the following instructions to register your app to receive custom URLs.
Find and open the application's `Info.plist`, right-click any blank area, and select `Add Row` to create a new key.
You will be prompted to select a key from a drop-down menu; scroll down to and select `URL types`. This creates an array item, which can be expanded by clicking the triangle disclosure icon.
Select `Item 0`, click on it and click the disclosure icon to expand it and show the `URL identifier` line. Double-click the value field and fill in your identifier, typically the same as the bundle ID.
Right-click on `Item 0` and select `Add Row` from the context menu. In the drop-down menu, select `URL Schemes` to add the item.
Click on the disclosure icon of `URL Schemes` to expand the item, double-click the value field of `Item 0` and key in the value for your application's custom scheme. For instance, the Glean sample app uses `glean-sample-app`, which allows for custom URLs to be crafted using that as a protocol, for example: `glean-sample-app://glean?logPings=true`
Add the `Glean.handleCustomUrl()` convenience function and necessary overrides
In order to handle the incoming debug commands, it is necessary to implement an override in the application's `AppDelegate.swift` file. Within that function, you can make use of the convenience function provided in Glean, `handleCustomUrl(url: URL)`.
An example of a simple implementation of this would look like this:
func application(_: UIApplication,
open url: URL,
options _: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
// ...
// This does nothing if the url isn't meant for Glean.
Glean.shared.handleCustomUrl(url: url)
// ...
return true
}
If you need additional help setting up a custom URL scheme in your application, please refer to Apple's documentation.
Invoking the Glean-iOS debug commands
Now that the app has the debug functionality enabled, there are a few ways in which we can invoke the debug commands.
Using a web browser
Perhaps the simplest way to invoke the debug functionality is to open a web browser and type or paste the custom URL into the address bar. This is especially useful on a physical device, because there is no good way to launch the app from the command line and have it process the URL on a device.
Using the glean-sample-app as an example: to activate ping logging, tag the pings to go to the Glean Debug View, and force the events
ping to be sent, enter the following URL in a web browser on the iOS device:
glean-sample-app://glean?logPings=true&debugViewTag=My-ping-tag&sendPing=events
This should cause iOS to prompt you with a dialog asking if you want to open the URL in the Glean Sample App, and if you select "Okay" then it will launch (or resume if it's already running) the application with the indicated commands and parameters and immediately force the collection and submission of the events ping.
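The structure of such a debug URL can be illustrated by parsing it; here is a sketch using Python's standard library (the URL is the sample-app example from above):

```python
from urllib.parse import urlparse, parse_qs

# The debug commands are encoded as query parameters on the reserved
# "glean" host of the app's custom scheme.
url = "glean-sample-app://glean?logPings=true&debugViewTag=My-ping-tag&sendPing=events"
parsed = urlparse(url)
commands = parse_qs(parsed.query)

assert parsed.scheme == "glean-sample-app"
assert parsed.netloc == "glean"
assert commands == {
    "logPings": ["true"],
    "debugViewTag": ["My-ping-tag"],
    "sendPing": ["events"],
}
```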
Note: This method does not work if the browser you are using to input the command is the same application you are attempting to pass the Glean debug commands to. So, you couldn't use Firefox for iOS to trigger commands within Firefox for iOS.
It is also possible to encode the URL into a 2D barcode or QR code and launch the app via the camera app. After scanning the encoded URL, the dialog prompting to launch the app should appear as if the URL were entered into the browser address bar.
Using the command line
This method is useful for testing via the Simulator, which typically requires a Mac with Xcode installed, including the Xcode command line tools. To issue the same command as above without typing the URL into a browser, run the following in a terminal on the Mac:
xcrun simctl openurl booted "glean-sample-app://glean?logPings=true&debugViewTag=My-ping-tag&sendPing=events"
This will launch the simulator and again prompt the user with a dialog box asking if you want to open the URL in the Glean Sample App (or whichever app you are instrumenting and testing).
Glean log messages
The Glean Swift SDK integrates with the unified logging system available on iOS. There are various ways to retrieve log information; see the official documentation.
If debugging in the simulator, the logging messages can be seen in the console window within Xcode.
When running a Glean-powered app in the iOS Simulator, or on a device connected to your computer via cable, you can use Console.app to view the system log.
You can filter the logs with category:glean
to only see logs from the Glean SDK.
You can also use the command line utility log
to stream the log output.
Run the following in a shell:
log stream --predicate 'category contains "glean"'
See Diagnosing Issues Using Crash Reports and Device Logs for more information about debugging deployed iOS apps.
Debugging Python applications using the Glean SDK
Debugging features in Python can be enabled using environment variables. For more information on the available features and how to enable them, see the Debugging API reference.
Sending pings
Unlike other platforms, Python doesn't expose convenience methods to send pings on demand.
If that is necessary, call the submit function for a given ping, such as pings.custom_ping.submit(), to send it.
Logging pings
Glean offers two options for logging from Python:
- Simple logging API: A simple API that only allows for setting the logging level, but includes all Glean log messages, including those from its networking subprocess. This is also the only mode in which GLEAN_LOG_PINGS can be used to display ping contents in the log.
- Flexible logging API: Full use of the Python logging module, including its features for redirecting to files and custom handling of messages, but does not include messages from the networking subprocess about HTTP requests.
Simple logging API
You can set the logging level for Glean log messages by passing logging.DEBUG
to Glean.initialize
as follows:
import logging
from glean import Glean
Glean.initialize(..., log_level=logging.DEBUG)
If you want to see ping contents as well, set the GLEAN_LOG_PINGS
environment variable to true
.
Flexible logging API
You can set the logging level for the Python logging to DEBUG
as follows:
import logging
logging.basicConfig(level=logging.DEBUG)
All log messages from the Glean Python SDK are on the glean
logger, so if you need to control it independently, you can set a level for just the Glean Python SDK (but note that the global Python logging level also needs to be set as above):
logging.getLogger("glean").setLevel(logging.DEBUG)
The flexible logging API is unable to display networking-related log messages or ping contents, even with GLEAN_LOG_PINGS set to true.
See the Python logging documentation for more information.
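For example, the file redirection mentioned above can be sketched by attaching a handler to the glean logger (the log file path here is an arbitrary choice, not something Glean prescribes):

```python
import logging
import os
import tempfile

# Route only the Glean SDK's messages to a file, using the "glean" logger
# name noted above. Handlers attached here also see records from child
# loggers such as "glean.net".
log_path = os.path.join(tempfile.gettempdir(), "glean-debug.log")

glean_logger = logging.getLogger("glean")
glean_logger.setLevel(logging.DEBUG)

handler = logging.FileHandler(log_path, mode="w")
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
glean_logger.addHandler(handler)

# Any message on the "glean" logger now ends up in the file.
glean_logger.debug("example message")
handler.flush()
```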
Debugging JavaScript applications using Glean.js
Debugging features in JavaScript can be enabled through APIs exposed on the Glean object. For more information on the available features and how to enable them, see the Debugging API reference.
Debugging in the browser
Websites running Glean allow you to debug at runtime using the window.Glean
object in the browser console. You can start debugging by simply:
- Opening the browser console
- Calling one of the window.Glean APIs: window.Glean.setLogPings, window.Glean.setDebugViewTag, or window.Glean.setSourceTags.
These debugging options will persist for the length of the current page session. Once the tab is closed, you will need to make those API calls again.
Sending pings
Unlike other platforms, JavaScript doesn't expose convenience methods to send pings on demand.
If that is necessary, call the submit function for a given ping, such as pings.customPing.submit(), to send it.
Note that this method is only effective for custom pings. Glean internal pings are not exposed to users.
Logging pings
After calling Glean.setLogPings(true), all subsequent pings sent will be logged to the console.
To access the logs for web extensions:

Firefox
- Go to about:debugging#/runtime/this-firefox;
- Find the extension you want to see the logs for;
- Click on Inspect.

Chromium-based browsers
- Go to chrome://extensions;
- Find the extension you want to see the logs for;
- Click on background page.
How-tos
This chapter contains various how-tos and walkthroughs to help aid you in using Glean.
Server Knobs Walkthrough
A step-by-step guide in setting up and launching a Server Knobs Experiment.
"Real-Time" Events
A guide describing the different methods to collect and transmit data in a "real-time" fashion using Glean.
Telemetry/Data Bug Investigation Recommendations
Recommendations and tips on investigating data anomalies.
Server Knobs: A Complete Walkthrough
Purpose
This documentation serves as a step by step guide on how to create a Server Knobs configuration and make use of it in a Nimbus experiment or rollout. The intent is to explain everything from selecting the metrics or pings you wish to control all the way through launching the experiment or rollout and validating the data is being collected.
Audience
This documentation is aimed at the general users of Nimbus experimentation who wish to enable or disable specific metrics and/or pings as part of their deployment. This documentation assumes the reader has no special knowledge of the inner workings of either Glean or Nimbus, but it does assume that the audience has already undergone the prerequisite Nimbus training program and has access to Experimenter to create experiment and rollout definitions.
Let’s Get Started!
The first step in running a Server Knobs experiment or rollout is creating the definition for it in Experimenter. For the purposes of this walkthrough, a rollout will be used but the instructions are interchangeable if you are instead launching an experiment.
Experiment Setup
Create a new experiment
From the Experimenter landing page, we select the “Create New” button to begin defining the new rollout.
Initial experiment definition
The initial setup requires a name, hypothesis, and the selection of a target application.
Here we enter a human readable name for the rollout and a brief synopsis of what we expect to learn in the “Hypothesis” section. In the final field we select our target application, Firefox Desktop. When that is complete, we click the “Next” button to proceed with the rollout definition.
The next screen we are presented with is the “Summary” page of the experiment/rollout, where we can add additional metadata like a longer description, and link to any briefs or other documentation related to the rollout. We complete the required information and then click on “Save and Continue”.
Initial branch configuration
The next screen we are presented with is the branch configuration page. On this page we can select the glean
feature, and check the box to indicate that this is a rollout.
At this point, we now need to create a bit of JSON configuration to put in the box seen here:
Building the Server Knobs configuration for metrics
This JSON configuration is the set of instructions for Glean that lets it know which metrics to enable or disable. In order to do this, we will visit the Glean Dictionary to help identify the metrics we wish to work with and get the identifiers from them needed for the configuration in Experimenter.
Upon arriving at the Glean Dictionary, we must first find the right application.
We are going to select “Firefox for Desktop” to match the application we previously selected in Experimenter. This brings us to a screen where we can search and filter for metrics which are defined in the application.
To help locate the metrics we are interested in, start typing in the search box at the top of the list. For instance, if we were interested in urlbar metrics:
From here we can select a metric, such as “urlbar.engagement” to see more information about it:
Here we can see this metric currently has active sampling configurations in both release and nightly. Let’s say we wish to add one for beta also. Our next step is to click the copy to clipboard button next to “Sampling Configuration Snippet”:
With the snippet copied to the clipboard, we return to Experimenter and our rollout configuration. We can now paste this snippet into the text-box like below:
That’s all that needs to be done here, if this is the only metric we need to configure. But what if we want to configure more than one? Then it’s back to Glean Dictionary to find the rest of the metrics we are interested in. Let’s say we are also interested in the “exposure” metric.
As we select the exposure metric from the list, we can see it isn’t currently being sampled by any experiments or rollouts, and we again find the button to copy the configuration snippet to the clipboard.
Now, we can paste this just below the other snippet inside of Experimenter.
As you can see, the JSON validator isn't happy and there's a red squiggle indicating that there's a problem. We only need a part of the latest pasted snippet, so we copy the "urlbar.exposure": true portion of the snippet, and add a comma after the "urlbar.engagement": true in the snippet above.
We can then delete the remains of the snippet below, leaving us with a metric configuration with multiple metrics in it. This can be repeated for all the necessary metrics required by the rollout or experiment.
Adding pings to the configuration
This same procedure can be used to enable and disable pings, also. In order to copy a snippet for a ping, navigate to the “Pings” tab for the application on the Glean Dictionary.
From here, select a ping that is desired to be configured remotely. For instance, the “crash” ping:
Just like with the metric, select the button to copy the configuration snippet to your clipboard, then paste it into the Experimenter setup.
This time we need to get everything for the "pings_enabled" section, and copy and paste it below the "metrics_enabled" section in the snippet above. We also need to add a comma after the metrics section's closing curly brace, like this:
Then we can delete the remains of the pasted snippet at the bottom, leaving us with:
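Assembled by hand, the final configuration from this walkthrough can be reconstructed as follows (sketched in Python; the actual snippet text comes from the Glean Dictionary, so this structure mirrors it rather than defines it):

```python
import json

# Reconstruction of the final Server Knobs configuration: the two urlbar
# metrics plus the "crash" ping, merged as described in the steps above.
config = {
    "metrics_enabled": {
        "urlbar.engagement": True,
        "urlbar.exposure": True,
    },
    "pings_enabled": {
        "crash": True,
    },
}

# The text pasted into Experimenter is the JSON serialization of this:
config_json = json.dumps(config, indent=2)
```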
Wrapping up
That should be everything needed to enable the two urlbar metrics, as well as the crash ping. Additional experiment branches can be configured in a similar fashion, or if this is a rollout, it should be ready to launch if the metric configuration is all that is needed.
"Real-Time" Events
Defining "real-time" events within the Glean SDK
For the purposes of the Glean SDK and its capabilities, "real-time" is limited to: minimizing the time between instrumentation and reporting. It does not imply or describe how quickly received data is made available for querying.
Methods to achieve this with Glean
Option 1: Configuring Glean to send all events as soon as they are recorded
Glean "events" ping submission can be configured either during initialization or through Server Knobs.
Setting the maximum event threshold to a value of 1 will configure the Glean SDK to submit an "events" ping for each and every event as it is recorded. By default, the Glean SDK batches 500 events per "events" ping.
As of November 2024, Desktop Release:
Median user per day:
- 67 events / 3 pings
- Turning on one event per ping for the median user would result in approximately 21 times more "events" ping volume.
85th percentile user per day:
- 305 events / 11 pings
- Turning on one event per ping for the 85th percentile user would result in approximately 26 times more "events" ping volume.
95th percentile user per day:
- 706 events / 19 pings
- Turning on one event per ping for the 95th percentile user would result in approximately 36 times more "events" ping volume.
The current release population of Desktop as a whole sends us over 10 billion events per day in over 340 million event pings. Sending each of those events as a ping would increase the ping volume by 32 times the current rate.
Based on this it is safe to assume that sending 1 event per event ping would increase the ingestion traffic and downstream overhead between 20-40x the current levels with Glean batching of events in the client. This is a significant increase that should be taken into consideration before configuring Glean to disable event batching.
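The arithmetic behind these multipliers can be sketched as follows; note that using the rounded per-user figures quoted above yields slightly different ratios than the ones in the text, which were presumably computed from unrounded data:

```python
# With one event per "events" ping, the daily ping count equals the daily
# event count, so the volume multiplier is simply events / pings.
def ping_volume_multiplier(events_per_day, pings_per_day):
    return events_per_day / pings_per_day

median_user = ping_volume_multiplier(67, 3)
p85_user = ping_volume_multiplier(305, 11)
p95_user = ping_volume_multiplier(706, 19)

# All of these fall in the 20-40x range cited above.
assert all(20 <= m <= 40 for m in (median_user, p85_user, p95_user))
```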
Option 2: Using a custom ping and submitting it immediately ("Pings-as-Events")
If it isn't necessary to receive all Glean SDK events that are instrumented in an application in "real-time", it may be preferable to create a custom ping which contains the relevant information to capture the context around the event and submit it as soon as the application event occurs.
This approach has an additional advantage over using a plain event: custom pings are less restrictive than event extras in terms of what data and which Glean SDK metric types can be used.
If it is important to see the event that is being represented as a custom ping in context with other application events, then you only need to
define an event metric and use the send_in_pings
parameter to send it in both the custom ping and the Glean built-in "events" ping. It can
then be seen in sequence and within context of all of the application events, and still be sent in "real-time" as needed.
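As a sketch, a hypothetical metrics.yaml entry using this pattern might look like the following (the category, event, and ping names here are illustrative, not part of the Glean SDK):

```yaml
shopping:
  checkout_completed:
    type: event
    # Sent both in a custom "checkout" ping (submitted immediately when
    # the application event occurs) and in the built-in "events" ping,
    # so it still appears in sequence with other application events.
    send_in_pings:
      - checkout
      - events
    # ... other required parameters (description, bugs, etc.) omitted
```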
Considerations
What "real-time" Glean events/pings are not
Configuring the Glean SDK to submit events as soon as they are recorded, or using custom pings to submit data immediately, does not mean that the data is available for analysis in real time. There are networks to traverse, ingestion pipelines, ETL, etc. that are all factors to keep in mind when considering how soon the data is available for analysis purposes. This documentation only covers configuring the Glean SDK to send the data in a real-time fashion and does not make any assumptions about the analysis of data in real time.
More network requests
For every event recorded or custom ping submitted, a network request will be generated as the ping is submitted for ingestion. By default, the Glean SDK batches up to 500 events per "events" ping, so this has the potential to generate up to 500 times as many network requests as the current defaults for the Glean SDK "events" ping.
More ingestion endpoint traffic
As a result of the increased network requests, the ingestion endpoint will need to handle this additional traffic. This increases the load of all the processing steps that are involved with ingesting event data from an application.
Storage space requirements
Typically the raw dataset for Glean events contains 1-500 events in a single row of the database. This row also includes metadata such as information about the client application and the ping itself. With only a single event per "events" ping, the replication of this metadata across the database will use additional space to house repeated information that should rarely, if ever, change between events.
Telemetry/Data Bug Investigation Recommendations
This document outlines several diagnostic categories and the insights they may offer when investigating unusual telemetry patterns or data anomalies.
1. Countries
- Purpose: Identify geographical patterns that could explain anomalies.
- Column Name:
metadata.geo.country
- Considerations:
- Are there ongoing national holidays or similar events that could affect data?
- Is the region known for bot activity or unusual behavior?
2. ISP (Internet Service Provider)
- Purpose: Analyze data at a more granular level than countries to identify potential automation or bot activity.
- Column Name:
metadata.isp.name
- Considerations:
- Could the anomaly be traced back to a single ISP, potentially indicating automation?
- Be mindful of the large number of ISPs; consider applying filters (e.g., a HAVING clause) to exclude smaller ISPs.
3. Product Version / Build ID
- Purpose: Check if issues began with a specific product version or build.
- Column Names:
client_info.app_display_version
,client_info.app_build
- Considerations:
- Did the issue arise after a particular version update? If so, collaborate with the product team to identify changes.
- Ensure that the build ID matches a known Mozilla build. If not, it could be a clone, fork, or side-load build.
4. Glean SDK Version
- Purpose: Determine whether the issue is tied to a specific Glean SDK version.
- Column Name:
client_info.telemetry_sdk_build
- Considerations:
- Did the anomaly start after an update to Glean? Work with the Glean team to verify version changes.
5. Other Library Version Changes
- Purpose: Identify possible regressions due to library updates.
- Considerations:
- Review updates to Application Services, Gecko, and other dependencies (e.g., Viaduct, rkv) that could affect telemetry collection.
6. OS/Platform SDK Version
- Purpose: Check if Operating System or platform SDK changes are impacting data collection.
- Column Names:
client_info.os_version (Android only: client_info.android_sdk_version)
- Considerations:
- Have there been changes to platform lifecycle events or background task behaviors (e.g., 0-duration pings, or ping submission issues)?
- Has the OS changed the behaviour of system APIs?
7. Time Differences: start/end_time vs. submission_timestamp
- Purpose: Assess the delay between telemetry collection and submission.
- Column Names:
ping_info.parsed_start_time
,ping_info.parsed_end_time
,submission_timestamp
- Considerations:
- Are the recorded timestamps reasonable, both in terms of the ping time window and the delay from collection to submission?
8. Glean Errors
- Purpose: Identify telemetry or network errors related to data collection.
- Considerations:
- Are there networking errors, ingestion issues, or other telemetry failures that could be related to the anomaly?
9. Hardware Details (Manufacturer/Version) (Mobile platforms only)
- Purpose: Determine if the issue is hardware-specific.
- Column Names:
client_info.device_manufacturer
,client_info.device_model
- Considerations:
- Does the anomaly occur primarily on older or newer hardware models?
10. Ping reason
- Purpose: Determine the reason a ping was sent.
- Column Names:
ping_info.reason
- Considerations:
- Does the anomaly occur primarily for a specific reason?
- The built-in pings have different ping reasons based on their schedule.
YAML Registry Format
User defined Glean pings and metrics are declared in YAML files, which must be parsed by
glean_parser
to generate public APIs
for said metrics and pings.
These files also serve the purpose of documenting metrics and pings. They are consumed by the
probe-scraper
tool, which generates a REST API to
access metrics and pings information consumed by most other tools in the Glean ecosystem, such as
GLAM and the Glean Dictionary.
Moreover, for products that do not wish to use the Glean Dictionary as their metrics and pings documentation source, glean_parser
provides an option to generate Markdown documentation for metrics and pings based on these files. For more information of that, refer to the help output
of the translate
command, by running in your terminal:
$ glean_parser translate --help
metrics.yaml
file
For a full reference on the metrics.yaml
format, refer to the
Metrics YAML Registry Format page.
pings.yaml
file
For a full reference on the pings.yaml
format, refer to the
Pings YAML Registry Format page.
tags.yaml
file
For a full reference on the tags.yaml
format, refer to the
Tags YAML Registry Format page.
Metrics YAML Registry Format
Metrics sent by an application or library are defined in YAML files which follow
the metrics.yaml
JSON schema.
These files must be parsed by glean_parser
at build time
in order to generate code in the target language (e.g. Kotlin, Swift, ...). The generated code is
what becomes the public API to access the project's metrics.
For more information on how to introduce the glean_parser
build step for a specific language /
environment, refer to the "Adding Glean to your project"
section of this book.
Note on the naming of these files
Although we refer to metrics definitions YAML files as
metrics.yaml
throughout the Glean documentation, these files may be named whatever makes the most sense for each project, and may even be broken down into multiple files if necessary.
File structure
---
# Schema
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

$tags:
  - frontend

# Category
toolbar:
  # Name
  click:
    # Metric Parameters
    type: event
    description: |
      Event to record toolbar clicks.
    metadata:
      tags:
        - Interaction
    notification_emails:
      - CHANGE-ME@example.com
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    expires: 2019-06-01

  double_click:
    ...
Schema
Declaring the schema at the top of a metrics definitions file is required, as it is what indicates that the current file is a metrics definitions file.
$tags
You may optionally declare tags at the file level that apply to all metrics in that file.
Category
Categories are the top-level keys on metrics definition files. One single definition file may contain multiple categories grouping multiple metrics. They serve the purpose of grouping related metrics in a project.
Categories can contain lower case alphanumeric characters as well as the . and _ characters, which can be used to provide extra structure; for example, category.subcategory is a valid category.
Category lengths may not exceed 40 characters.
Categories may not start with the string glean
. That prefix is reserved for Glean internal metrics.
See the "Capitalization" note to understand how the category is formatted in generated code.
Name
Metric names are the second-level keys on metrics definition files.
Names may contain alphanumeric lower case characters as well as the _
character. Metric name
lengths may not exceed 30 characters.
"Capitalization" rules also apply to metric names on generated code.
Metric parameters
Specific metric types may have special required parameters in their definition; these parameters are documented in each "Metric Type" reference page.
Following are the parameters common to all metric types.
Required parameters
type
Specifies the type of a metric, like "counter" or "event". This defines which operations are valid for the metric, how it is stored and how data analysis tooling displays it. See the list of supported metric types.
Types should not be changed after release
Once a metric is defined in a product, its
type
should only be changed in rare circumstances. It's better to rename the metric with the new type instead. The ingestion pipeline will create a new column for a metric with a changed type. Any new analysis will need to use the new column going forward. The old column will still be populated with data from old clients.
description
A textual description of the metric for humans. It should describe what the metric does, what it means for analysts, and its edge cases or any other helpful information.
The description field may contain markdown syntax.
Imposed limits on line length
The Glean linter uses a line length limit of 80 characters. If your description is longer, e.g. because it includes longer links, you can disable yamllint using the following annotations (and make sure to enable yamllint again as well):

# yamllint disable
description: |
  Your extra long description, that's longer than 80 characters by far.
# yamllint enable
notification_emails
A list of email addresses to notify for important events with the metric or when people with context or ownership for the metric need to be contacted.
For example, when a metric's expiration is within 14 days, emails will be sent
from telemetry-alerts@mozilla.com
to the notification_emails
addresses associated with the metric.
Consider adding both a group email address and an individual who is responsible for this metric.
bugs
A list of bugs (e.g. Bugzilla or GitHub) that are relevant to this metric. For example, bugs that track its original implementation or later changes to it.
Each entry should be the full URL to the bug in an issue tracker. The use of numbers alone is deprecated and will be an error in the future.
data_reviews
A list of URIs to any data collection review responses relevant to the metric.
expires
When the metric is set to expire.
After a metric expires, an application will no longer collect or send data related to it. May be one of the following values:
- <build date>: An ISO date yyyy-mm-dd in UTC on which the metric expires. For example, 2019-03-13. This date is checked at build time. Except in special cases, this form should be used so that the metric automatically "sunsets" after a period of time. Emails will be sent to the notification_emails addresses when the metric is about to expire. Generally, when a metric is no longer needed, it should simply be removed. This does not affect the availability of data already collected by the pipeline.
- <major version>: An integer greater than 0 representing the major version the metric expires in. For example, 11. The version is checked at build time against the major version provided to glean_parser (see e.g. Build configuration for Android, Build configuration for iOS) and is only valid if a major version is provided at build time. If no major version is provided at build time and expiration by major version is used for a metric, an error is raised. Note that mixing expiration by date and version is not allowed within a product.
- never: This metric never expires.
- expired: This metric is manually expired.
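A sketch of the build-time check for the date-based and literal values (version-based expiry omitted for brevity); the authoritative logic lives in glean_parser, so treat this as illustrative only:

```python
from datetime import date, datetime, timezone

# Assumption: a metric counts as expired on or after its `expires` date.
def is_expired(expires, build_date=None):
    if expires == "never":
        return False
    if expires == "expired":
        return True
    # Otherwise treat the value as an ISO date checked against the build date.
    today = build_date or datetime.now(timezone.utc).date()
    return date.fromisoformat(expires) <= today

assert is_expired("2019-06-01", build_date=date(2020, 1, 1))
assert not is_expired("2019-06-01", build_date=date(2019, 1, 1))
```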
Optional parameters
tags
default: []
A list of tag names associated with this metric. Must correspond to an entry specified in a tags file.
lifetime
default: ping
Defines the lifetime of the metric. Different lifetimes affect when the metrics value is reset.
ping
(default)
The metric is cleared each time it is submitted in the ping. This is the most common case, and should be used for metrics that are highly dynamic, such as things computed in response to the user's interaction with the application.
application
The metric is related to an application run, and is cleared after the application restarts and any Glean-owned ping, due at startup, is submitted. This should be used for things that are constant during the run of an application, such as the operating system version. In practice, these metrics are generally set during application startup. A common mistake, using the ping lifetime for these types of metrics, means that they will only be included in the first ping sent during a particular run of the application.
user
NOTE: Reach out to the Glean team before using this.
The metric is part of the user's profile and will live as long as the profile lives.
This is often not the best choice unless the metric records a value that really needs
to be persisted for the full lifetime of the user profile, e.g. an identifier like the client_id
,
the day the product was first executed. It is rare to use this lifetime outside of some metrics
that are built in to the Glean SDK.
send_in_pings
default: events | metrics
Defines which pings the metric should be sent on.
If not specified, the metric is sent on the default ping,
which is the events
ping for events and the metrics
ping for everything else.
Most metrics don't need to specify this unless they are sent on custom pings.
The special value default may be used in case a metric needs to be sent on the default ping as well as in a custom ping.
Adding metrics to every ping
For the small number of metrics that should be in every ping the Glean SDKs will eventually provide a solution. See bug 1695236 for details.
send_in_pings:
- my-custom-ping
- default
disabled
default: false
Data collection for this metric is disabled.
This is useful when you want to temporarily disable the collection for a specific metric without removing references to it in your source code.
Generally, when a metric is no longer needed, it should simply be removed. This does not affect the availability of data already collected by the pipeline.
version
default: 0
The version of the metric. A monotonically increasing integer value. This should be bumped if the metric changes in a backward-incompatible way.
data_sensitivity
default: []
A list of data sensitivity categories that the metric falls under. There are four data collection categories related to data sensitivity defined in Mozilla's data collection review process:
Category 1: Technical Data (technical
)
Information about the machine or Firefox itself. Examples include OS, available memory, crashes and errors, outcome of automated processes like updates, safe browsing, activation, versions, and build id. This also includes compatibility information about features and APIs used by websites, add-ons, and other 3rd-party software that interact with Firefox during usage.
Category 2: Interaction Data (interaction
)
Information about the user’s direct engagement with Firefox. Examples include how many tabs, add-ons, or windows a user has open; uses of specific Firefox features; session length, scrolls and clicks; and the status of discrete user preferences. It also includes information about the user's in-product journeys and product choices helpful to understand engagement (attitudes). For example, selections of add-ons or tiles to determine potential interest categories etc.
Category 3: Stored Content & Communications (stored_content
)
(formerly Web activity data, web_activity
)
Information about what people store, sync, communicate or connect to where the information is generally considered to be more sensitive and personal in nature. Examples include users' saved URLs or URL history, specific web browsing history, general information about their web browsing history (such as TLDs or categories of webpages visited over time) and potentially certain types of interaction data about specific web pages or stories visited (such as highlighted portions of a story). It also includes information such as content saved by users to an individual account like saved URLs, tags, notes, passwords and files as well as communications that users have with one another through a Mozilla service.
Category 4: Highly sensitive data or clearly identifiable personal data (highly_sensitive)
Information that directly identifies a person, or if combined with other data could identify a person. This data may be embedded within specific website content, such as memory contents, dumps, captures of screen data, or DOM data. Examples include account registration data like name, password, and email address associated with an account, payment data in connection with subscriptions or donations, contact information such as phone numbers or mailing addresses, email addresses associated with surveys, promotions and customer support contacts. It also includes any data from different categories that, when combined, can identify a person, device, household or account. For example Category 1 log data combined with Category 3 saved URLs. Additional examples are: voice audio commands (including a voice audio file), speech-to-text or text-to-speech (including transcripts), biometric data, demographic information, and precise location data associated with a persistent identifier, individual or small population cohorts. This is location inferred or determined from mechanisms other than IP such as wi-fi access points, Bluetooth beacons, cell phone towers or provided directly to us, such as in a survey or a profile.
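In a metrics.yaml file, the chosen categories are listed under a metric's data_sensitivity key. A hypothetical counter metric labeled as Category 1 might look like this (the metric names and URLs here are illustrative, not from a real project):

```yaml
update:
  failure_count:
    type: counter
    description: How often updates failed to apply.
    data_sensitivity:
      - technical
    bugs:
      - https://bugzilla.mozilla.org/123456789/
    data_reviews:
      - http://example.com/path/to/data-review
    notification_emails:
      - CHANGE-ME@example.com
    expires: never
```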
Pings YAML Registry Format
Custom pings sent by an application or library are defined in YAML files which follow the pings.yaml JSON schema. These files must be parsed by glean_parser at build time in order to generate code in the target language (e.g. Kotlin, Swift, ...). The generated code is what becomes the public API to access the project's custom pings.
For more information on how to introduce the glean_parser
build step for a specific language /
environment, refer to the "Adding Glean to your project"
section of this book.
Note on the naming of these files
Although we refer to ping definition YAML files as pings.yaml throughout the Glean documentation, these files may be named whatever makes the most sense for each project and may even be broken down into multiple files, if necessary.
File structure
---
# Schema
$schema: moz://mozilla.org/schemas/glean/pings/2-0-0
# Name
search:
# Ping parameters
description: >
A ping to record search data.
include_client_id: false
notification_emails:
- CHANGE-ME@example.com
bugs:
- http://bugzilla.mozilla.org/123456789/
data_reviews:
- http://example.com/path/to/data-review
Schema
Declaring the schema at the top of a ping definitions file is required, as it is what indicates that the current file is a ping definitions file.
Name
Ping names are the top-level keys in ping definitions files. A single definition file may contain multiple ping declarations.
Ping names are limited to lowercase letters from the ISO basic Latin alphabet and hyphens and a maximum of 30 characters.
Pings may not contain the words custom or ping in their names. These are considered redundant words and will trigger a REDUNDANT_PING lint failure in glean_parser.
"Capitalization" rules apply to ping names in generated code.
Reserved ping names
The names baseline, metrics, events, deletion-request, default and all-pings are reserved and may not be used as the name of a custom ping.
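Putting the naming rules above together, a hypothetical validator (not part of glean_parser's public API; the rule set simply mirrors the text above) might look like:

```python
import re

# Reserved ping names listed above.
RESERVED = {"baseline", "metrics", "events", "deletion-request", "default", "all-pings"}
# Words considered redundant in a ping name.
REDUNDANT = {"custom", "ping"}

def validate_ping_name(name: str) -> list[str]:
    """Return a list of problems with a candidate ping name (empty if valid)."""
    problems = []
    # Lowercase ISO basic Latin letters and hyphens, at most 30 characters.
    if not re.fullmatch(r"[a-z-]{1,30}", name):
        problems.append("must be 1-30 lowercase letters or hyphens")
    if name in RESERVED:
        problems.append("name is reserved")
    # Hyphen-separated words may not include "custom" or "ping".
    if REDUNDANT & set(name.split("-")):
        problems.append("contains a redundant word (REDUNDANT_PING)")
    return problems
```

For example, `validate_ping_name("search")` passes, while `"my-custom-ping"` and `"baseline"` are rejected.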
Ping parameters
Required parameters
description
A textual description of the purpose of the ping. It may contain markdown syntax.
metadata
default: {}
A dictionary of extra metadata associated with this ping.
tags
default: []
A list of tag names associated with this ping. Must correspond to an entry specified in a tags file.
ping_schedule
default: []
A list of ping names. When one of those pings is sent, then this ping is also sent, with the same reason. This is useful if you want a ping to be scheduled and sent at the same frequency as another ping, like baseline.
Pings cannot list themselves under ping_schedule. However, it is possible to accidentally create cycles of pings where Ping A schedules Ping B, which schedules Ping C, which in turn schedules Ping A. This can result in a constant stream of pings being sent. Please use caution with ping_schedule, and ensure that you have not accidentally created any cycles with the ping references.
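For example, a hypothetical usage-report ping sent whenever the baseline ping is sent might be declared as follows (note that in glean_parser, tags and ping_schedule are sub-keys of the metadata dictionary):

```yaml
usage-report:
  description: >
    Usage data, sent on the same schedule as the baseline ping.
  include_client_id: true
  metadata:
    ping_schedule:
      - baseline
  bugs:
    - https://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review
  notification_emails:
    - CHANGE-ME@example.com
```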
include_client_id
A boolean indicating whether to include the client_id in the client_info section of the ping.
notification_emails
A list of email addresses to notify for important events with the ping or when people with context or ownership for the ping need to be contacted.
Consider adding both a group email address and an individual who is responsible for this ping.
bugs
A list of bugs (e.g. Bugzilla or GitHub) that are relevant to this ping. For example, bugs that track its original implementation or later changes to it.
Each entry should be the full URL to the bug in an issue tracker. The use of numbers alone is deprecated and will be an error in the future.
data_reviews
A list of URIs to any data collection review responses relevant to the metric.
Optional parameters
send_if_empty
default: false
A boolean indicating whether the ping is sent even if it contains no metric data.
reasons
default: {}
The reasons that this ping may be sent. The keys are the reason codes,
and the values are a textual description of each reason.
The ping payload will (optionally) contain one of these reasons in the ping_info.reason field.
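For example, a hypothetical ping with two reason codes might be declared as:

```yaml
update-check:
  description: Sent when an update check completes.
  include_client_id: false
  send_if_empty: true
  reasons:
    scheduled: The check ran on its regular schedule.
    manual: The user requested the check from the UI.
  bugs:
    - https://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review
  notification_emails:
    - CHANGE-ME@example.com
```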
Tags YAML Registry Format
Any number of custom "tags" can be added to any metric or ping.
This can be useful in data discovery tools like the Glean Dictionary.
The tags for an application are defined in YAML files which follow
the tags.yaml
JSON schema.
These files must be parsed by glean_parser
at build time in order to generate the metadata.
For more information on how to introduce the glean_parser
build step for a specific language /
environment, refer to the "Adding Glean to your project"
section of this book.
Note on the naming of these files
Although we refer to tag definition YAML files as tags.yaml throughout the Glean documentation, these files may be named whatever makes the most sense for each project and may even be broken down into multiple files, if necessary.
File structure
---
# Schema
$schema: moz://mozilla.org/schemas/glean/tags/1-0-0
Search:
description: Metrics or pings in the "search" domain
Schema
Declaring the schema at the top of a tag definitions file is required, as it is what indicates that the current file is a tag definitions file.
Name
Tag names are the top-level keys in tag definitions files. A single definition file may contain multiple tag declarations.
There is no restriction on the name of a tag, aside from a maximum length of 80 characters.
Tag parameters
Required parameters
description
A textual description of the tag. It may contain markdown syntax.
The General API
The Glean SDKs have a minimal API available on their top-level Glean
object called the General API.
This API allows, among other things, enabling and disabling upload, registering custom pings and setting experiment data.
Only initialize in the main application!
Glean should only be initialized from the main application, not individual libraries. If you are adding Glean support to a library, you can safely skip this section.
The API
The Glean SDKs provide a general API that supports the following operations. See API reference pages for SDK-specific details.
Operation | Description | Notes |
---|---|---|
initialize | Configure and initialize the Glean SDK. | Initializing the Glean SDK |
setUploadEnabled | Enable or disable Glean collection and upload. | Toggling upload status |
registerPings | Register custom pings generated from pings.yaml . | Custom pings |
setExperimentActive | Indicate that an experiment is running. | Using the Experiments API |
setExperimentInactive | Indicate that an experiment is no longer running. | Using the Experiments API |
registerEventListener | Register a callback by which a consumer can be notified of all event metrics being recorded. | Glean Event Listener |
Initializing
Glean needs to be initialized in order to be able to send pings, record metrics and perform maintenance tasks. Thus it is advised that Glean be initialized as soon as possible in an application's lifetime and importantly, before any other libraries in the application start using Glean.
Libraries are not required to initialize Glean
Libraries rely on the same Glean singleton as the application in which they are embedded. Hence, they are not expected to initialize Glean as the application should already do that.
Behavior when uninitialized
Any API called before Glean is initialized is queued and applied at initialization. To avoid unbounded memory growth the queue is bounded (currently to a maximum of 1 million tasks), and further calls are dropped.
The number of calls dropped, if any,
is recorded in the glean.error.preinit_tasks_overflow
metric.
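This queue-then-drop behavior can be sketched as follows (an illustrative model, not Glean's actual dispatcher; the real cap is about 1 million tasks, shrunk here for demonstration):

```python
class PreInitQueue:
    """Queues API calls made before init; drops (and counts) overflow."""

    def __init__(self, max_tasks: int = 3):  # real Glean uses ~1 million
        self.max_tasks = max_tasks
        self.tasks = []
        self.overflow_count = 0  # mirrors glean.error.preinit_tasks_overflow

    def enqueue(self, task):
        if len(self.tasks) < self.max_tasks:
            self.tasks.append(task)
        else:
            self.overflow_count += 1  # call is dropped, only the count survives

    def flush(self):
        """At initialization, apply all queued calls in order."""
        for task in self.tasks:
            task()
        self.tasks.clear()
```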
Behavior once initialized
When upload is enabled
Once initialized, if upload is enabled, Glean applies all metric recordings and ping submissions, for both user-defined and builtin metrics and pings.
This always happens asynchronously.
When upload is disabled
If upload is disabled, any persisted metrics, events and pings (other than first_run_date) are cleared. Pending deletion-request pings are sent. Subsequent calls to record metrics and submit pings will be no-ops.
Because Glean performs this clearing as part of its initialization, users are required to always initialize Glean.
Glean must be initialized even if upload is disabled.
This does not apply to special builds where telemetry is disabled at build time. In that case, it is acceptable to not call initialize at all.
API
Glean.initialize(configuration)
Initializes Glean.
May only be called once. Subsequent calls to initialize are no-ops.
Configuration
The available initialize
configuration options may vary depending on the SDK.
Below are listed the configuration options available on most SDKs.
Note that on some SDKs some of the options are taken as a configuration object.
Check the respective SDK documentation for details.
Configuration Option | Default value | Description |
---|---|---|
applicationId | On Android/iOS: determined automatically. Otherwise required. | Application identifier. For Android and iOS applications, this is the id used on the platform's respective app store and is extracted automatically from the application context. |
uploadEnabled | Required | The user preference on whether or not data upload is enabled. |
channel | - | The application's release channel. When present, the app_channel will be reported in all pings' client_info sections. |
appBuild | On Android/iOS: determined automatically. Otherwise: - | A build identifier e.g. the build identifier generated by a CI system (e.g. "1234/A"). If not present, app_build will be reported as "Unknown" on all pings' client_info sections. |
appDisplayVersion | - | The user visible version string for the application running Glean. If not present, app_display_version will be reported as "Unknown" on all pings' client_info sections. |
serverEndpoint | https://incoming.telemetry.mozilla.org | The server pings are sent to. |
maxEvents | Glean.js: 1. Other SDKs: 500. | The maximum number of events the Glean storage will hold on to before submitting the 'events' ping. Refer to the events ping documentation for more information on its scheduling. |
httpUploader | - | A custom HTTP uploader instance that will overwrite Glean's provided uploader. Useful for users that wish to use specific uploader implementations. See Custom Uploaders for more information on how and when to use this feature. |
logLevel | - | The level for how verbose the internal logging is. The level filter options in order from least to most verbose are: Off , Error , Warn , Info , Debug , Trace . See the log crate docs for more information. |
enableEventTimestamps | true | Whether to add a wall clock timestamp to all events. |
rateLimit | 15 pings per 60s interval | Specifies the maximum number of pings that can be uploaded per interval of a specified number of seconds. |
experimentationId | - | Optional. An identifier derived by the application to be sent in all pings for the purpose of experimentation. See the experiments API documentation for more information. |
enableInternalPings | true | Whether to enable the internal "baseline", "events", and "metrics" pings. |
delayPingLifetimeIo | false | Whether Glean should delay persistence of data from metrics with ping lifetime. On Android data is automatically persisted every 1000 writes and on backgrounding when enabled. |
To learn about SDK specific configuration options available, refer to the Reference section.
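As an illustration of the rateLimit default in the table above (15 pings per 60-second interval), a fixed-window throttle behaves like this sketch (illustrative only, not Glean's uploader implementation; the clock is injected so the behavior is easy to follow):

```python
class PingRateLimiter:
    """Allow at most `max_pings` uploads per `interval_seconds` window."""

    def __init__(self, max_pings: int = 15, interval_seconds: int = 60):
        self.max_pings = max_pings
        self.interval = interval_seconds
        self.window_start = None
        self.count = 0

    def try_upload(self, now: float) -> bool:
        # Start a new window if none is open or the current one has elapsed.
        if self.window_start is None or now - self.window_start >= self.interval:
            self.window_start = now
            self.count = 0
        if self.count < self.max_pings:
            self.count += 1
            return True
        return False  # throttled until the window rolls over
```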
Always initialize Glean with the correct upload preference
Glean must always be initialized with real values.
Always pass the user preference, e.g. Glean.initialize(uploadEnabled=userSettings.telemetryEnabled) or the equivalent for your application. Calling Glean.setUploadEnabled(false) at a later point will trigger deletion-request pings and regenerate client IDs. This should only be done if the user preference actually changes.
An excellent place to initialize Glean is within the onCreate
method of the class that extends Android's Application
class.
import mozilla.telemetry.glean.Glean
import org.mozilla.yourApplication.GleanMetrics.GleanBuildInfo
import org.mozilla.yourApplication.GleanMetrics.Pings
class SampleApplication : Application() {
override fun onCreate() {
super.onCreate()
// If you have custom pings in your application, you must register them
// using the following command. This command should be omitted for
// applications not using custom pings.
Glean.registerPings(Pings)
// Initialize the Glean library.
Glean.initialize(
applicationContext,
// Here, `settings()` is a method to get user preferences, specific to
// your application and not part of the Glean API.
uploadEnabled = settings().isTelemetryEnabled,
configuration = Configuration(),
buildInfo = GleanBuildInfo.buildInfo
)
}
}
The Glean Kotlin SDK supports use across multiple processes. This is enabled by setting a dataPath value in the Glean.Configuration object passed to Glean.initialize. You do not need to set a dataPath for your main process; this configuration should only be used by a non-main process.
Requirements for a non-main process:
- Glean.initialize must be called with the dataPath value set in the Glean.Configuration.
- The default dataPath for Glean is {context.applicationInfo.dataDir}/glean_data. If you try to use this path, Glean.initialize will fail and throw an error.
- Set the default process name as your main process. If this is not set up correctly, pings from the non-main process will not send:
Configuration.Builder().setDefaultProcessName(<main_process_name>)
Note: When initializing from a non-main process with a specified dataPath, the lifecycle observers will not be set up. This means you will not receive otherwise scheduled baseline or metrics pings.
Consuming Glean through Android Components
When the Glean Kotlin SDK is consumed through Android Components, it is required to configure an HTTP client to be used for upload.
For example:
// Requires `org.mozilla.components:concept-fetch`
import mozilla.components.concept.fetch.Client
// Requires `org.mozilla.components:lib-fetch-httpurlconnection`.
// This can be replaced by other implementations, e.g. `lib-fetch-okhttp`
// or an implementation from `browser-engine-gecko`.
import mozilla.components.lib.fetch.httpurlconnection.HttpURLConnectionClient
import mozilla.components.service.glean.config.Configuration
import mozilla.components.service.glean.net.ConceptFetchHttpUploader
val httpClient = ConceptFetchHttpUploader(lazy { HttpURLConnectionClient() as Client })
val config = Configuration(httpClient = httpClient)
Glean.initialize(
context,
uploadEnabled = true,
configuration = config,
buildInfo = GleanBuildInfo.buildInfo
)
An excellent place to initialize Glean is within the application(_:)
method of the class that extends the UIApplicationDelegate
class.
import Glean
import UIKit
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
func application(_: UIApplication, didFinishLaunchingWithOptions _: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
// If you have custom pings in your application, you must register them
// using the following command. This command should be omitted for
// applications not using custom pings.
Glean.shared.registerPings(GleanMetrics.Pings)
// Initialize the Glean library.
Glean.shared.initialize(
// Here, `Settings` is a method to get user preferences specific to
// your application, and not part of the Glean API.
uploadEnabled: Settings.isTelemetryEnabled,
configuration: Configuration(),
buildInfo: GleanMetrics.GleanBuild.info
)
return true
}
}
The Glean Swift SDK supports use across multiple processes. This is enabled by setting a dataPath value in the Glean.Configuration object passed to Glean.initialize. You do not need to set a dataPath for your main process; this configuration should only be used by a non-main process.
Requirements for a non-main process:
- Glean.initialize must be called with the dataPath value set in the Glean.Configuration.
- On iOS devices, Glean stores data in the Application Support directory. The default dataPath Glean uses is {FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask)[0]}/glean_data. If you try to use this path, Glean.initialize will fail and throw an error.
Note: When initializing from a non-main process with a specified dataPath, the lifecycle observers will not be set up. This means you will not receive otherwise scheduled baseline or metrics pings.
The main control for the Glean Python SDK is on the glean.Glean
singleton.
from glean import Glean, Configuration
Glean.initialize(
application_id="my-app-id",
application_version="0.1.0",
# Here, `is_telemetry_enabled` is a method to get user preferences specific to
# your application, and not part of the Glean API.
upload_enabled=is_telemetry_enabled(),
configuration=Configuration(),
)
Unlike in other implementations, the Python SDK does not automatically send any pings. See the custom pings documentation about adding custom pings and sending them.
The Glean Rust SDK should be initialized as soon as possible.
use glean::{ClientInfoMetrics, Configuration};
let cfg = Configuration {
data_path,
application_id: "my-app-id".into(),
// Here, `is_telemetry_enabled` is a method to get user preferences specific to
// your application, and not part of the Glean API.
upload_enabled: is_telemetry_enabled(),
max_events: None,
delay_ping_lifetime_io: false,
server_endpoint: Some("https://incoming.telemetry.mozilla.org".into()),
uploader: None,
use_core_mps: true,
};
let client_info = ClientInfoMetrics {
app_build: env!("CARGO_PKG_VERSION").to_string(),
app_display_version: env!("CARGO_PKG_VERSION").to_string(),
channel: None,
locale: None,
};
glean::initialize(cfg, client_info);
The Glean Rust SDK does not support use across multiple processes, and must only be initialized on the application's main process.
Unlike in other implementations, the Rust SDK does not provide a default uploader.
See PingUploader
for details.
import Glean from "@mozilla/glean/<platform>";
Glean.initialize(
"my-app-id",
// Here, `isTelemetryEnabled` is a method to get user preferences specific to
// your application, and not part of the Glean API.
isTelemetryEnabled(),
{
appDisplayVersion: "0.1.0"
}
);
Custom Uploaders
A custom HTTP uploader may be provided at initialization time in order to overwrite Glean's native ping uploader implementation. Each SDK exposes a base class for Glean users to extend into their own custom uploaders.
See BaseUploader
for details on how to implement a custom upload on Kotlin.
See HttpPingUploader
for details on how to implement a custom upload on Swift.
See BaseUploader
for details on how to implement a custom upload on Python.
See PingUploader
for details on how to implement a custom upload on Rust.
import { Uploader, UploadResult, UploadResultStatus } from "@mozilla/glean/uploader";
import Glean from "@mozilla/glean/<platform>";
/**
* My custom uploader implementation
*/
export class MyCustomUploader extends Uploader {
async post(url: string, body: string, headers?: Record<string, string>): Promise<UploadResult> {
// My custom POST request code
}
}
Glean.initialize(
"my-app-id",
// Here, `isTelemetryEnabled` is a method to get user preferences specific to
// your application, and not part of the Glean API.
isTelemetryEnabled(),
{
httpClient: new MyCustomUploader()
}
);
Testing API
When unit testing metrics and pings, Glean needs to be put in testing mode. Initializing Glean for tests is referred to as "resetting". It is advised that Glean is reset before each unit test to prevent side effects of one unit test impacting others.
How to do that and the definition of "testing mode" varies per Glean SDK. Refer to the information below for SDK specific information.
Using the Glean Kotlin SDK's unit testing API requires adding
Robolectric 4.0 or later as a testing dependency.
In Gradle, this can be done by declaring a testImplementation
dependency:
dependencies {
testImplementation "org.robolectric:robolectric:4.3.1"
}
In order to put the Glean Kotlin SDK into testing mode apply the JUnit GleanTestRule
to your test class.
Testing mode will prevent issues with async calls when unit testing the Glean SDK on Kotlin. It also
enables uploading and clears the recorded metrics at the beginning of each test run.
The rule can be used as shown:
@RunWith(AndroidJUnit4::class)
class ActivityCollectingDataTest {
// Apply the GleanTestRule to set up a disposable Glean instance.
// Please note that this clears the Glean data across tests.
@get:Rule
val gleanRule = GleanTestRule(ApplicationProvider.getApplicationContext())
@Test
fun checkCollectedData() {
// The Glean Kotlin SDK testing APIs can be called here.
}
}
This will ensure that metrics are done recording when the other test functions are used.
Note: There's no automatic test rule for Glean tests implemented in Swift.
In order to prevent issues with async calls when unit testing the Glean SDK, it is important to put the Glean Swift SDK into testing mode. When the Glean Swift SDK is in testing mode, it enables uploading and clears the recorded metrics at the beginning of each test run.
Activate it by resetting Glean in your test's setup:
// All pings and metrics testing APIs are marked as `internal`
// so you need to import `Glean` explicitly in test mode.
import XCTest
class GleanUsageTests: XCTestCase {
override func setUp() {
Glean.shared.resetGlean(clearStores: true)
}
// ...
}
This will ensure that metrics are done recording when the other test functions are used.
The Glean Python SDK contains a helper function glean.testing.reset_glean()
for resetting Glean for tests.
It has two required arguments: the application ID, and the application version.
Each reset of the Glean Python SDK will create a new temporary directory for Glean to store its data in. This temporary directory is automatically cleaned up the next time the Glean Python SDK is reset or when the testing framework finishes.
The instructions below assume you are using pytest as the test runner. Other test-running libraries have similar features, but are different in the details.
Create a file conftest.py
at the root of your test directory, and add the following to reset Glean at the start of every test in your suite:
import pytest
from glean import testing
@pytest.fixture(name="reset_glean", scope="function", autouse=True)
def fixture_reset_glean():
testing.reset_glean(application_id="my-app-id", application_version="0.1.0")
Note
Glean uses a global singleton object. Tests need to run single-threaded or need to ensure exclusivity using a lock.
The Glean Rust SDK contains a helper function test_reset_glean()
for resetting Glean for tests.
It has three required arguments:
- the configuration to use
- the client info to use
- whether to clear stores before initialization
You can call it like below in every test:
#![allow(unused)]
fn main() {
    let dir = tempfile::tempdir().unwrap();
    let tmpname = dir.path().to_path_buf();
    let cfg = glean::Configuration {
        data_path: tmpname,
        application_id: "app-id".into(),
        upload_enabled: true,
        max_events: None,
        delay_ping_lifetime_io: false,
        server_endpoint: Some("invalid-test-host".into()),
        uploader: None,
        use_core_mps: false,
    };
    let client_info = glean::ClientInfoMetrics::unknown();
    glean::test_reset_glean(cfg, client_info, false);
}
The Glean JavaScript SDK contains a helper function testResetGlean()
for resetting Glean for tests.
It expects the same list of arguments as Glean.initialize
.
Each reset of the Glean JavaScript SDK will clear stores. Calling testResetGlean
will also make
metrics and pings testing APIs available and replace ping uploading with a mock implementation that
does not make real HTTP requests.
import { testResetGlean } from "@mozilla/glean/testing"
describe("myTestSuite", () => {
beforeEach(async () => {
await testResetGlean("my-test-id");
});
});
Reference
Toggling upload status
The Glean SDKs provide an API for toggling Glean's upload status after initialization.
Applications instrumented with Glean are expected to provide some form of user interface to allow for toggling the upload status.
Disabling upload
When upload is disabled, the Glean SDK will perform the following tasks:
- Submit a deletion-request ping.
- Cancel scheduled ping uploads.
- Clear metrics and pings data from the client, except for the first_run_date metric.
While upload is disabled, metrics aren't recorded and no data is uploaded.
Enabling upload
When upload is enabled, the Glean SDK will re-initialize its core metrics.
The only core metric that is not re-initialized is the first_run_date
metric.
While upload is enabled all metrics are recorded as expected and pings are sent to the telemetry servers.
API
Glean.setUploadEnabled(boolean)
Enables or disables upload.
If called prior to initialize, this function is a no-op.
If the upload state is not actually changed in between calls to this function, it is also a no-op.
import mozilla.telemetry.glean.Glean
open class MainActivity : AppCompatActivity() {
override fun onCreate() {
// ...
uploadSwitch.setOnCheckedChangeListener { _, isChecked ->
if (isChecked) {
Glean.setUploadEnabled(true)
} else {
Glean.setUploadEnabled(false)
}
}
}
}
import mozilla.telemetry.glean.Glean
Glean.INSTANCE.setUploadEnabled(false);
import Glean
import UIKit
class ViewController: UIViewController {
@IBOutlet var enableSwitch: UISwitch!
// ...
@IBAction func enableToggled(_: Any) {
Glean.shared.setUploadEnabled(enableSwitch.isOn)
}
}
from glean import Glean
Glean.set_upload_enabled(False)
use glean;
glean::set_upload_enabled(false);
import Glean from "@mozilla/glean/web";
const uploadSwitch = document.querySelector("input[type=checkbox].upload-switch");
uploadSwitch.addEventListener("change", event => {
if (event.target.checked) {
Glean.setUploadEnabled(true);
} else {
Glean.setUploadEnabled(false);
}
});
Reference
Using the experiments API
The Glean SDKs support tagging all their pings with experiments annotations. The annotations are useful to report that experiments were active at the time the measurements were collected. The annotations are reported in the optional experiments entry in the ping_info section of all pings.
Experiment annotations are not persisted
The experiment annotations set through this API are not persisted by the Glean SDKs. The application or consuming library is responsible for setting the relevant experiment annotations at each run.
It's not required to define experiment IDs and branches
Experiment IDs and branches don't need to be pre-defined in the Glean SDK registry files. Please also note that the extra map is a non-nested arbitrary String to String map. It also has limits on the size of the keys and values, defined below.
Recording API
setExperimentActive
Annotates Glean pings with experiment data.
// Annotate Glean pings with experiments data.
Glean.setExperimentActive(
experimentId = "blue-button-effective",
branch = "branch-with-blue-button",
extra = mapOf(
"buttonLabel" to "test"
)
)
// Annotate Glean pings with experiments data.
Glean.shared.setExperimentActive(
experimentId: "blue-button-effective",
branch: "branch-with-blue-button",
extra: ["buttonLabel": "test"]
)
from glean import Glean
Glean.set_experiment_active(
experiment_id="blue-button-effective",
branch="branch-with-blue-button",
extra={
"buttonLabel": "test"
}
)
use std::collections::HashMap;

let mut extra = HashMap::new();
extra.insert("buttonLabel".to_string(), "test".to_string());
glean::set_experiment_active(
"blue-button-effective".to_string(),
"branch-with-blue-button".to_string(),
Some(extra),
);
C++
At present there is no dedicated C++ Experiments API for Firefox Desktop
If you require one, please file a bug.
JavaScript
let FOG = Cc["@mozilla.org/toolkit/glean;1"].createInstance(Ci.nsIFOG);
FOG.setExperimentActive(
"blue-button-effective",
"branch-with-blue-button",
{"buttonLabel": "test"}
);
Limits
experimentId
,branch
, and the keys and values of theextra
field are fixed at a maximum length of 100 bytes. Longer strings are truncated. (Specifically, length is measured in the number of bytes when the string is encoded in UTF-8.)extra
map is limited to 20 entries. If passed a map which contains more elements than this, it is truncated to 20 elements. WARNING Which items are truncated is nondeterministic due to the unordered nature of maps. What's left may not necessarily be the first elements added.
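These truncation rules can be sketched with a couple of hypothetical helpers (not Glean API; byte lengths are measured on the UTF-8 encoding, as described above):

```python
MAX_BYTES = 100   # for experimentId, branch, and extra keys/values
MAX_ENTRIES = 20  # for the extra map

def truncate_utf8(value: str, max_bytes: int = MAX_BYTES) -> str:
    """Truncate a string to at most max_bytes of its UTF-8 encoding."""
    encoded = value.encode("utf-8")[:max_bytes]
    # Drop any partially-cut final character rather than emit invalid UTF-8.
    return encoded.decode("utf-8", errors="ignore")

def truncate_extra(extra: dict) -> dict:
    """Keep at most MAX_ENTRIES entries, truncating all keys and values."""
    # Which entries survive is effectively arbitrary for unordered maps,
    # as the warning above notes.
    items = list(extra.items())[:MAX_ENTRIES]
    return {truncate_utf8(k): truncate_utf8(v) for k, v in items}
```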
Recorded errors
- invalid_value: If the values of experimentId or branch are truncated for length, if the keys or values in the extra map are truncated for length, or if the extra map is truncated for the number of elements.
setExperimentInactive
Removes the experiment annotation. Should be called when the experiment ends.
Glean.setExperimentInactive("blue-button-effective")
Glean.shared.setExperimentInactive(experimentId: "blue-button-effective")
from glean import Glean
Glean.set_experiment_inactive("blue-button-effective")
glean::set_experiment_inactive("blue-button-effective".to_string());
C++
At present there is no dedicated C++ Experiments API for Firefox Desktop
If you require one, please file a bug.
JavaScript
let FOG = Cc["@mozilla.org/toolkit/glean;1"].createInstance(Ci.nsIFOG);
FOG.setExperimentInactive("blue-button-effective");
Set an experimentation identifier
An experimentation enrollment identifier that is derived and provided by the application can be set through the configuration object passed into the initialize
function. See the section on Initializing Glean for more information on how to set this within the Configuration
object.
This identifier will be set during initialization and sent along with all pings sent by Glean, unless that ping has opted out of sending the client_id. This identifier is not persisted by Glean and must be persisted by the application if necessary for it to remain consistent between runs.
Limits
The experimentation ID is subject to the same limitations as a string metric type.
Recorded errors
The experimentation ID will produce the same errors as a string metric type.
Testing API
testIsExperimentActive
Reveals if the experiment is annotated in Glean pings.
assertTrue(Glean.testIsExperimentActive("blue-button-effective"))
XCTAssertTrue(Glean.shared.testIsExperimentActive(experimentId: "blue-button-effective"))
from glean import Glean
assert Glean.test_is_experiment_active("blue-button-effective")
assert!(glean::test_is_experiment_active("blue-button-effective".to_string()));
testGetExperimentData
Returns the recorded experiment data including branch and extras.
assertEquals(
"branch-with-blue-button", Glean.testGetExperimentData("blue-button-effective")?.branch
)
XCTAssertEqual(
"branch-with-blue-button",
Glean.shared.testGetExperimentData(experimentId: "blue-button-effective")?.branch
)
from glean import Glean
assert (
"branch-with-blue-button" ==
Glean.test_get_experiment_data("blue-button-effective").branch
)
assert_eq!(
"branch-with-blue-button",
glean::test_get_experiment_data("blue-button-effective".to_string()).branch,
);
C++
At present there is no dedicated C++ Experiments API for Firefox Desktop.
If you require one, please file a bug.
JavaScript
let FOG = Cc["@mozilla.org/toolkit/glean;1"].createInstance(Ci.nsIFOG);
Assert.equal(
"branch-with-blue-button",
FOG.testGetExperimentData("blue-button-effective").branch
);
testGetExperimentationId
Returns the current Experimentation ID, if any.
assertEquals("alpha-beta-gamma-delta", Glean.testGetExperimentationId())
XCTAssertEqual(
"alpha-beta-gamma-delta",
Glean.shared.testGetExperimentationId()!,
"Experimentation ids must match"
)
from glean import Glean
assert "alpha-beta-gamma-delta" == Glean.test_get_experimentation_id()
assert_eq!(
"alpha-beta-gamma-delta".to_string(),
glean::test_get_experimentation_id(),
"Experimentation id must match"
);
Reference
Registering custom pings
After defining custom pings, glean_parser
is able to generate code from
pings.yaml
files into a Pings
object, which must be registered so Glean can send pings by name.
API
registerPings
Loads custom ping metadata into your application or library.
In Kotlin, this object must be registered from your startup code before calling Glean.initialize
(such as in your application's onCreate
method or a function called from that method).
import org.mozilla.yourApplication.GleanMetrics.Pings
override fun onCreate() {
Glean.registerPings(Pings)
Glean.initialize(applicationContext, uploadEnabled = true)
}
In Swift, this object must be registered from your startup code before calling Glean.shared.initialize
(such as in your application's UIApplicationDelegate
application(_:didFinishLaunchingWithOptions:)
method or a function called from that method).
import Glean
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
func application(_: UIApplication, didFinishLaunchingWithOptions _: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
Glean.shared.registerPings(GleanMetrics.Pings)
Glean.shared.initialize(uploadEnabled: true)
return true
}
}
For Python, the pings.yaml
file must be available and loaded at runtime.
While the Python SDK does provide a Glean.register_ping_type
function, if your project is a script (i.e. just Python files in a directory), you can load the pings.yaml
before calling Glean.initialize
using:
from glean import Glean, load_pings
pings = load_pings("pings.yaml")
Glean.initialize(
application_id="my-app-id",
application_version="0.1.0",
upload_enabled=True,
)
If your project is a distributable Python package, you need to include the pings.yaml
file using one of the myriad ways to include data in a Python package and then use pkg_resources.resource_filename()
to get the filename at runtime.
from glean import load_pings
from pkg_resources import resource_filename
pings = load_pings(resource_filename(__name__, "pings.yaml"))
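As one illustrative way to make pings.yaml available in a distributable package, a setuptools-based setup.py could declare it as package data. The project and package names below are assumptions for the sketch, not values prescribed by Glean:

```python
# setup.py -- one of many ways to ship pings.yaml inside a Python package.
# "my-app" / "my_app" are hypothetical project and package names.
from setuptools import setup, find_packages

setup(
    name="my-app",
    version="0.1.0",
    packages=find_packages(),
    # Bundle the YAML file next to the package's Python modules so that
    # pkg_resources.resource_filename() can locate it at runtime.
    package_data={"my_app": ["pings.yaml"]},
)
```

With the file packaged this way, the resource_filename() call shown above resolves it regardless of where the package is installed.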
In Rust, custom pings need to be registered individually.
This should be done before calling glean::initialize.
use your_glean_metrics::pings;
glean::register_ping_type(&pings::custom_ping);
glean::register_ping_type(&pings::search);
glean::initialize(cfg, client_info);