Developing the Glean SDK


The Glean SDK is a modern approach for a Telemetry library and is part of the Glean project.

To contact us, find us on chat.mozilla.org or file issues on GitHub or Bugzilla.

The source code is available on GitHub.

About this book

This book is meant for developers of the Glean SDK.

For user documentation on the Glean SDK, refer to the Glean Book.

For documentation about the broader end-to-end Glean project, refer to the Mozilla Data Documentation.

License

The Glean SDK Source Code is subject to the terms of the Mozilla Public License v2.0. You can obtain a copy of the MPL at https://mozilla.org/MPL/2.0/.

Running the tests

Running all tests

The tests for all languages may be run from the command line:

make test

Windows Note: On Windows, make is not available by default. While not required, installing make will allow you to use the convenience features in the Makefile.

Running the Rust tests

The Rust tests may be run with the following command:

cargo test --all

Log output can be controlled via the environment variable RUST_LOG for the glean_core crate:

export RUST_LOG=glean_core=debug

When running tests with logging you need to tell cargo to not suppress output:

cargo test -- --nocapture

Tests run in parallel by default, leading to interleaving log lines. This makes it harder to understand what's going on. For debugging you can force single-threaded tests:

cargo test -- --nocapture --test-threads=1

Running the Kotlin/Android tests

From the command line

The full Android test suite may be run from the command line with:

./gradlew test

From Android Studio

To run the full Android test suite, in the "Gradle" pane, navigate to glean-core -> Tasks -> verification and double-click either testDebugUnitTest or testReleaseUnitTest (depending on whether you want to run in Debug or Release mode). You can save this task permanently by opening the task dropdown in the toolbar and selecting "Save glean.rs:glean:android [testDebugUnitTest] Configuration".

To run a single Android test, navigate to the file containing the test, and right click on the green arrow in the left margin next to the test. There you have a choice of running or debugging the test.

Running the Swift/iOS tests

From the command line

The full iOS test suite may be run from the command line with:

make test-swift

From Xcode

To run the full iOS test suite, run tests in Xcode (Product -> Test). To run a single Swift test, navigate to the file containing the test, and click on the arrow in the left margin next to the test.

Testing in CI

See Continuous Integration for details.

How the Glean CI works


The mozilla/glean repository uses both CircleCI and TaskCluster to run tests on each Pull Request, each merge to the main branch and on releases.

Which tasks we run

Pull Request - testing

For each opened pull request we run a number of checks to ensure consistent formatting, no lint failures, and that tests across all language bindings pass. These checks are required to pass before a pull request is merged. By default this includes the following tasks:

On CircleCI:

  • Formatting & lints for Rust, Kotlin, Swift & Python
  • Consistency checks for generated header files, license checks across dependencies and a schema check
  • Builds & tests for Rust, Kotlin, Python & C#
    • The Python language bindings are built for multiple platforms and Python versions
  • Documentation generation for all languages and the book, including spell check & link check
  • By default no Swift/iOS tests are launched. These can be run manually. See below.

On TaskCluster:

  • A full Android build & test

For all tasks you can always find the full log and generated artifacts by following the "Details" links on each task.

iOS tests on Pull Request

Due to their long runtime and cost, iOS tests are not run by default on Pull Requests. Admins of the mozilla/glean repository can run them on demand.

  1. On the pull request, scroll to the list of all checks at the bottom.
  2. Find the job titled "ci/circleci: iOS/hold".
  3. Click "Details" to get to Circle CI.
  4. Click on the "hold" task, then approve it.
  5. The iOS tasks will now run.

Merge to main


When pull requests are merged to the main branch we run the same checks as we do on pull requests. Additionally we run the following tasks:

  • Documentation deployment
  • iOS build, unit & integration test

If you notice that these tasks fail, please take a look and open a bug to investigate the build failure.

Releases

When cutting a new release of the Glean SDK our CI setup is responsible for building, packaging and uploading release builds. We don't run the full test suite again.

We run the following tasks:

On CircleCI:

On TaskCluster:

On release the full TaskCluster suite takes up to 30 minutes to run.

How to find TaskCluster tasks

  1. Go to GitHub releases and find the version
  2. Go to the release commit, indicated by its commit id on the left
  3. Click the green tick or red cross left of the commit title to open the list of all checks.
  4. Find the "Decision task" and click "Details"
  5. From the list of tasks pick the one you want to look at and follow the "View task in Taskcluster" link.

Special behavior

Documentation-only changes

Documentation is deployed from CI, so CI needs to run on documentation changes. However, some of the long-running code tests can be skipped. To skip them, add the following literal string to the last commit message of your pull request:

[doc only]

Skipping CI completely

It is possible to completely skip running CI checks on a pull request.

To skip tasks on CircleCI, include the following literal string in the commit message.
To skip tasks on TaskCluster, add the same string to the title of your pull request.

[ci skip]

This should only be used for metadata files, such as those in .github, LICENSE or CODE_OF_CONDUCT.md.
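As an illustration only (this is not the actual CI implementation), the marker checks described above amount to literal substring tests on the commit message or pull request title; the function names below are hypothetical:

```python
# Hypothetical sketch of the marker checks described above; the real logic
# lives in the CI configuration, not in these functions.
def skip_code_tests(last_commit_message: str) -> bool:
    # "[doc only]" in the last commit message skips long-running code tests
    return "[doc only]" in last_commit_message

def skip_ci(commit_message: str, pr_title: str) -> bool:
    # "[ci skip]" in the commit message skips CircleCI;
    # in the pull request title it skips TaskCluster
    return "[ci skip]" in commit_message or "[ci skip]" in pr_title
```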

Release-like Android builds

Release builds are done on TaskCluster. As it builds for multiple platforms this task is quite long and does not run on pull requests by default.

It is possible to trigger the task on pull requests. Add the following literal string to the pull request title:

[ci full]

If added after initially opening the pull request, you need to close, then re-open the pull request to trigger a new build.

The Decision Task will spawn module-build-glean and module-build-glean-gradle-plugin tasks. When they finish you will find the generated files under Artifacts on TaskCluster.

Glean release process

The Glean SDK consists of multiple libraries for different platforms and targets. The main supported libraries are released as one. Development happens on the main repository https://github.com/mozilla/glean. See Contributing for how to contribute changes to the Glean SDK.

The development & release process roughly follows the GitFlow model.

Note: The rest of this section assumes that upstream points to the https://github.com/mozilla/glean repository, while origin points to the developer fork. For some developer workflows, upstream can be the same as origin.


Published artifacts

Standard Release

Releases can only be done by one of the Glean maintainers.

  • Main development branch: main
  • Main release branch: release
  • Specific release branch: release-vX.Y.Z
  • Hotfix branch: hotfix-vX.Y.(Z+1)
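The branch naming scheme can be sketched as a small helper (illustrative only; in practice branch names are typed by hand):

```python
def branch_names(version: str) -> dict:
    """Derive branch names from a released version X.Y.Z (illustrative sketch)."""
    major, minor, patch = (int(part) for part in version.split("."))
    return {
        "release": f"release-v{major}.{minor}.{patch}",
        "hotfix": f"hotfix-v{major}.{minor}.{patch + 1}",
        "support": f"support/v{major}.{minor}",
    }
```

For example, branch_names("25.0.0") yields release-v25.0.0, hotfix-v25.0.1 and support/v25.0.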

Create a release branch

  1. Announce your intention to create a release in the team chat.
  2. Create a release branch from the main branch:
    git checkout -b release-v25.0.0 main
    
  3. Update the changelog.
    1. Add any missing important changes under the Unreleased changes headline.
    2. Commit any changes to the changelog file due to the previous step.
  4. Run bin/prepare-release.sh <new version> to bump the version number.
    1. The new version should be the next patch, minor or major version of what is currently released.
    2. Let it create a commit for you.
  5. Push the new release branch:
    git push upstream release-v25.0.0
    
  6. Open a pull request to merge back the specific release branch to the development branch: https://github.com/mozilla/glean/compare/main...release-v25.0.0?expand=1
    • Wait for CI to finish on that branch and ensure it's green.
    • Do not merge this PR yet!
  7. Apply additional commits for bug fixes to this branch.
    • Adding large new features here is strictly prohibited. They need to go to the main branch and wait for the next release.
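The version computed in step 4 follows semantic versioning. The bump logic can be sketched as follows (the actual bump is performed by bin/prepare-release.sh; this helper is only an illustration):

```python
def next_version(current: str, part: str = "patch") -> str:
    # Illustrative only: bin/prepare-release.sh performs the real version bump.
    major, minor, patch = (int(p) for p in current.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```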

Finish a release branch

When CI has finished and is green for your specific release branch, you are ready to cut a release.

  1. Check out the main release branch:
    git checkout release
    
  2. Merge the specific release branch:
    git merge --no-ff -X theirs release-v25.0.0
    
  3. Push the main release branch:
    git push upstream release
    
  4. Tag the release on GitHub:
    1. Create a New Release in the GitHub UI (Releases > Draft a New Release).
    2. Enter v<myversion> as the tag. It's important this is the same as the version you specified to the prepare-release.sh script, with the v prefix added.
    3. Select the release branch as the target.
    4. Under the description, paste the contents of the release notes from CHANGELOG.md.
    5. Click the green Publish Release button.
  5. Wait for the CI build to complete for the tag.
    • You can check on CircleCI for the running build.
    • You can find the TaskCluster task on the corresponding commit. See How to find TaskCluster tasks for details.
    • On rare occasions CI tasks may fail. You can safely rerun them on CircleCI by selecting Rerun, then Rerun workflow from failed on the top right on the failed task.
  6. Merge the Pull Request opened previously.
    • This is important so that no changes are lost.
    • If this PR is "trivial" (no bugfixes or merge conflicts of note from earlier steps) you may land it without review.
    • There is a separate CircleCI task for the release branch, ensure that it also completes.
  7. Once the above pull request lands, delete the specific release branch.
  8. Post a message to #glean:mozilla.org announcing the new release.
    • Include a copy of the release-specific changelog if you want to be fancy.

Hotfix release for latest version

If the latest released version requires a bug fix, a hotfix branch is used.

Create a hotfix branch

  1. Announce your intention to create a release in the team chat.
  2. Create a hotfix branch from the main release branch:
    git checkout -b hotfix-v25.0.1 release
    
  3. Run bin/prepare-release.sh <new version> to bump the version number.
    1. The new version should be the next patch version of what is currently released.
    2. Let it create a commit for you.
  4. Push the hotfix branch:
    git push upstream hotfix-v25.0.1
    
  5. Create a local hotfix branch for bugfixes:
    git checkout -b bugfix hotfix-v25.0.1
    
  6. Fix the bug and commit the fix in one or more separate commits.
  7. Push your bug fixes and create a pull request against the hotfix branch: https://github.com/mozilla/glean/compare/hotfix-v25.0.1...your-name:bugfix?expand=1
  8. When that pull request lands, wait for CI to finish on that branch and ensure it's green.

Finish a hotfix branch

When CI has finished and is green for your hotfix branch, you are ready to cut a release, similar to a normal release:

  1. Check out the main release branch:
    git checkout release
    
  2. Merge the hotfix branch:
    git merge --no-ff hotfix-v25.0.1
    
  3. Push the main release branch:
    git push upstream release
    
  4. Tag the release on GitHub:
    1. Create a New Release in the GitHub UI (Releases > Draft a New Release).
    2. Enter v<myversion> as the tag. It's important this is the same as the version you specified to the prepare-release.sh script, with the v prefix added.
    3. Select the release branch as the target.
    4. Under the description, paste the contents of the release notes from CHANGELOG.md.
  5. Wait for the CI build to complete for the tag.
  6. Send a pull request to merge back the hotfix branch to the development branch: https://github.com/mozilla/glean/compare/main...hotfix-v25.0.1?expand=1
    • This is important so that no changes are lost.
    • This might have merge conflicts with the main branch, which you need to fix before it is merged.
  7. Once the above pull request lands, delete the hotfix branch.

Hotfix release for previous version

If you need to release a hotfix for a previously released version (that is: not the latest released version), you need a support branch.

Note: This should rarely happen. We generally support only the latest released version of Glean.

Create a support and hotfix branch

  1. Announce your intention to create a release in the team chat.
  2. Create a support branch from the version tag and push it:
    git checkout -b support/v24.0 v24.0.0
    git push upstream support/v24.0
    
  3. Create a hotfix branch for this support branch:
    git checkout -b hotfix-v24.0.1 support/v24.0
    
  4. Fix the bug and commit the fix in one or more separate commits into your hotfix branch.
  5. Push your bug fixes and create a pull request against the support branch: https://github.com/mozilla/glean/compare/support/v24.0...your-name:hotfix-v24.0.1?expand=1
  6. When that pull request lands, wait for CI to finish on that branch and ensure it's green.

Finish a support branch

  1. Check out the support branch:
    git checkout support/v24.0
    
  2. Update the changelog.
    1. Add any missing important changes under the Unreleased changes headline.
    2. Commit any changes to the changelog file due to the previous step.
  3. Run bin/prepare-release.sh <new version> to bump the version number.
    1. The new version should be the next patch version of the support branch.
    2. Let it create a commit for you.
  4. Push the support branch:
    git push upstream support/v24.0
    
  5. Tag the release on GitHub:
    1. Create a New Release in the GitHub UI (Releases > Draft a New Release).
    2. Enter v<myversion> as the tag. It's important this is the same as the version you specified to the prepare-release.sh script, with the v prefix added.
    3. Select the support branch (e.g. support/v24.0) as the target.
    4. Under the description, paste the contents of the release notes from CHANGELOG.md.
  6. Wait for the CI build to complete for the tag.
  7. Send a pull request to merge back any bug fixes to the development branch: https://github.com/mozilla/glean/compare/main...support/v24.0?expand=1
    • This is important so that no changes are lost.
    • This might have merge conflicts with the main branch, which you need to fix before it is merged.
  8. Once the above pull request lands, delete the support branch.

Upgrading android-components to a new version of Glean

On Android, Mozilla products consume the Glean SDK through its wrapper in android-components. The process of vendoring a new version of Glean into Mozilla Central now handles the upgrade process for android-components, so no manual updating is needed here.

Contributing to the Glean SDK

Anyone is welcome to help with the Glean SDK project. Feel free to get in touch with other community members on chat.mozilla.org or through issues on GitHub or Bugzilla.

Participation in this project is governed by the Mozilla Community Participation Guidelines.

Bug Reports

To report issues or request changes, file a bug in Bugzilla in Data Platform & Tools :: Glean: SDK.

If you don't have a Bugzilla account, we also accept issues on GitHub.

Making Code Changes

To work on the code in this repository you will need to be familiar with the Rust programming language. You can get a working Rust compiler and toolchain via rustup.

You can check that everything compiles by running the following from the root of your checkout:

make test-rust

If you plan to work on the Android component bindings, you should also review the instructions for setting up an Android build environment.

To run all Kotlin tests:

make test-kotlin

or run tests in Android Studio.

To run all Swift tests:

make test-swift

or run tests in Xcode.

Sending Pull Requests

Patches should be submitted as pull requests (PRs).

Before submitting a PR:

  • Your code must run and pass all the automated tests before you submit your PR for review.
  • "Work in progress" pull requests are allowed to be submitted, but should be clearly labeled as such and should not be merged until all tests pass and the code has been reviewed.
  • For changes to Rust code
    • make test-rust produces no test failures
    • make lint-rust runs without emitting any warnings or errors.
    • make fmt-rust should be used to format your changed Rust code.
  • For changes to Kotlin code
    • make test-kotlin runs without emitting any warnings or errors.
    • make ktlint runs without emitting any warnings.
  • For changes to Swift code
    • make test-swift (or running tests in Xcode) runs without emitting any warnings or errors.
    • make swiftlint runs without emitting any warnings or errors.
  • Your patch should include new tests that cover your changes. It is your and your reviewer's responsibility to ensure your patch includes adequate tests.

When submitting a PR:

  • You agree to license your code under the project's open source license (MPL 2.0).
  • Base your branch off the current main.
  • Add both your code and new tests if relevant.
  • Please do not include merge commits in pull requests; include only commits with the new relevant code.
  • When submitting a PR for a bug, label the PR "Bug "
  • When it makes sense to skip CI for a commit (e.g. a documentation-only change), add a [doc only] annotation to the commit message

Code Review

This project is production Mozilla code and subject to our engineering practices and quality standards. Every patch must be peer reviewed by a member of the Glean core team.

Reviewers are defined in the CODEOWNERS file and are automatically added for every pull request. Every pull request needs to be approved by at least one of these people before landing.

The submitter decides, at their own discretion, whether the changes require a look from more than a single reviewer or any outside developer. Reviewers can also ask for additional approval from other reviewers.

Release

See the Release process on how to release a new version of the Glean SDK.

Code Coverage

In computer science, test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. (Wikipedia)
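To make the definition concrete, line coverage is the fraction of source lines executed by the test suite (a toy sketch of the measure, not how JaCoCo or coverage.py compute it internally):

```python
def line_coverage(executed_lines: set, all_lines: set) -> float:
    """Percentage of source lines hit during a test run (toy illustration)."""
    return 100.0 * len(executed_lines & all_lines) / len(all_lines)

print(line_coverage({1, 2, 3}, {1, 2, 3, 4}))  # 75.0
```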

This chapter describes how to generate a traditional code coverage report over the Kotlin, Swift and Python code in the Glean SDK repository. To learn how to generate a coverage report about what metrics your project is testing, see the user documentation on generating testing coverage reports.

Generating Kotlin reports locally

Locally you can generate a coverage report with the following command:

./gradlew -Pcoverage :glean:build

After that you'll find an HTML report at the following location:

glean-core/android/build/reports/jacoco/jacocoTestReport/jacocoTestReport/html/index.html

Generating Swift reports locally

Xcode automatically generates code coverage when running tests. You can find the report in the Report Navigator (View -> Navigators -> Show Report Navigator -> Coverage).

Generating Python reports locally

Python code coverage is determined using the coverage.py library.

Run

make coverage-python

to generate code coverage reports in the Glean virtual environment.

After running, the report will be in htmlcov.

Developing documentation

The documentation in this repository pertains to the Glean SDK; that is, the client-side code for Glean telemetry. Glean in general, and the Glean-specific parts of the data pipeline and analysis, are documented in the firefox-data-docs repository.

The main narrative documentation is written in Markdown and converted to static HTML using mdbook.

API docs are also generated from docstrings for Rust, Kotlin, Swift and Python.

Building documentation

Building the narrative (book) documentation

The mdbook crate is required in order to build the narrative documentation:

cargo install mdbook mdbook-mermaid mdbook-open-on-gh

Then both the narrative and Rust API docs can be built with:

make docs
# ...or...
bin/build-rust-docs.sh
# ...or on Windows...
bin/build-rust-docs.bat

The built narrative documentation is saved in build/docs/book, and the Rust API documentation is saved in build/docs/docs.

Building API documentation

Swift API documentation is generated using jazzy. It can be installed using:

  1. Install the latest Ruby: brew install ruby
  2. Make the installed Ruby available: export PATH=/usr/local/opt/ruby/bin:$PATH (and add that line to your .bashrc)
  3. Install the documentation tool: gem install jazzy

To build the Swift API documentation:

make docs-swift

The generated documentation is saved in build/docs/swift.

The Python API docs are generated using pdoc3. It is installed as part of creating the virtual environment for Python development.

To build the Python API documentation:

make docs-python

The generated documentation is saved in build/docs/python.

Internal links within the documentation can be checked using the link-checker tool. External links are currently not checked, since this takes considerable time and frequently fails in CI due to networking restrictions or issues.

Link checking requires building the narrative documentation as well as all of the API documentation for all languages. It is rare to build all of these locally (and in particular, the Swift API documentation can only be built on macOS), therefore it is reasonable to let CI catch broken link errors for you.

If you do want to run the link-checker locally, it can be installed using npm or your system's package manager. Then, run link-checker with:

make linkcheck

Spell checking

The narrative documentation (but not the API documentation) is spell checked using aspell.

On Unix-like platforms, it can be installed using your system's package manager:

sudo apt install aspell-en
# ...or...
sudo dnf install aspell-en
# ...or...
brew install aspell

Note that aspell 0.60.8 or later is required, as that is the first release with markdown support.

You can then spell check the narrative documentation using the following:

make spellcheck
# ...or...
bin/spellcheck.sh

This will bring up an interactive spell-checking environment in the console. Pressing a to add a word will add it to the project's local .dictionary file, and the changes should be committed to git.

Upgrading glean_parser

To upgrade the version of glean_parser used by the Glean SDK, run the bin/update-glean-parser-version.sh script, providing the version as a command line parameter:

bin/update-glean-parser-version.sh 1.28.3

This will update the version in all of the required places. Commit those changes to git and submit a pull request.

No further steps are required to use the new version of glean_parser for code generation: all of the build integrations automatically update glean_parser to the correct version.

For testing the Glean Python bindings, the virtual environment needs to be deleted to force an upgrade of glean_parser:

rm -rf glean-core/python/.venv*

Android bindings

The Glean SDK contains the Kotlin bindings for use by Android applications. It makes use of the underlying Rust component with a Kotlin-specific API on top. It also includes integrations into the Android platform.

Setup the Android Build Environment

Doing a local build of the Glean SDK:

This document describes how to make local builds of the Android bindings in this repository. Most consumers of these bindings do not need to follow this process, but will instead use pre-built bindings.

Prepare your build environment

Typically, this process only needs to be run once, although periodically you may need to repeat some steps (e.g., Rust updates should be done periodically).

Setting up Android dependencies

With Android Studio

The easiest way to install all the dependencies (and automatically handle updates), is by using Android Studio. Once this is installed, start Android Studio and open the Glean project. If Android Studio asks you to upgrade the version of Gradle, decline.

The following dependencies can be installed in Android Studio through Tools > SDK Manager > SDK Tools:

You should be set to build Glean from within Android Studio now.

Manually

Set JAVA_HOME to be the location of Android Studio's JDK which can be found in Android Studio's "Project Structure" menu (you may need to escape spaces in the path).

Note down the location of your SDK. If installed through Android Studio you will find the Android SDK Location in Tools > SDK Manager.

Set the ANDROID_HOME environment variable to that path. Alternatively add the following line to the local.properties file in the root of the Glean checkout (create the file if it does not exist):

sdk.dir=/path/to/sdk

For the Android NDK (check here for the version requirements):

  1. Download NDK from https://developer.android.com/ndk/downloads.
  2. Extract it and put it somewhere ($HOME/.android-ndk-r25 is a reasonable choice, but it doesn't matter).
  3. Add the following line to the local.properties file in the root of the Glean checkout (create the file if it does not exist):
    ndk.dir=/path/to/.android-ndk-r25
    

Setting up Rust

Rust can be installed using rustup, with the following commands:

curl https://sh.rustup.rs -sSf | sh
rustup update

Platform-specific toolchains need to be installed for Rust. This can be done using the rustup target command. In order to enable building for real devices and Android emulators, the following targets need to be installed:

rustup target add aarch64-linux-android
rustup target add armv7-linux-androideabi
rustup target add i686-linux-android
rustup target add x86_64-linux-android

Building

Before building:

  • Ensure your repository is up-to-date.
  • Ensure Rust is up-to-date by running rustup update.

With Android Studio

After importing the Glean project into Android Studio it should be all set up and you can build the project using Build > Make Project.

Manually

The builds are all performed by ./gradlew and the general syntax used is ./gradlew project:task

Build the whole project and run tests:

./gradlew build

You can see a list of projects by executing ./gradlew projects and a list of tasks by executing ./gradlew tasks.

FAQ

  • Q: Android Studio complains about Python not being found when building.
  • A: Make sure that the python binary is on your PATH. On Windows, in addition to that, it might be required to add its full path to the rust.pythonCommand entry in $project_root/local.properties.

Android SDK / NDK versions

The Glean SDK implementation is currently built against the following versions:

For the full setup see Setup the Android Build Environment.

The versions are defined in the following files. All locations need to be updated on upgrades:

  • Documentation
    • this file (docs/dev/core/internal/sdk-ndk-versions.md)
    • dev/android/setup-android-build-environment.md
  • CI configuration
    • .circleci/config.yml
      • sdkmanager 'build-tools;34.0.0'
      • image: circleci/android:2024.01.1-browsers
    • taskcluster/docker/linux/Dockerfile
      • ENV ANDROID_BUILD_TOOLS "34.0.0"
      • ENV ANDROID_SDK_VERSION "11076708"
      • ENV ANDROID_PLATFORM_VERSION "34"
      • ENV ANDROID_NDK_VERSION "26.2.11394342"
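Since the same version strings are duplicated across several files, a hypothetical consistency check could compare the values extracted from each location (the helper below is not part of the repository; it only illustrates the invariant the list above describes):

```python
def versions_consistent(found: dict) -> bool:
    """found maps file path -> version string extracted from that file.
    Hypothetical helper; not part of the Glean repository."""
    return len(set(found.values())) <= 1

# Values taken from the list above:
print(versions_consistent({
    ".circleci/config.yml": "34.0.0",
    "taskcluster/docker/linux/Dockerfile": "34.0.0",
}))  # True
```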

Logging

Logs from glean-core are only passed through to the Android logging framework when Glean.initialize is called for the first time. This means any logging that happens before that, e.g. from early metric collection, will not be collected.

If these logs are needed for debugging add the following initializer to Glean.kt:

init {
    gleanEnableLogging()
}

Working on unreleased Glean code in android-components

This is a companion to the equivalent instructions for the android-components repository.

Modern Gradle supports composite builds, which allow substituting on-disk projects for binary publications. Composite builds transparently accomplish what is usually a frustrating loop of:

  1. change library
  2. publish library snapshot to the local Maven repository
  3. consume library snapshot in application

Preparation

Clone the Glean SDK and android-components repositories:

git clone https://github.com/mozilla/glean
git clone https://github.com/mozilla-mobile/firefox-android

Cargo build targets

By default, when building Android Components using Gradle, the rust-android plugin will compile Glean for every possible platform, which might result in build failures. You can customize which targets are built in glean/local.properties (more information):

# For physical devices:
rust.targets=arm

# For unit tests:
# rust.targets=darwin # Or linux-*, windows-* (* = x86/x64)

# For emulator only:
rust.targets=x86

Substituting projects

android-components has custom build logic for dealing with composite builds, so you should be able to configure it by simply adding the path to the Glean repository in the correct local.properties file:

In firefox-android/android-components/local.properties:

autoPublish.glean.dir=../glean

This will auto-publish Glean SDK changes to a local repository and consume them in android-components.

Replacing the Glean Gradle plugin

Note: If you only need to replace the glean_parser used in the build, see Substituting glean_parser. Replacing the plugin is only necessary if you changed GleanGradlePlugin.groovy.

If you need to replace the Glean Gradle Plugin used by other components within Android Components, follow these steps:

  1. In your Glean repository increment the version number in .buildconfig.yml

    libraryVersion: 60.0.0
    
  2. Build and publish the plugin locally:

    ./gradlew publishToMavenLocal
    
  3. In the Android Components repository change the required Glean version buildSrc/src/main/java/Dependencies.kt:

    const val mozilla_glean = "60.0.0"
    
  4. In the Android Components repository add the following at the top of the settings.gradle file:

    pluginManagement {
        repositories {
            mavenLocal()
            gradlePluginPortal()
        }
    }
    

Building any component will now use your locally published Glean Gradle Plugin (and Glean SDK).

Using locally-published Glean in Fenix

Note: this substitution-based approach will not work for testing updates to glean_parser as shipped in the Glean Gradle Plugin. For replacing glean_parser in a local build, see Substituting glean_parser.

Preparation

Clone the Glean SDK and Fenix repositories:

git clone https://github.com/mozilla/glean
git clone https://github.com/mozilla-mobile/fenix

Substituting projects

android-components has custom build logic for dealing with composite builds, so you should be able to configure it by simply adding the path to the Glean repository in the correct local.properties file:

In fenix/local.properties:

autoPublish.glean.dir=../glean

This will auto-publish Glean SDK changes to a local repository and consume them in Fenix. You should now be able to build and run Fenix (assuming you could before all this).

Caveats

  1. This assumes you have followed the Android/Rust build setup
  2. Make sure you're fully up to date in all repositories, unless you know you need to not be.
  3. This omits any steps needed if, for example, Glean made a breaking change to an API used in android-components. Such breakage should be straightforward to fix; you can usually find a PR with the fixes somewhere in android-components' list of pending PRs (or, failing that, a description of what to do in the Glean changelog).
  4. Ask in the #glean channel on chat.mozilla.org.

Substituting glean_parser

By default the Glean Kotlin SDK requires an exact version of the glean_parser. It's automatically installed as part of the Glean Gradle Plugin.

For upgrading the required glean_parser see Upgrading glean_parser.

For local development the glean_parser can be replaced with a development version. To use glean_parser from a git repository, add this to the project's build.gradle:

ext.gleanParserOverride = "git+ssh://git@github.com/mozilla/glean_parser@main#glean-parser"

Adjust the repository URL as needed. main can be any available branch. Ensure the suffix #glean-parser exists, as it tells the Python package tooling the package name.

iOS bindings

The Glean SDK contains Swift bindings for use by iOS applications. These bindings make use of the underlying Rust component with a Swift-specific API on top and include integrations with the iOS platform.

Setup the iOS Build Environment

Prepare your build environment

  1. Install Xcode 15.0 or higher.
  2. Ensure you have Python 3 installed: brew install python
  3. Install linting and formatting tools: brew install swiftlint
  4. (Optional, only required for building on the CLI) Install xcpretty: gem install xcpretty

Setting up Rust

Rust can be installed using rustup, with the following commands:

  • curl https://sh.rustup.rs -sSf | sh
  • rustup update

Platform-specific toolchains need to be installed for Rust. This can be done using the rustup target command. In order to enable building for real devices and iOS simulators, the following targets need to be installed:

rustup target add aarch64-apple-ios x86_64-apple-ios aarch64-apple-ios-sim

Building

This should be relatively straightforward and painless:

  1. Ensure your repository is up-to-date.
  2. Ensure Rust is up-to-date by running rustup update.
  3. Run a build using the command make build-swift
    • To run tests use make test-swift

The above can be skipped if using Xcode. The project directory can be imported and all the build details can be left to the IDE.

Debug an iOS application against different builds of Glean

At times it may be necessary to debug against a local build of Glean or another git fork or branch in order to test new features or specific versions of Glean.

Glean is shipped as a Swift Package (as glean-swift). To replace it in a consuming application you need to locally build an XCFramework and replace the Swift package with it. The following steps guide you through this process.

Build the XCFramework

In your local Glean checkout run:

make build-xcframework

This will create the XCFramework in build/archives/Glean.xcframework.

Add the Glean XCFramework to the consuming application

  1. Open the project in Xcode.
  2. Remove the glean-swift package dependency in the project's settings.
  3. Clear out any build artifacts. Select the Product menu, then Clean Build Folder.
  4. Right-click in the file tree and select "Add files".
  5. Select the directory generated in your local Glean checkout, build/archives/Glean.xcframework.
  6. In the project settings, select your target.
  7. In the General section of that target, look for the "Frameworks, Libraries, and Embedded Content" section.
  8. Click the + (plus) symbol and add the Glean.xcframework.
  9. Run a full build. Select the Product menu, then Build.

The application is now built with the locally built Glean XCFramework.

After making changes to Glean you will have to rebuild the XCFramework, then clean out build artifacts in the consuming application and rebuild the project.

Upgrade the supported Xcode Version

When it is necessary to update the supported Xcode version in Glean, the bin/update-xcode-version.sh script can make this process easier by updating all of the necessary files and locations.

Usage Example

Example updating to support Xcode version 15.1:

bin/update-xcode-version.sh 15.1

Python bindings

The Glean SDK contains Python bindings for use in Python applications and test frameworks. It makes use of the underlying Rust component with a Python-specific API on top.

Setup the Python Build Environment

This document describes how to set up an environment for the development of the Glean Python bindings.

Instructions for installing a copy of the Glean Python bindings into your own environment for use in your project are described in adding Glean to your project.

Prerequisites

Glean requires Python 3.8 or later.

Make sure it is installed on your system and accessible on the PATH as python3.
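A quick sanity check that the interpreter on your PATH is new enough; this is a convenience sketch, not part of the build tooling:

```python
import sys

# Glean's Python bindings require Python 3.8 or later.
required = (3, 8)
current = sys.version_info[:2]
assert current >= required, (
    f"Python {required[0]}.{required[1]}+ required, "
    f"found {current[0]}.{current[1]}"
)
print(f"Python {current[0]}.{current[1]} is new enough")
```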

Setting up Rust

If you've already set up Rust for building Glean for Android or iOS, you already have everything you need and can skip this section.

Rust can be installed using rustup, with the following commands:

  • curl https://sh.rustup.rs -sSf | sh
  • rustup update

Create a virtual environment

It is recommended to do all Python development inside a virtual environment to make sure there are no interactions with other Python libraries installed on your system.

You may want to manage your own virtual environment if, for example, you are developing a library that is using Glean and you want to install Glean into it. If you are just developing Glean itself, however, Glean's Makefile will handle the creation and use of a virtual environment automatically (though the Makefile unfortunately does not work on Microsoft Windows).

The instructions below all have both the manual instructions and the Makefile shortcuts.

Manual method

The following instructions use the basic virtual environment functionality that comes in the Python standard library. Other higher level tools exist to manage virtual environments, such as pipenv and conda. These will work just as well, but are not documented here. Using an alternative tool would replace the instructions in this section only, but all other instructions below would remain the same.

To create a virtual environment, enter the following command, replacing <venv> with the path to the virtual environment you want to create. You will need to do this only once.

$ python3 -m venv <venv>

Then activate the environment. You will need to do this for each new shell session when you want to develop Glean. The command to use depends on your shell environment.

Platform   Shell            Command to activate virtual environment
POSIX      bash/zsh         source <venv>/bin/activate
POSIX      fish             . <venv>/bin/activate.fish
POSIX      csh/tcsh         source <venv>/bin/activate.csh
POSIX      PowerShell Core  <venv>/bin/Activate.ps1
Windows    cmd.exe          <venv>\Scripts\activate.bat
Windows    PowerShell       <venv>\Scripts\Activate.ps1

Lastly, install Glean's Python development dependencies into the virtual environment.

pip install -r glean-core/python/requirements_dev.txt

Makefile method

$ make setup-python

The default location of the virtual environment used by the Makefile is .venvX.Y, where X.Y is the version of Python in use. This makes it possible to build and test for multiple versions of Python in the same checkout.
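The version-specific directory name can be derived from the running interpreter; a sketch of the naming scheme described above:

```python
import sys

# The Makefile appends the interpreter's major.minor version,
# producing e.g. ".venv3.11" under Python 3.11.
venv_dir = f".venv{sys.version_info.major}.{sys.version_info.minor}"
print(venv_dir)
```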

Note: If you wish to change the location of the virtual environment that the Makefile uses, pass the GLEAN_PYENV environment variable: make setup-python GLEAN_PYENV=mypyenv.

Build the Python bindings

Manual method

Building the Python bindings also builds the Rust shared object for the Glean SDK core.

$ maturin develop

Makefile method

This will implicitly set up the Python environment, rebuild the Rust core and then build the Python bindings.

$ make build-python

Running the Python unit tests

Manual method

Make sure the Python bindings are built, then:

$ py.test glean-core/python/tests

Makefile method

The following will run the Python unit tests using py.test:

$ make test-python

You can send extra parameters to the py.test command by setting the PYTEST_ARGS variable:

$ make test-python PYTEST_ARGS="-s --pdb"

Viewing logging output

Log messages (whether originating in Python or Rust) are emitted using the Python standard library's logging module. This module provides a lot of possibilities for customization, but the easiest way to control the log level globally is with logging.basicConfig:

import logging
logging.basicConfig(level=logging.DEBUG)
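If a global DEBUG level is too noisy, the standard logging module can also scope the level to a single logger. This sketch assumes the bindings emit under a logger named "glean"; verify the logger name in your own environment before relying on it:

```python
import logging

# Keep unrelated libraries at WARNING...
logging.basicConfig(level=logging.WARNING)

# ...and enable verbose output only for the (assumed) "glean" logger.
logging.getLogger("glean").setLevel(logging.DEBUG)

print(logging.getLogger("glean").isEnabledFor(logging.DEBUG))  # → True
```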

Linting, formatting and type checking

The Glean Python bindings use the following tools:

  • Linting/Formatting: ruff
  • Type-checking: mypy

Manual method

$ cd glean-core/python
$ ruff check glean tests
$ ruff format glean tests
$ mypy glean

Makefile method

To just check the lints:

$ make lint-python

To reformat the Python files in-place:

$ make fmt-python

Building the Python API docs

The Python API docs are built using pdoc3.

Manual method

$ python -m pdoc --html glean --force -o build/docs/python

Makefile method

$ make python-docs

Building wheels for Linux

Building on Linux using the above instructions will create Linux binaries that dynamically link against the version of libc installed on your machine. This generally will not be portable to other Linux distributions, and PyPI will not even allow it to be uploaded. In order to create wheels that can be installed on the broadest range of Linux distributions, the Python Packaging Authority's manylinux project maintains a Docker image for building compatible Linux wheels.

The CircleCI configuration handles making these wheels from tagged releases. If you need to reproduce this locally, see the CircleCI job pypi-linux-release for an example of how this Docker image is used in practice.

Building wheels for Windows

The official wheels for Windows are produced on a Linux virtual machine using the Mingw toolchain.

The CircleCI configuration handles making these wheels from tagged releases. If you need to reproduce this locally, see the CircleCI job pypi-windows-release for an example of how this is done in practice.

Rust component

The majority of the Glean SDK is implemented as a Rust component, to be usable across all platforms.

This includes:

  • Implementations of all metric types
  • A storage layer
  • A ping serializer

Rust documentation guidelines

The documentation for the Glean SDK Rust crates is automatically generated by the rustdoc tool. That means the documentation for our Rust code is written as doc comments throughout the code.

Here we outline some guidelines to be followed when writing these doc comments.

Module level documentation

Every module's index file must have a module level doc comment.

The module level documentation must start with a one-liner summary of the purpose of the module, the rest of the text is free form and optional.

Function and constants level documentation

Every public function / constant must have a dedicated doc comment.

Non public functions / constants are not required to, but may have doc comments. These comments should follow the same structure as public functions.

Note Private functions are not added to the generated documentation by default, but can be included by using the --document-private-items option.

Structure

Below is an example of the structure of a doc comment in one of the Glean SDK's public functions.

/// Collects and submits a ping for eventual uploading.
///
/// The ping content is assembled as soon as possible, but upload is not
/// guaranteed to happen immediately, as that depends on the upload policies.
///
/// If the ping currently contains no content, it will not be sent,
/// unless it is configured to be sent if empty.
///
/// # Arguments
///
/// * `ping` - The ping to submit
/// * `reason` - A reason code to include in the ping
///
/// # Returns
///
/// Whether the ping was successfully assembled and queued.
///
/// # Errors
///
/// If collecting or writing the ping to disk failed.
pub fn submit_ping(&self, ping: &PingType, reason: Option<&str>) -> Result<bool> {
    ...
}

Every doc comment must start with a one-liner summary of what that function is about

The text of this section should always be in the third person and describe an action.

If any additional information is necessary, that can be added following a new line after the summary.

Depending on the signature and behavior of the function, additional sections can be added to its doc comment

These sections are:

  1. # Arguments - A list of the function's arguments and a short description of each. Note that there is no need to point out the type of the given argument in the doc comment since it is already outlined in the code.
  2. # Returns - A short summary of what this function returns. Note that there is no need to start this summary with "Returns..." since that is already the title of the section.
  3. # Errors - The possible paths that will or won't lead this function to return an error. It is at the discretion of the programmer to decide which cases are worth pointing out here.
  4. # Panics - The possible paths that will or won't lead this function to panic!. It is at the discretion of the programmer to decide which cases are worth pointing out here.
  5. # Safety - In case there are any safety concerns or guarantees, this is the section to point them out. It is at the discretion of the programmer to decide which cases are worth pointing out here.
  6. # Examples - If necessary, this section holds examples of usage of the function.

These sections should only be added to a doc comment when necessary, e.g. we only add an # Arguments section when a function has arguments.

Note The titles of these sections must be level-1 headers. Additional sections not listed here can be level-2 or below.

If the doc comment refers to some external structure, either from the Glean SDK's code base or external, make sure to link to it.

Example:

/// Gets a [`PingType`] by name.
///
/// # Returns
///
/// The [`PingType`] of a ping if the given name was registered before, `None` otherwise.
///
/// [`PingType`]: metrics/struct.PingType.html
pub fn get_ping_by_name(&self, ping_name: &str) -> Option<&PingType> {
    ...
}

Note To avoid polluting the text with multiple links, prefer using reference links which allow for adding the actual links at the end of the text block.

Warnings

Important warnings about a function should be added, when necessary, before the one-liner summary in bold text.

Example:

/// **Test-only API (exported for FFI purposes).**
///
/// Deletes all stored metrics.
///
/// Note that this also includes the ping sequence numbers, so it has
/// the effect of resetting those to their initial values.
pub fn test_clear_all_stores(&self) {
    ...
}

Semantic line breaks

Prefer using semantic line breaks when breaking lines in any doc or non-doc comment.
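For example, a sentence from the submit_ping doc comment earlier in this chapter, re-broken at clause boundaries (one clause per line) so that later edits produce minimal diffs:

```
/// The ping content is assembled as soon as possible,
/// but upload is not guaranteed to happen immediately,
/// as that depends on the upload policies.
```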

References

These guidelines are largely based on the Rust API documentation guidelines.

Dependency Management Guidelines

This repository uses third-party code from a variety of sources, so we need to be mindful of how these dependencies will affect our consumers.

We're still evolving our policies in this area, but these are the guidelines we've developed so far.

Rust Code

Unlike Firefox, we do not vendor third-party source code directly into the repository. Instead we rely on Cargo.lock and its hash validation to ensure that each build uses an identical copy of all third-party crates. These are the measures we use for ongoing maintenance of our existing dependencies:

  • Check Cargo.lock into the repository.
  • Generate built artifacts using the --locked flag to cargo build, as an additional assurance that the existing Cargo.lock will be respected.
  • Use cargo-deny for a basic license-compatibility check as part of CI, to guard against human error.

Adding a new dependency, whether we like it or not, is a big deal - that dependency and everything it brings with it will become part of Firefox-branded products that we ship to end users. We try to balance this responsibility against the many benefits of using existing code, as follows:

  • In general, be conservative in adding new third-party dependencies.
    • For trivial functionality, consider just writing it yourself. Remember the cautionary tale of left-pad.
    • Check if we already have a crate in our dependency tree that can provide the needed functionality.
  • Prefer crates that already have a high level of due-diligence applied.
  • Check that it is clearly licensed and is MPL-2.0 compatible.
  • Take the time to investigate the crate's source and ensure it is suitably high-quality.
    • Be especially wary of uses of unsafe, or of code that is unusually resource-intensive to build.
    • Development dependencies do not require as much scrutiny as dependencies that will ship in consuming applications, but should still be given some thought.
      • There is still the potential for supply-chain compromise with development dependencies!
  • Explicitly describe your consideration of these points in the PR that introduces the new dependency.

Updating to new versions of existing dependencies is a normal part of software development and is not accompanied by any particular ceremony.

Dependency updates

We use Dependabot to automatically open pull requests when dependencies release a new version. All CI tasks run on those pull requests to ensure updates don't break anything.

As glean-core is now also vendored into mozilla-central, including all of its Rust dependencies, we need to be a bit careful about these updates. Landing upgrades of the Glean SDK itself in mozilla-central is a separate process. See Updating the Glean SDK in the Firefox Source Docs. Following are some guidelines to ensure compatibility with mozilla-central:

  • Patch releases of dependencies should be fine in most cases.
  • Minor releases should be compatible. Best to check the changelog of the updated dependency to make sure. These updates should only land in Cargo.lock; no updates in Cargo.toml are necessary.
  • Major releases will always need a check against mozilla-central. See Updating the Glean SDK.

In case of uncertainty defer a decision to :janerik or :chutten.

Manual dependency updates

You can manually check for outdated dependencies using cargo-outdated. Individual crates can be updated to the latest version compatible with the requirement in Cargo.toml using:

cargo update -p $cratename

To update all transitive dependencies to the latest semver-compatible version use:

cargo update

Dependency vetting

The Glean SDK uses cargo-vet to ensure that third-party Rust dependencies have been audited by a trusted entity. For a full overview over cargo-vet's capabilities and usage see the cargo-vet documentation.

New or updated dependencies need to be audited to allow their usage. Dependency audits are shared with downstream Mozilla projects.

3-step guide

  • cargo vet
  • cargo vet diff $crate $old-version $new-version
  • cargo vet certify

Longer guide

Prerequisites

Install cargo-vet:

cargo install cargo-vet

Auditing steps

After adding or updating a dependency start the audit process:

cargo vet

This will scan the dependencies for any missing audits and show instructions on how to proceed. For dependency updates you should start by looking at the diff. For new dependencies you will review the full code.

This will be something like the following command for any dependency:

cargo vet diff $crate $old-version $new-version

Please read the printed criteria and consider them when performing the audit. If unsure please ask other Glean engineers for help.

It will then provide you with a Sourcegraph link to inspect the code. Alternatively you can run with --mode=local to get a diff view in your terminal.

Once you have reviewed the code, run:

cargo vet certify

and follow the instructions.

Finally you will notice the audit being added to supply-chain/audits.toml. Add this file to your commit and create a pull request.

Adding a new metric type

Data in the Glean SDK is stored in so-called metrics. You can find the full list of implemented metric types in the user overview.

Adding a new metric type involves defining the metric type's API, its persisted and in-memory storage as well as its serialization into the ping payload.

glean_parser

In order for your metric to be usable, you must add it to glean_parser so that instances of your new metric can be instantiated and available to our users.

The documentation for how to do this should live in the glean_parser repository, but in short:

  • Your metric type must be added to the metrics schema.
  • Your metric type must be added as a type in the object model
  • Any new parameters outside of the common metric data must also be added to the schema, and be stored in the object model.
  • You must add tests.

The metric type's API

A new metric type is defined in glean-core/src/glean.udl. Each metric type is its own interface with a constructor and all recording and testing functions defined. It supports built-in types as well as new custom types. See the UniFFI documentation for more.

interface CounterMetric {
    constructor(CommonMetricData meta);

    void add(optional i32 amount = 1);
};

The implementation of this metric type is defined in its own file under glean-core/src/metrics/, e.g. glean-core/src/metrics/counter.rs for a Counter.

Start by defining a structure to hold the metric's metadata:

#[derive(Clone, Debug)]
pub struct CounterMetric {
    meta: Arc<CommonMetricData>,
}

Implement the MetricType trait to create a metric from the metadata as well as expose the metadata. This also gives you a should_record method on the metric type.

impl MetricType for CounterMetric {
    fn meta(&self) -> &CommonMetricData {
        &self.meta
    }
}

Its implementation should have a way to create a new metric from the common metric data. It should be similar for all metric types. Additional metric type parameters are passed as additional arguments.

impl CounterMetric {
    pub fn new(meta: CommonMetricData) -> Self {
        Self {
            meta: Arc::new(meta),
        }
    }
}

Implement each method for the type. The public method should do minimal work synchronously and defer logic & storage functionality to run on the dispatcher. The synchronous implementation should then take glean: &Glean to be able to access the storage.

impl CounterMetric { // same block as above
    pub fn add(&self, amount: i32) {
        let metric = self.clone();
        crate::launch_with_glean(move |glean| metric.add_sync(glean, amount))
    }

    fn add_sync(&self, glean: &Glean, amount: i32) {
        // Always include this check!
        if !self.should_record() {
            return;
        }

        // Do error handling here

        glean
            .storage()
            .record_with(&self.meta, |old_value| match old_value {
                Some(Metric::Counter(old_value)) => Metric::Counter(old_value + amount),
                _ => Metric::Counter(amount),
            })
    }
}

Use glean.storage().record() to record a fixed value or glean.storage().record_with() to construct a new value from the currently stored one.

The storage operation makes use of the metric's variant of the Metric enumeration.

The Metric enumeration

Persistence and in-memory serialization as well as ping payload serialization are handled through the Metric enumeration. This is defined in glean-core/src/metrics/mod.rs. Variants of this enumeration are used in the storage implementation of the metric type.

To add a new metric type, include the metric module and declare its use, then add a new variant to the Metric enum:


mod counter;

// ...

pub use self::counter::CounterMetric;

#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum Metric {
    // ...
    Counter(i32),
}

Then modify the below implementation and define the right ping section name for the new type. This will be used in the ping payload:

impl Metric {
    pub fn ping_section(&self) -> &'static str {
        match self {
            // ...
            Metric::Counter(_) => "counter",
        }
    }
}

Finally, define the ping payload serialization (as JSON). In the simple cases where the in-memory representation maps to its JSON representation it is enough to call the json! macro.

impl Metric { // same block as above
    pub fn as_json(&self) -> JsonValue {
        match self {
            // ...
            Metric::Counter(c) => json!(c),
        }
    }
}

For more complex serialization consider implementing serialization logic as a function returning a serde_json::Value or another object that can be serialized.

For example, the DateTime serializer has the following entry, where get_iso_time_string is a function to convert from the DateTime metric representation to a string:

Metric::Datetime(d, time_unit) => json!(get_iso_time_string(*d, *time_unit)),

Documentation

Documentation for the new metric type must be added to the user book.

  • Add a new file for your new metric in docs/user/reference/metrics/. Its contents should follow the form and content of the other examples in that folder.
  • Reference that file in docs/user/SUMMARY.md so it will be included in the build.
  • Follow the Documentation Contribution Guide.

You must also update the payload documentation with how the metric looks in the payload.

Tests

Tests are written in the Language Bindings and tend to just cover basic functionality:

  • The metric returns the correct value when it has no value
  • The metric correctly reports errors
  • The metric returns the correct value when it has a value

At this point the metric type will have an auto-generated API in all target languages. This needs to be re-exported in the target language. The following chapters have details on how to do that for the different languages.

Sometimes a metric type needs some additional modifications to expose a language-specific type, apply additional type conversions or add additional functionality. For how to implement additional modifications to the API see the following chapters.

Adding a new metric type - Kotlin

Re-export generated API

By default a metric type gets an auto-generated API from the definition in glean.udl. This API is exposed under the internal namespace. If this API is sufficient, it only needs to be re-exported.

Create a new Kotlin file, e.g. glean-core/android/src/main/java/mozilla/telemetry/glean/private/CounterMetricType.kt:

package mozilla.telemetry.glean.private

typealias CounterMetricType = mozilla.telemetry.glean.internal.CounterMetric

Extend and modify API

If the generated API is not sufficient, convenient or needs additional language-specific constructs or conversions the generated API can be wrapped.

Create a new Kotlin file, e.g. glean-core/android/src/main/java/mozilla/telemetry/glean/private/CounterMetricType.kt. Then create a new class that delegates functionality to the metric type class from the internal namespace.

package mozilla.telemetry.glean.private

// CommonMetricData is assumed to live in the same generated internal namespace
import mozilla.telemetry.glean.internal.CommonMetricData
import mozilla.telemetry.glean.internal.CounterMetric

class CounterMetricType(meta: CommonMetricData) {
    val inner = CounterMetric(meta)

    // Wrap existing functionality
    fun add(amount: Int = 1) {
        inner.add(amount)
    }

    // Add a new method
    fun addTwo() {
        inner.add(2)
    }
}

Remember to wrap all defined methods of the metric type.

Adding a new metric type - Swift

Re-export generated API

By default a metric type gets an auto-generated API from the definition in glean.udl. If this API is sufficient, it only needs to be re-exported.

Create a new Swift file, e.g. glean-core/ios/Glean/Metrics/CounterMetric.swift:

public typealias CounterMetricType = CounterMetric

Extend and modify API

If the generated API is not sufficient, convenient or needs additional language-specific constructs or conversions the generated API can be wrapped.

Create a new Swift file, e.g. glean-core/ios/Glean/Metrics/CounterMetric.swift. Then create a new class that delegates functionality to the generated metric type class.

public class CounterMetricType {
    let inner: CounterMetric

    // An initializer is required to construct the stored property;
    // `CounterMetric(meta:)` is the assumed generated constructor.
    public init(meta: CommonMetricData) {
        self.inner = CounterMetric(meta: meta)
    }

    // Wrap existing functionality
    public func add(_ amount: Int = 1) {
        inner.add(amount)
    }

    // Add a new method
    public func addTwo() {
        inner.add(2)
    }
}

Remember to wrap all defined methods of the metric type.

Adding a new metric type - Python

Re-export generated API

By default a metric type gets an auto-generated API from the definition in glean.udl. If this API is sufficient, it only needs to be re-exported.

In glean-core/python/glean/metrics/__init__.py add a new re-export:

from .._uniffi import CounterMetric as CounterMetricType

Extend and modify API

If the generated API is not sufficient, convenient or needs additional language-specific constructs or conversions the generated API can be wrapped.

Create a new Python file, e.g. glean-core/python/glean/metrics/counter.py. Then create a new class that delegates functionality to the generated metric type class.

from .._uniffi import CommonMetricData
from .._uniffi import CounterMetric


class CounterMetricType:
    def __init__(self, common_metric_data: CommonMetricData):
        self._inner = CounterMetric(common_metric_data)

    # Wrap existing functionality
    def add(self, amount=1):
        self._inner.add(amount)

    # Add a new method
    def add_two(self):
        self._inner.add(2)
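The delegation pattern itself can be exercised without the generated bindings; here is a minimal sketch using a hypothetical stand-in for the inner class:

```python
class FakeCounter:
    """Hypothetical stand-in for the UniFFI-generated CounterMetric."""

    def __init__(self):
        self.count = 0

    def add(self, amount):
        self.count += amount


class CounterMetricType:
    """Wrapper that delegates to the inner metric, mirroring the class above."""

    def __init__(self, inner):
        self._inner = inner

    # Wrap existing functionality
    def add(self, amount=1):
        self._inner.add(amount)

    # Add a new method
    def add_two(self):
        self._inner.add(2)


counter = CounterMetricType(FakeCounter())
counter.add()      # default amount of 1
counter.add_two()  # convenience method added by the wrapper
print(counter._inner.count)  # → 3
```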

The new metric type also needs to be imported from glean-core/python/glean/metrics/__init__.py:

from .counter import CounterMetricType

__all__ = [
    "CounterMetricType",
    # ...
]

It also must be added to the _TYPE_MAPPING in glean-core/python/glean/_loader.py:

_TYPE_MAPPING = {
    "counter": metrics.CounterMetricType,
    # ...
}
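The mapping drives a simple dispatch: the loader reads a metric's type string and instantiates the corresponding class. A simplified sketch of that idea (class and parameter names are hypothetical, not the loader's actual API):

```python
class CounterMetricType:
    def __init__(self, name):
        self.name = name


class BooleanMetricType:
    def __init__(self, name):
        self.name = name


# Maps the "type" field of a metric definition to a wrapper class.
_TYPE_MAPPING = {
    "counter": CounterMetricType,
    "boolean": BooleanMetricType,
}


def make_metric(metric_type, name):
    # Look up the class for this type string and instantiate it.
    return _TYPE_MAPPING[metric_type](name)


metric = make_metric("counter", "clicks")
print(type(metric).__name__)  # → CounterMetricType
```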

Adding a new metric type - Rust

Trait

To ensure the API is stable across Rust consumers and re-exporters (like FOG), you must define a trait for the new metric in glean-core/src/traits. First, add your metric type to mod.rs:

mod counter;

pub use self::counter::Counter;

Then add the trait in e.g. counter.rs.

The trait includes test_get_num_recorded_errors and any metric operation included in the metric type's API (except new). The idea here is to only include operations that make sense for Rust consumers. If there are internal-only or language-specific APIs on the underlying metric, feel free to not include them on the trait.

Spend some time on the comments. These will be the dev-facing API docs for Rust consumers.

Rust Language Binding (RLB) Type

The Rust Language Binding can simply re-export the core metric types. You can find the RLB metric re-exports in glean-core/rlb/src/private.

Add your metric type to mod.rs:

pub use glean_core::Counter;

Documentation

Don't forget to document the new Rust API in the Book's page on the Metric Type (e.g. Counter). Add a tab for Rust, and mimic any other language's example.

Adding a new metric type - data platform

The data platform technically exists outside of the Glean SDK. However, the Glean-specific steps for adding a new Glean metric type to the data platform are documented here for convenience.

Adding a new metric type to mozilla-pipeline-schemas

The mozilla-pipeline-schemas repository contains JSON schemas that are used to validate the Glean ping payload when it reaches the ingestion server. These schemas are written using a simple custom templating system to give more structure and organization to the schemas.

Each individual metric type has its own file in templates/include/glean. For example, here is the schema for the Counter metric type in counter.1.schema.json:

{
  "type": "integer"
}

Add a new file for your new metric type in that directory, containing the JSON schema snippet to validate it. A good resource for learning about JSON schema and the validation keywords that are available is Understanding JSON Schema.

A reference to this template needs to be added to the main Glean schema in templates/include/glean/glean.1.schema.json. For example, the snippet to include the template for the counter metric type is:

        "counter": {
          @GLEAN_BASE_OBJECT_1_JSON@,
          "additionalProperties": @GLEAN_COUNTER_1_JSON@
        },

If adding a labeled metric type as well, the same template from the "core" metric type can be reused:

        "labeled_counter": {
          @GLEAN_BASE_OBJECT_1_JSON@,
          "additionalProperties": {
            @GLEAN_LABELED_GROUP_1_JSON@,
            "additionalProperties": @GLEAN_COUNTER_1_JSON@
          }
        },

After updating the templates, you need to regenerate the fully-qualified schema using these instructions.

The fully-qualified Glean schema is also used by the Glean SDK's unit test suite to make sure that ping payloads validate against the schema. Therefore, whenever the Glean JSON schema is updated, it should also be copied and checked in to the Glean SDK repository. Specifically, copy the generated schema in mozilla-pipeline-schemas/schemas/glean/glean.1.schema.json to the root of the Glean SDK repository.

Adding a new metric type to mozilla-schema-generator

Each new metric type also needs an entry in the Glean configuration in mozilla-schema-generator. The config file for Glean is in glean.yaml. Each entry in that file just needs some boilerplate for each metric type. For example, the snippet for the Counter metric type is:

  counter:
    match:
      send_in_pings:
        not:
          - glean_ping_info
      type: counter

Adding a new metric type to lookml-generator

Each new metric type needs to be enabled in the lookml-generator. The list of allowed Glean metric types is in generator/views/glean_ping_view.py. Add the new metric type to ALLOWED_TYPES:

ALLOWED_TYPES = DISTRIBUTION_TYPES | {
    # ...
    "counter",
}

Internal documentation

This chapter describes aspects of the Glean SDKs that are internal implementation details.

Note

As implementation details, this information is subject to change at any time. External users should not rely on this information. It is provided as a reference for contributing to Glean only.

This includes:

Reserved ping names

The Glean SDK reserves all ping names in send_in_pings starting with glean_.

This currently includes, but is not limited to:

  • glean_client_info: metrics sent with this ping are added to every ping in its client_info section;
  • glean_internal_info: metrics sent with this ping are added to every ping in its ping_info section.

Additionally, only Glean may specify all-pings. This special value has no effect in the client, but indicates to the backend infrastructure that a metric may appear in any ping.

Clearing metrics when disabling/enabling Glean

When disabling upload (Glean.setUploadEnabled(false)), metrics are also cleared to prevent them from being stored on the local device and to lessen the likelihood of accidentally sending them. There are a couple of exceptions to this:

  • first_run_date is retained so it isn't reset if metrics are later re-enabled.

  • client_id is set to the special value "c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0". This should make it possible to detect the error case when metrics are sent from a client that has been disabled.

When re-enabling metrics:

  • first_run_date is left as-is. (It should remain a correct time of first run of the application, unaffected by disabling/enabling the Glean SDK).

  • The client_id is set to a newly-generated random UUID. It has no connection to the client_id used prior to disabling the Glean SDK.

  • Application lifetime metrics owned by Glean are regenerated from scratch so that they will appear in subsequent pings. This is the same process that happens during every startup of the application when the Glean SDK is enabled. The application is also responsible for setting any application lifetime metrics that it manages at this time.

  • Ping lifetime metrics do not need special handling. They will begin recording again after metric uploading is re-enabled.
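The client_id behavior described above can be sketched in Python. The helper name is hypothetical (this is not a Glean API), but the sentinel UUID is the documented value:

```python
import uuid

# The fixed sentinel client_id used while upload is disabled
# (value taken from the documentation above).
KNOWN_CLIENT_ID = "c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0"

def client_id_on_toggle(upload_enabled):
    """Illustrative helper: the sentinel client_id while upload is
    disabled, a fresh random UUID when upload is re-enabled."""
    if upload_enabled:
        return str(uuid.uuid4())
    return KNOWN_CLIENT_ID
```

Note that the new UUID generated on re-enabling has no connection to any previously used client_id.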

Database format

The Glean SDK stores all recorded data in a database for persistence. This data is read, written and transformed by the core implementation.

Some internal metrics are stored similarly to user-defined metrics, but encode additional implementation-defined information in the key or value of the entry.

We guarantee backwards-compatibility of already stored data. If necessary an old database will be converted to the new format.

Database stores

The Glean SDK uses one store per metric lifetime: user, application and ping. This allows metrics to be read and cleared separately, based on their respective lifetimes.

Key

The key of a database entry uniquely identifies the stored metric data. It encodes additional information about the stored data in the key name using special characters. The full list of special characters in use is:

. # / +

These characters cannot be used in a user-defined ping name, metric category, metric name or label.

A key will usually look like:

ping#category.name[/label]

where:

| Field | Description | Allowed characters | Maximum length* | Note |
|-------|-------------|--------------------|-----------------|------|
| ping | The ping name this data is stored for | [a-z0-9-] | 30 | |
| category | The metric's category | [a-z0-9._] | 40 | Empty string possible. |
| name | The metric's name | [a-z0-9._#] | 30 | |
| label | The label (optional) | [a-z0-9._-] | 71 | |

* The maximum length is not enforced for internal metrics, but is enforced for user metrics as per schema definition.
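As an illustration, building a key in the format above can be sketched in Python. The helper name and sample values are hypothetical, and it is assumed that an empty category omits the separating dot:

```python
def storage_key(ping, category, name, label=None):
    """Build a database key in the ping#category.name[/label] format
    described above (illustrative sketch, not Glean's internal API)."""
    # An empty category is assumed to omit the separating dot.
    metric = f"{category}.{name}" if category else name
    key = f"{ping}#{metric}"
    if label is not None:
        key += f"/{label}"
    return key

print(storage_key("metrics", "browser.engagement", "tab_count"))
# → metrics#browser.engagement.tab_count
```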

Additional characters in Glean.js

Glean.js uses a different format to store data. Additional information is encoded into database keys, using the + (plus) character to separate parts of data. Watch Bug 1720476 for details.

Value

The value is stored in an implementation-defined format to encode the value's data. It can be read, modified and serialized into the Payload format.

Payload format

The main sections of a Glean ping are described in Ping Sections. This Payload format chapter describes details of the ping payload that are relevant for decoding Glean pings in the pipeline.

NOTE: The payload format is an implementation detail of the Glean SDK and subject to change at any time. External users should not rely on this information. It is provided as a reference for contributing to Glean only.

JSON Schema

Glean's ping payloads have a formal JSON schema defined in the mozilla-pipeline-schemas project. It is written as a set of templates that are expanded by the mozilla-pipeline-schemas build infrastructure into a fully expanded schema.

Metric types

Boolean

A Boolean is represented by its boolean value.

Example

true

Counter

A Counter is represented by its integer value.

Example

17

Quantity

A Quantity is represented by its integer value.

Example

42

String

A String is represented by its string value.

Example

"sample string"

String list

A String List is represented as an array of strings.

Example

["sample string", "another one"]

An empty list is accepted and sent as:

[]

Timespan

A Timespan is represented as an object containing its duration as an integer and the time unit.

| Field name | Type | Description |
|------------|------|-------------|
| value | Integer | The value in the marked time unit. |
| time_unit | String | The time unit. |

Example

{
    "time_unit": "milliseconds",
    "value": 10
}

Timing Distribution

A Timing distribution is represented as an object with the following fields.

| Field name | Type | Description |
|------------|------|-------------|
| sum | Integer | The sum of all recorded values. |
| values | Map&lt;String, Integer&gt; | The values in each bucket. The key is the minimum value for the range of that bucket. |

A contiguous range of buckets is always sent, so that the server can aggregate and visualize distributions, without knowing anything about the specific bucketing function used. This range starts with the first bucket with a non-zero accumulation, and ends at one bucket beyond the last bucket with a non-zero accumulation (so that the upper bound on the last bucket is retained).

For example, the following shows the recorded values vs. what is sent in the payload.

recorded:  1024: 2, 1116: 1,                   1448: 1,
sent:      1024: 2, 1116: 1, 1217: 0, 1327: 0, 1448: 1, 1579: 0

Example:

{
    "sum": 4612,
    "values": {
        "1024": 2,
        "1116": 1,
        "1217": 0,
        "1327": 0,
        "1448": 1,
        "1579": 0
    }
}
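The zero-filling of the contiguous bucket range can be sketched in Python. The function names are illustrative, and the bucketing function below (eight exponential buckets per power of two) is an assumption chosen to reproduce the sample boundaries above, not Glean's actual internal function:

```python
import math

def snap_buckets(recorded, next_bucket):
    """Expand recorded {bucket_min: count} into the contiguous range
    that is sent: from the first non-zero bucket to one bucket beyond
    the last non-zero bucket, zero-filled in between."""
    lo, hi = min(recorded), max(recorded)
    sent = {}
    b = lo
    while b <= hi:
        sent[b] = recorded.get(b, 0)
        b = next_bucket(b)
    # One bucket beyond the last, so the upper bound is retained.
    sent[b] = recorded.get(b, 0)
    return sent

def next_exponential_bucket(b):
    """Hypothetical bucketing: 8 exponential buckets per power of 2."""
    n = round(8 * math.log2(b))
    return int(2 ** ((n + 1) / 8))

recorded = {1024: 2, 1116: 1, 1448: 1}
print(snap_buckets(recorded, next_exponential_bucket))
# → {1024: 2, 1116: 1, 1217: 0, 1327: 0, 1448: 1, 1579: 0}
```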

Memory Distribution

A Memory distribution is represented as an object with the following fields.

| Field name | Type | Description |
|------------|------|-------------|
| sum | Integer | The sum of all recorded values. |
| values | Map&lt;String, Integer&gt; | The values in each bucket. The key is the minimum value for the range of that bucket. |

A contiguous range of buckets is always sent. See timing distribution for more details.

Example:

{
    "sum": 3,
    "values": {
        "0": 1,
        "1": 3
    }
}

UUID

A UUID is represented by the string representation of the UUID.

Example

"29711dc8-a954-11e9-898a-eb4ea7e8fd3f"

URL

A URL is represented by the encoded string representation of the URL.

Example

"https://mysearchengine.com/?query=%25s"

Datetime

A Datetime is represented by its ISO8601 string representation, truncated to the metric's time unit. It always includes the timezone offset.

Example

"2019-07-18T14:06:00.000+02:00"

Event

Events are represented as an array of objects, with one object for each event. Each event object has the following keys:

| Field name | Type | Description |
|------------|------|-------------|
| timestamp | Integer | A monotonically increasing timestamp value, in milliseconds. To avoid leaking absolute times, the first timestamp in the array is always zero, and subsequent timestamps in the array are relative to that reference point. |
| category | String | The event's category. This comes directly from the category under which the metric was defined in the metrics.yaml file. |
| name | String | The event's name, as defined in the metrics.yaml file. |
| extra | Object (optional) | Extra data associated with the event. Both the keys and values of this object are strings. The keys must be from the set defined for this event in the metrics.yaml file. The values have a maximum length of 50 bytes, when encoded as UTF-8. |

Example

[
  {
    "timestamp": 0,
    "category": "app",
    "name": "ss_menu_opened"
  },
  {
    "timestamp": 124,
    "category": "search",
    "name": "performed_search",
    "extra": {
      "source": "default.action"
    }
  }
]

Also see the JSON schema for events.

To avoid losing events when the application is killed by the operating system, events are queued on disk as they are recorded. When the application starts up again, there is no good way to determine whether the device has rebooted since the last run, so timestamps recorded in the new run cannot be guaranteed to be consistent with those recorded in the previous run. To get around this, on application startup, any queued events are immediately collected into pings and then cleared. These "startup-triggered pings" are likely to have a very short duration, as recorded in ping_info.start_time and ping_info.end_time (see the ping_info section). The maximum timestamp of the events in these pings is quite likely to exceed the duration of the ping, but this is to be expected.
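The relative-timestamp rule can be sketched as follows. The helper name is illustrative, and the raw timestamps are hypothetical absolute values; only the field names match the payload format:

```python
def normalize_timestamps(events):
    """Make event timestamps relative to the first event, as described
    above: the first timestamp becomes zero, the rest are offsets."""
    if not events:
        return events
    t0 = events[0]["timestamp"]
    return [{**e, "timestamp": e["timestamp"] - t0} for e in events]

raw = [{"name": "ss_menu_opened", "timestamp": 1000},
       {"name": "performed_search", "timestamp": 1124}]
print([e["timestamp"] for e in normalize_timestamps(raw)])  # → [0, 124]
```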

Custom Distribution

A Custom distribution is represented as an object with the following fields.

| Field name | Type | Description |
|------------|------|-------------|
| sum | Integer | The sum of all recorded values. |
| values | Map&lt;String, Integer&gt; | The values in each bucket. The key is the minimum value for the range of that bucket. |

A contiguous range of buckets is always sent, so that the server can aggregate and visualize distributions, without knowing anything about the specific bucketing function used. This range starts with the first bucket (as specified in the range_min parameter), and ends at one bucket beyond the last bucket with a non-zero accumulation (so that the upper bound on the last bucket is retained).

For example, suppose you had a custom distribution defined by the following parameters:

  • range_min: 10
  • range_max: 200
  • bucket_count: 80
  • histogram_type: 'linear'

The following shows the recorded values vs. what is sent in the payload.

recorded:        12: 2,                      22: 1
sent:     10: 0, 12: 2, 14: 0, 17: 0, 19: 0, 22: 1, 24: 0

Example:

{
    "sum": 3,
    "values": {
        "10": 0,
        "12": 2,
        "14": 0,
        "17": 0,
        "19": 0,
        "22": 1,
        "24": 0
    }
}
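The linear bucket boundaries for these parameters can be sketched in Python. The function name is illustrative and the truncation behavior is an assumption chosen to reproduce the sample boundaries above; Glean's actual bucketing function is an implementation detail:

```python
def linear_buckets(range_min, range_max, bucket_count):
    """Sketch of linear bucketing: evenly spaced bucket minima from
    range_min to range_max, truncated to integers."""
    step = (range_max - range_min) / (bucket_count - 1)
    return [int(range_min + i * step) for i in range(bucket_count)]

print(linear_buckets(10, 200, 80)[:7])  # → [10, 12, 14, 17, 19, 22, 24]
```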

Labeled metrics

Currently several labeled metric types are supported.

At the top level, all are represented in the same way: as an object mapping each label to the metric's value. See the individual metric types for details on the value payload:

Example for Labeled Counters

{
    "label1": 2,
    "label2": 17
}

Rate

A Rate is represented by its numerator and denominator.

Example

{
    "numerator": 22,
    "denominator": 7
}

Text

A Text is represented by its string value.

Example

"sample string"

Object

An Object is represented as either a JSON array or JSON object, depending on the allowed structure. Missing values (null) and empty arrays will not be serialized in the payload.

Example

Object:

{
  "colour": "red",
  "diameter": 5
}

Array with objects:

[
  {
    "type": "ERROR",
    "address": "0x000000"
  },
  {
    "type": "ERROR"
  }
]

Directory structure

This page describes the contents of the directories where Glean stores its data.

All Glean data is inside a single root directory with the name glean_data.

On Android, this directory lives inside the ApplicationInfo.dataDir directory associated with the application.

On iOS, this directory lives inside the Documents directory associated with the application.

For the Python bindings, if no directory is specified, data is stored in a temporary directory and cleared at exit.

Within the glean_data directory are the following contents:

  • db: Contains the rkv database used to persist ping and user lifetime metrics.

  • events: Contains flat files containing persisted events before they are collected into pings.

  • pending_pings: Pings are written here before they are picked up by the ping uploader to send to the submission endpoint.

  • deletion_request: The deletion-request ping is written here before it is picked up by the ping uploader. This directory is separate from the pending_pings directory above, in order for an uploader to pick up only deletion-request pings and send them after general upload is disabled.

  • tmp: Pings are written here and then moved to the pending_pings directory when finished, to make sure that partially-written pings do not get queued for sending.
    (The standard system temporary directory is not used for this because it is not guaranteed to be on the same volume as the glean_data directory on Android).

File format

For persistence, assembled ping payloads are stored as files on disk in the directories above. At initialization, the Glean SDK reads any pending ping files and queues them for eventual upload. The files are not meant to be read by outside consumers and their format may change. The Glean SDK will be able to read previous formats if necessary.

The current format has the following newline-delimited lines:

<ping submission path>\n
<json-encoded ping payload>\n
<json-encoded metadata>\n
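A reader for this format can be sketched in Python. The helper name and sample submission path are hypothetical, and treating the metadata line as optional is an assumption:

```python
import json

def parse_pending_ping(contents):
    """Parse one pending-ping file in the newline-delimited format
    described above: submission path, JSON payload, JSON metadata."""
    lines = contents.splitlines()
    submission_path = lines[0]
    payload = json.loads(lines[1])
    # The metadata line is assumed to be optional here.
    metadata = json.loads(lines[2]) if len(lines) > 2 else None
    return submission_path, payload, metadata

contents = '/submit/app-id/baseline/1/1234\n{"ping_info": {}}\n{"headers": {}}\n'
path, payload, metadata = parse_pending_ping(contents)
print(path)  # → /submit/app-id/baseline/1/1234
```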

Debug Pings

For debugging and testing purposes, Glean allows tagging pings, which are then available in the Debug Ping Viewer1.

Pings are sent to the same endpoint as all pings, with the addition of one HTTP header:

X-Debug-ID: <tag>

<tag> is an alphanumeric string with a maximum length of 20 characters, used to identify pings in the Debug Ping Viewer.

See Debugging products using the Glean SDK for detailed information on how to use this mechanism in applications.
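The tag constraint can be sketched as a validation helper. The function name is hypothetical, and the sketch covers only the constraint stated above (alphanumeric, at most 20 characters); whether other characters are accepted is not covered here:

```python
import re

def is_valid_debug_tag(tag):
    """Check a debug tag against the documented constraint:
    alphanumeric, 1 to 20 characters."""
    return re.fullmatch(r"[A-Za-z0-9]{1,20}", tag) is not None

print(is_valid_debug_tag("myDebugTag42"))  # → True
```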


1

Requires a Mozilla login.

Upload mechanism

The glean-core Rust crate does not handle the ping upload directly. Network stacks vastly differ between platforms, applications and operating systems. The Glean SDK leverages the available platform capabilities to implement any network communication.

Glean core controls all upload and coordinates the platform side with its own internals. All language bindings implement ping uploading around a common API and protocol.

The upload module in the language bindings

classDiagram
    class UploadResult {
    }

    class HttpResponse {
        int statusCode
    }

    class UnrecoverableFailure {
    }

    class RecoverableFailure {
    }

    class PingUploader {
        <<interface>>
        +Upload() UploadResult
    }

    class BaseUploader {
        +BaseUploader(PingUploader)
        ~TriggerUploads()
        ~CancelUploads()
    }

    class HttpUploader {
        +Upload() UploadResult
    }

    UploadResult <|-- HttpResponse
    UploadResult <|-- UnrecoverableFailure
    UploadResult <|-- RecoverableFailure
    PingUploader <|-- HttpUploader
    PingUploader o-- BaseUploader

PingUploader

The PingUploader interface describes the contract between the BaseUploader and the SDK or user-provided upload modules.

BaseUploader

The BaseUploader component is responsible for interfacing with the lower level get_upload_task calls and dealing with the logic in a platform-coherent way.

  • Every Glean instance will always have a single BaseUploader instance.
  • The BaseUploader is fed, at Glean initialization, with an instance of an implementation of the PingUploader interface.
  • Whenever BaseUploader decides it should perform an upload, it invokes upload on the provided PingUploader instance with the data it receives from Glean core.
  • Any throttling happens at this layer: the core orchestrates the throttling, while this layer is responsible for abiding by what the core tells it to do.
  • Any logic for aborting uploads or triggering uploads is provided by this object.

HttpClientUploader

The HttpClientUploader is the default SDK-provided HTTP uploader. It acts as an adapter between the platform-specific upload library and the Glean upload APIs.

Note that most of the languages have now diverged from this design due to the many iterations. For example, in Kotlin, the BaseUploader is mostly empty and its functionality is spread across the PingUploadWorker.

Upload task API

The following diagram visualizes the communication between Glean core (the Rust crate), a Glean language binding (e.g. the Kotlin or Swift implementation) and a Glean end point server.

sequenceDiagram
    participant Glean core
    participant Glean wrapper
    participant Server

    Glean wrapper->>Glean core: get_upload_task()
    Glean core->>Glean wrapper: Task::Upload(PingRequest)
    Glean wrapper-->>Server: POST /submit/{task.id}
    Server-->>Glean wrapper: 200 OK
    Glean wrapper->>Glean core: upload_response(200)
    Glean wrapper->>Glean core: get_upload_task()
    Glean core->>Glean wrapper: Task::Done
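The task loop shown in the diagram can be sketched in Python. FakeGlean is a toy stand-in for glean-core's upload task API, and all names here are illustrative, not the actual FFI surface:

```python
from collections import deque

class FakeGlean:
    """Toy stand-in for glean-core's upload task API."""
    def __init__(self, pings):
        self.pings = deque(pings)
        self.responses = []

    def get_upload_task(self):
        # Returns an Upload task while pings remain, then Done.
        return ("Upload", self.pings.popleft()) if self.pings else ("Done", None)

    def process_ping_upload_response(self, uuid, status):
        self.responses.append((uuid, status))

def run_uploader(glean, upload):
    """Fetch tasks until Done, uploading each ping and reporting
    the response back to the core, as in the diagram above."""
    while True:
        kind, ping = glean.get_upload_task()
        if kind == "Done":
            break
        status = upload(ping)  # e.g. an HTTP POST to the endpoint
        glean.process_ping_upload_response(ping, status)

glean = FakeGlean(["uuid-1", "uuid-2"])
run_uploader(glean, lambda ping: 200)
print(glean.responses)  # → [('uuid-1', 200), ('uuid-2', 200)]
```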

Glean core will take care of file management, cleanup, rescheduling and rate limiting1.

1

Rate limiting is achieved by limiting the number of times a language binding is allowed to get a Task::Upload(PingRequest) from get_upload_task in a given time interval. Currently, the default limit is a maximum of 15 upload tasks every 60 seconds, and there are no exposed methods that allow changing this default (follow Bug 1647630 for updates). If the caller has reached the maximum number of tasks for the current interval, they will get a Task::Wait regardless of whether other Task::Upload(PingRequest)s are queued.
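The rate-limiting rule can be sketched as a simple interval counter. Class and method names are illustrative, not Glean's actual API; only the documented defaults (15 tasks per 60 seconds) are taken from the text above:

```python
import time

class UploadThrottler:
    """Sketch: at most max_tasks Upload tasks per interval seconds;
    callers past the limit get a Wait task instead."""
    def __init__(self, max_tasks=15, interval=60.0, now=time.monotonic):
        self.max_tasks = max_tasks
        self.interval = interval
        self.now = now
        self.window_start = now()
        self.count = 0

    def try_upload(self):
        t = self.now()
        if t - self.window_start >= self.interval:
            # A new interval begins: reset the counter.
            self.window_start = t
            self.count = 0
        if self.count >= self.max_tasks:
            return "Wait"
        self.count += 1
        return "Upload"
```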

Available APIs

For direct Rust consumers the global Glean object provides these methods:

/// Gets the next task for an uploader.
fn get_upload_task(&self) -> PingUploadTask

/// Processes the response from an attempt to upload a ping.
fn process_ping_upload_response(&self, uuid: &str, status: UploadResult)

See the documentation for further usage and explanation of the additional types:

For other consumers (e.g. Kotlin/Swift/Python implementations) these functions are available:

/// Gets the next task for an uploader.
PingUploadTask glean_get_upload_task();

/// Processes the response from an attempt to upload a ping.
void glean_process_ping_upload_response(string uuid, UploadResult result);

Implementations

| Project Name | Language Bindings | Operating System | App Lifecycle Type | Environment Data source |
|--------------|-------------------|------------------|--------------------|-------------------------|
| glean-core | Rust | all | all | none |
| glean | Rust | Windows/Mac/Linux | Desktop application | OS info build-time autodetected, app info passed in |
| Glean Android | Kotlin, Java | Android | Mobile app | Autodetected from the Android environment |
| Glean iOS | Swift | iOS | Mobile app | Autodetected from the iOS environment |
| Glean.py | Python | Windows/Mac/Linux | all | Autodetected at runtime |
| FOG1 | Rust/C++/JavaScript | as Firefox supports | Desktop application | OS info build-time autodetected, app info passed in |

Features matrix

Feature/BindingsKotlinSwiftPythonRust
Core metric types
Metrics Testing API
baseline ping23
metrics4
events5
deletion-request ping
Custom pings
Custom pings testing API
Debug Ping View support

1

Firefox on Glean (FOG) is the name of the layer that integrates the Glean SDK into Firefox Desktop. It uses the Glean Rust bindings and exposes the same Rust API inside Firefox and extends it with a C++ and JavaScript API.

2

Not sent automatically. Use the handle_client_active and handle_client_inactive API.

3

Sent automatically on startup if necessary. For active/inactive pings use the handle_client_active and handle_client_inactive API.

4

Needs to be enabled using use_core_mps in the Configuration.

5

Sent on startup when pending events are stored and when reaching the limit. Additionally sent when handle_client_inactive is called.

The following language-specific API docs are available: