Developing the Glean SDK


The Glean SDK is a modern approach for a Telemetry library and is part of the Glean project.

To contact us you can:

The source code is available on GitHub.

About this book

This book is meant for developers of the Glean SDK.

For user documentation on the Glean SDK, refer to the Glean Book.

For documentation about the broader end-to-end Glean project, refer to the Mozilla Data Documentation.

License

The Glean SDK Source Code is subject to the terms of the Mozilla Public License v2.0. You can obtain a copy of the MPL at https://mozilla.org/MPL/2.0/.

Running the tests

Running all tests

The tests for all languages may be run from the command line:

make test

Windows Note: On Windows, make is not available by default. While not required, installing make will allow you to use the convenience features in the Makefile.

Running the Rust tests

The Rust tests may be run with the following command:

cargo test --all

Log output can be controlled via the environment variable RUST_LOG for the glean_core crate:

export RUST_LOG=glean_core=debug

When running tests with logging, you need to tell cargo not to suppress output:

cargo test -- --nocapture

Tests run in parallel by default, which leads to interleaved log lines and makes it harder to understand what's going on. For debugging you can force single-threaded test execution:

cargo test -- --nocapture --test-threads=1
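The options from this section can be combined into a single invocation. The wrapper function below is purely illustrative (it prints the command instead of running it, since executing it requires a mozilla/glean checkout):

```shell
# Illustrative wrapper: the combined cargo invocation with debug logging
# for glean_core, unsuppressed output, and single-threaded execution so
# log lines do not interleave.
rust_test_cmd() {
  echo 'RUST_LOG=glean_core=debug cargo test --all -- --nocapture --test-threads=1'
}

rust_test_cmd
```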

Running the Kotlin/Android tests

From the command line

The full Android test suite may be run from the command line with:

./gradlew test

From Android Studio

To run the full Android test suite, in the "Gradle" pane, navigate to glean-core -> Tasks -> verification and double-click either testDebugUnitTest or testReleaseUnitTest (depending on whether you want to run in Debug or Release mode). You can save this task permanently by opening the task dropdown in the toolbar and selecting "Save glean.rs:glean:android [testDebugUnitTest] Configuration".

To run a single Android test, navigate to the file containing the test, and right click on the green arrow in the left margin next to the test. There you have a choice of running or debugging the test.

Running the Swift/iOS tests

From the command line

The full iOS test suite may be run from the command line with:

make test-swift

From Xcode

To run the full iOS test suite, run tests in Xcode (Product -> Test). To run a single Swift test, navigate to the file containing the test, and click on the arrow in the left margin next to the test.

Testing in CI

See Continuous Integration for details.

How the Glean CI works

Current build status: Build Status

The mozilla/glean repository uses both CircleCI and TaskCluster to run tests on each Pull Request, each merge to the main branch and on releases.

Which tasks we run

Pull Request - testing

For each opened pull request we run a number of checks to ensure consistent formatting, no lint failures and that tests across all language bindings pass. These checks are required to pass before a Pull Request is merged. This includes by default the following tasks:

On CircleCI:

  • Formatting & lints for Rust, Kotlin, Swift & Python
  • Consistency checks for generated header files, license checks across dependencies and a schema check
  • Builds & tests for Rust, Kotlin, Python & C#
    • The Python language bindings are built for multiple platforms and Python versions
  • Documentation generation for all languages and the book, including spell check & link check
  • By default no Swift/iOS tests are launched. These can be run manually. See below.

On TaskCluster:

  • A full Android build & test

For all tasks you can always find the full log and generated artifacts by following the "Details" links on each task.

iOS tests on Pull Request

Due to their long runtime and cost, iOS tests are not run by default on pull requests. Admins of the mozilla/glean repository can run them on demand.

  1. On the pull request, scroll to the list of all checks running on that PR at the bottom.
  2. Find the job titled "ci/circleci: iOS/hold".
  3. Click "Details" to get to Circle CI.
  4. Click on the "hold" task, then approve it.
  5. The iOS tasks will now run.

Merge to main

Current build status: Build Status

When pull requests are merged to the main branch we run the same checks as we do on pull requests. Additionally we run the following tasks:

  • Documentation deployment
  • iOS build, unit & integration test

If you notice that they fail, please take a look and open a bug to investigate the build failure.

Releases

When cutting a new release of the Glean SDK our CI setup is responsible for building, packaging and uploading release builds. We don't run the full test suite again.

We run the following tasks:

On CircleCI:

On TaskCluster:

On release the full TaskCluster suite takes up to 30 minutes to run.

How to find TaskCluster tasks

  1. Go to GitHub releases and find the version
  2. Go to the release commit, indicated by its commit id on the left
  3. Click the green tick or red cross left of the commit title to open the list of all checks.
  4. Find the "Decision task" and click "Details"
  5. From the list of tasks pick the one you want to look at and follow the "View task in Taskcluster" link.

Special behavior

Documentation-only changes

Documentation is deployed from CI, so it needs to run on documentation changes. However, some of the long-running code tests can be skipped. To do that, add the following literal string to the last commit message of your pull request:

[doc only]

Skipping CI completely

It is possible to completely skip running CI checks on a pull request.

To skip tasks on CircleCI include the following literal string in the commit message.
To skip tasks on TaskCluster add the following to the title of your pull request.

[ci skip]

This should only be used for metadata files, such as those in .github, LICENSE or CODE_OF_CONDUCT.md.
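A small helper can check whether a commit message (or, for TaskCluster, a pull request title) contains one of these directives before pushing. This is an illustrative sketch, not part of the Glean tooling; only the literal strings come from this chapter:

```shell
# Illustrative helper: print which of the CI directives from this
# chapter appear in a given message. Not part of Glean's tooling.
ci_directives() {
  msg="$1"
  for d in '[doc only]' '[ci skip]'; do
    case "$msg" in
      *"$d"*) printf '%s\n' "$d" ;;
    esac
  done
}

ci_directives 'Fix typos in the book [doc only]'   # -> [doc only]
```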

Release-like Android builds

Release builds are done on TaskCluster. Because it builds for multiple platforms, this task is quite long and does not run on pull requests by default.

It is possible to trigger the task on pull requests. Add the following literal string to the pull request title:

[ci full]

If added after initially opening the pull request, you need to close, then re-open the pull request to trigger a new build.

The Decision Task will spawn module-build-glean and module-build-glean-gradle-plugin tasks. When they finish you will find the generated files under Artifacts on TaskCluster.

Glean release process

The Glean SDK consists of multiple libraries for different platforms and targets. The main supported libraries are released as one. Development happens on the main repository https://github.com/mozilla/glean. See Contributing for how to contribute changes to the Glean SDK.

The development & release process roughly follows the GitFlow model.

Note: The rest of this section assumes that upstream points to the https://github.com/mozilla/glean repository, while origin points to the developer fork. For some developer workflows, upstream can be the same as origin.


Published artifacts

Standard Release

Releases can only be done by one of the Glean maintainers.

  • Main development branch: main
  • Main release branch: release
  • Specific release branch: release-vX.Y.Z
  • Hotfix branch: hotfix-vX.Y.(Z+1)
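The branch names above follow a mechanical scheme, so they can be derived from a version number. A sketch (the helper names are ours; the naming scheme and the hotfix patch bump are the ones used in this chapter):

```shell
# Illustrative helpers deriving branch names from a semantic version.
release_branch() { printf 'release-v%s\n' "$1"; }

hotfix_branch() {
  # A hotfix for the released version X.Y.Z targets X.Y.(Z+1).
  major=${1%%.*}; rest=${1#*.}; minor=${rest%%.*}; patch=${rest#*.}
  printf 'hotfix-v%s.%s.%s\n' "$major" "$minor" "$((patch + 1))"
}

release_branch 25.0.0   # -> release-v25.0.0
hotfix_branch 25.0.0    # -> hotfix-v25.0.1
```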

Create a release branch

  1. Create a release branch from the main branch:
    git checkout -b release-v25.0.0 main
    
  2. Update the changelog.
    1. Add any missing important changes under the Unreleased changes headline.
    2. Commit any changes to the changelog file due to the previous step.
  3. Run bin/prepare-release.sh <new version> to bump the version number.
    1. The new version should be the next patch, minor or major version of what is currently released.
    2. Let it create a commit for you.
  4. Push the new release branch:
    git push upstream release-v25.0.0
    
  5. Wait for CI to finish on that branch and ensure it's green.
  6. Apply additional commits for bug fixes to this branch.
    • Adding large new features here is strictly prohibited. They need to go to the main branch and wait for the next release.

Finish a release branch

When CI has finished and is green for your specific release branch, you are ready to cut a release.

  1. Check out the main release branch:
    git checkout release
    
  2. Merge the specific release branch:
    git merge --no-ff release-v25.0.0
    
  3. Push the main release branch:
    git push upstream release
    
  4. Tag the release on GitHub:
    1. Draft a New Release in the GitHub UI (Releases > Draft a New Release).
    2. Enter v<myversion> as the tag. It's important that this is the same version you passed to the bin/prepare-release.sh script, with the v prefix added.
    3. Select the release branch as the target.
    4. Under the description, paste the contents of the release notes from CHANGELOG.md.
  5. Wait for the CI build to complete for the tag.
  6. Send a pull request to merge back the specific release branch to the development branch: https://github.com/mozilla/glean/compare/main...release-v25.0.0?expand=1
    • This is important so that no changes are lost.
    • This might have merge conflicts with the main branch, which you need to fix before it is merged.
  7. Once the above pull request lands, delete the specific release branch.
  8. Update glean-ffi in the iOS megazord. See the application-services documentation for that.
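The tag entered in step 4 is simply the release version with a v prefix. A sketch of building and sanity-checking it (the helper names are illustrative, not part of the Glean tooling):

```shell
# Illustrative: derive the GitHub release tag from a version and check
# that a tag follows the v<major>.<minor>.<patch> convention.
tag_for_version() { printf 'v%s\n' "$1"; }

is_release_tag() {
  printf '%s' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

tag_for_version 25.0.0             # -> v25.0.0
is_release_tag v25.0.0 && echo ok  # -> ok
```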

Hotfix release for latest version

If the latest released version requires a bug fix, a hotfix branch is used.

Create a hotfix branch

  1. Create a hotfix branch from the main release branch:
    git checkout -b hotfix-v25.0.1 release
    
  2. Run bin/prepare-release.sh <new version> to bump the version number.
    1. The new version should be the next patch version of what is currently released.
    2. Let it create a commit for you.
  3. Push the hotfix branch:
    git push upstream hotfix-v25.0.1
    
  4. Create a local hotfix branch for bugfixes:
    git checkout -b bugfix hotfix-v25.0.1
    
  5. Fix the bug and commit the fix in one or more separate commits.
  6. Push your bug fixes and create a pull request against the hotfix branch: https://github.com/mozilla/glean/compare/hotfix-v25.0.1...your-name:bugfix?expand=1
  7. When that pull request lands, wait for CI to finish on that branch and ensure it's green.

Finish a hotfix branch

When CI has finished and is green for your hotfix branch, you are ready to cut a release, similar to a normal release:

  1. Check out the main release branch:
    git checkout release
    
  2. Merge the hotfix branch:
    git merge --no-ff hotfix-v25.0.1
    
  3. Push the main release branch:
    git push upstream release
    
  4. Tag the release on GitHub:
    1. Draft a New Release in the GitHub UI (Releases > Draft a New Release).
    2. Enter v<myversion> as the tag. It's important that this is the same version you passed to the bin/prepare-release.sh script, with the v prefix added.
    3. Select the release branch as the target.
    4. Under the description, paste the contents of the release notes from CHANGELOG.md.
  5. Wait for the CI build to complete for the tag.
  6. Send a pull request to merge back the hotfix branch to the development branch: https://github.com/mozilla/glean/compare/main...hotfix-v25.0.1?expand=1
    • This is important so that no changes are lost.
    • This might have merge conflicts with the main branch, which you need to fix before it is merged.
  7. Once the above pull request lands, delete the hotfix branch.
  8. Update glean-ffi in the iOS megazord. See the application-services documentation for that.

Hotfix release for previous version

If you need to release a hotfix for a previously released version (that is: not the latest released version), you need a support branch.

Note: This should rarely happen. We generally support only the latest released version of Glean.

Create a support and hotfix branch

  1. Create a support branch from the version tag and push it:
    git checkout -b support/v24.0 v24.0.0
    git push upstream support/v24.0
    
  2. Create a hotfix branch for this support branch:
    git checkout -b hotfix-v24.0.1 support/v24.0
    
  3. Fix the bug and commit the fix in one or more separate commits into your hotfix branch.
  4. Push your bug fixes and create a pull request against the support branch: https://github.com/mozilla/glean/compare/support/v24.0...your-name:hotfix-v24.0.1?expand=1
  5. When that pull request lands, wait for CI to finish on that branch and ensure it's green.
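The support branch name in step 1 drops the patch component of the release tag (v24.0.0 becomes support/v24.0). A sketch (the helper name is ours):

```shell
# Illustrative: derive the support branch name from a release tag by
# stripping the trailing patch component.
support_branch() {
  tag="$1"                            # e.g. v24.0.0
  printf 'support/%s\n' "${tag%.*}"   # drop the trailing ".Z"
}

support_branch v24.0.0   # -> support/v24.0
```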

Finish a support branch

  1. Check out the support branch:
    git checkout support/v24.0
    
  2. Update the changelog.
    1. Add any missing important changes under the Unreleased changes headline.
    2. Commit any changes to the changelog file due to the previous step.
  3. Run bin/prepare-release.sh <new version> to bump the version number.
    1. The new version should be the next patch version of the support branch.
    2. Let it create a commit for you.
  4. Push the support branch:
    git push upstream support/v24.0
    
  5. Tag the release on GitHub:
    1. Draft a New Release in the GitHub UI (Releases > Draft a New Release).
    2. Enter v<myversion> as the tag. It's important that this is the same version you passed to the bin/prepare-release.sh script, with the v prefix added.
    3. Select the support branch (e.g. support/v24.0) as the target.
    4. Under the description, paste the contents of the release notes from CHANGELOG.md.
  6. Wait for the CI build to complete for the tag.
  7. Send a pull request to merge back any bug fixes to the development branch: https://github.com/mozilla/glean/compare/main...support/v24.0?expand=1
    • This is important so that no changes are lost.
    • This might have merge conflicts with the main branch, which you need to fix before it is merged.
  8. Once the above pull request lands, delete the support branch.

Upgrading android-components to a new version of Glean

On Android, Mozilla products consume the Glean SDK through its wrapper in android-components. Therefore, when a new Glean SDK release is made, android-components must also be updated.

After following one of the above instructions to make a Glean SDK release:

  1. Ensure that CI has completed and the artifacts are published on Mozilla's Maven repository.

  2. Create a pull request against android-components to update the Glean version with the following changes:

    • The Glean version is updated in the mozilla_glean variable in the buildSrc/src/main/java/Dependencies.kt file.

    • The relevant parts of the Glean changelog copied into the top part of the android-components changelog. This involves copying the Android-specific changes and the general changes to Glean, but can omit other platform-specific changes.

IMPORTANT: Until the Glean Gradle plugin work is complete, all downstream consumers of android-components will also need to update their version of Glean to match the version used in android-components so that their unit tests can run correctly.

In Fenix, for example, the Glean version is specified here.

Contributing to the Glean SDK

Anyone is welcome to help with the Glean SDK project. Feel free to get in touch with other community members on chat.mozilla.org or through issues on GitHub or Bugzilla.

Participation in this project is governed by the Mozilla Community Participation Guidelines.

Bug Reports

To report issues or request changes, file a bug in Bugzilla in Data Platform & Tools :: Glean: SDK.

If you don't have a Bugzilla account, we also accept issues on GitHub.

Making Code Changes

To work on the code in this repository you will need to be familiar with the Rust programming language. You can get a working rust compiler and toolchain via rustup.

You can check that everything compiles by running the following from the root of your checkout:

make test-rust

If you plan to work on the Android component bindings, you should also review the instructions for setting up an Android build environment.

To run all Kotlin tests:

make test-kotlin

or run tests in Android Studio.

To run all Swift tests:

make test-swift

or run tests in Xcode.

Sending Pull Requests

Patches should be submitted as pull requests (PRs).

Before submitting a PR:

  • Your code must run and pass all the automated tests before you submit your PR for review.
  • "Work in progress" pull requests are allowed to be submitted, but should be clearly labeled as such and should not be merged until all tests pass and the code has been reviewed.
  • For changes to Rust code
    • make test-rust produces no test failures
    • make clippy runs without emitting any warnings or errors.
    • make rustfmt does not produce any changes to the code.
  • For changes to Kotlin code
    • make test-kotlin runs without emitting any warnings or errors.
    • make ktlint runs without emitting any warnings.
  • For changes to Swift code
    • make test-swift (or running tests in Xcode) runs without emitting any warnings or errors.
    • make swiftlint runs without emitting any warnings or errors.
  • Your patch should include new tests that cover your changes. It is your and your reviewer's responsibility to ensure your patch includes adequate tests.

When submitting a PR:

  • You agree to license your code under the project's open source license (MPL 2.0).
  • Base your branch off the current main.
  • Add both your code and new tests if relevant.
  • Please do not include merge commits in pull requests; include only commits with the new relevant code.

Code Review

This project is production Mozilla code and subject to our engineering practices and quality standards. Every patch must be peer reviewed by a member of the Glean core team.

Reviewers are defined in the CODEOWNERS file and are automatically added for every pull request. Every pull request needs to be approved by at least one of these people before landing.

The submitter needs to decide, at their own discretion, whether the changes require a look from more than a single reviewer or any outside developer. Reviewers can also ask for additional approval from other reviewers.

Release

See the Release process on how to release a new version of the Glean SDK.

Code Coverage

In computer science, test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. (Wikipedia)

This chapter describes how to generate a traditional code coverage report over the Kotlin, Rust, Swift and Python code in the Glean SDK repository. To learn how to generate a coverage report about what metrics your project is testing, see the user documentation on generating testing coverage reports.

Generating Kotlin reports locally

Locally you can generate a coverage report with the following command:

./gradlew -Pcoverage :glean:build

After that you'll find an HTML report at the following location:

glean-core/android/build/reports/jacoco/jacocoTestReport/jacocoTestReport/html/index.html

Generating Rust reports locally

Generating the Rust coverage report requires a significant amount of RAM during the build.

We use grcov to collect and aggregate code coverage information. Releases can be found on the grcov Release page.

The build process requires a Rust Nightly version. Install it using rustup:

rustup toolchain install nightly

To generate an HTML report, genhtml from the lcov package is required. Install it through your system's package manager.

After installation you can build the Rust code and generate a report:

export CARGO_INCREMENTAL=0
export RUSTFLAGS='-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Zno-landing-pads'
cargo +nightly test

zip -0 ccov.zip `find . \( -name "glean*.gc*" \) -print`
grcov ccov.zip -s . -t lcov --llvm --branch --ignore-not-existing --ignore-dir "/*" -o lcov.info
genhtml -o report/ --show-details --highlight --ignore-errors source --legend lcov.info

After that you'll find an HTML report at the following location:

report/index.html

Generating Swift reports locally

Xcode automatically generates code coverage when running tests. You can find the report in the Report Navigator (View -> Navigators -> Show Report Navigator -> Coverage).

Generating Python reports locally

Python code coverage is determined using the coverage.py library.

Run

make python-coverage

to generate code coverage reports in the Glean virtual environment.

After running, the report will be in htmlcov.

Developing documentation

The documentation in this repository pertains to the Glean SDK. That is, the client-side code for Glean telemetry. Documentation for Glean in general, and the Glean-specific parts of the data pipeline and analysis is documented elsewhere in the firefox-data-docs repository.

The main narrative documentation is written in Markdown and converted to static HTML using mdbook.

API docs are also generated from docstrings for Rust, Kotlin, Swift and Python.

Building documentation

Building the narrative (book) documentation

The mdbook crate is required in order to build the narrative documentation:

cargo install mdbook mdbook-mermaid mdbook-open-on-gh

Then both the narrative and Rust API docs can be built with:

make docs
# ...or...
bin/build-rust-docs.sh
# ...or on Windows...
bin/build-rust-docs.bat

The built narrative documentation is saved in build/docs/book, and the Rust API documentation is saved in build/docs/docs.

Building API documentation

Kotlin API documentation is generated using dokka. It is automatically installed by Gradle.

To build the Kotlin API documentation:

make kotlin-docs

The generated documentation is saved in build/docs/javadoc.

Swift API documentation is generated using jazzy. It can be installed using:

  1. Install the latest Ruby: brew install ruby
  2. Make the installed Ruby available: export PATH=/usr/local/opt/ruby/bin:$PATH (and add that line to your .bashrc)
  3. Install the documentation tool: gem install jazzy

To build the Swift API documentation:

make swift-docs

The generated documentation is saved in build/docs/swift.

The Python API docs are generated using pdoc3. It is installed as part of creating the virtual environment for Python development.

To build the Python API documentation:

make python-docs

The generated documentation is saved in build/docs/python.

TODO. To be implemented in bug 1648410.

Checking links

Internal links within the documentation can be checked using the link-checker tool. External links are currently not checked, since this takes considerable time and frequently fails in CI due to networking restrictions or issues.

Link checking requires building the narrative documentation as well as all of the API documentation for all languages. It is rare to build all of these locally (and in particular, the Swift API documentation can only be built on macOS), therefore it is reasonable to let CI catch broken link errors for you.

If you do want to run the link-checker locally, it can be installed using npm or your system's package manager. Then, run link-checker with:

make linkcheck

Spell checking

The narrative documentation (but not the API documentation) is spell checked using aspell.

On Unix-like platforms, it can be installed using your system's package manager:

sudo apt install aspell-en
# ...or...
sudo dnf install aspell-en
# ...or...
brew install aspell

Note that aspell 0.60.8 or later is required, as that is the first release with markdown support.
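You can verify that the installed version meets this requirement. The check below is an illustrative sketch using sort -V (GNU coreutils) for version comparison; only the 0.60.8 minimum comes from this section:

```shell
# Illustrative check: succeeds if version $1 is at least version $2.
# Relies on sort -V (GNU coreutils) for version-aware ordering.
version_at_least() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# In a real check, the first argument would come from `aspell --version`.
version_at_least 0.60.8 0.60.8 && echo 'aspell is new enough'
```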

You can then spell check the narrative documentation using the following:

make spellcheck
# ...or...
bin/spellcheck.sh

This will bring up an interactive spell-checking environment in the console. Pressing a to add a word will add it to the project's local .dictionary file, and the changes should be committed to git.

Upgrading glean_parser

To upgrade the version of glean_parser used by the Glean SDK, run the bin/update-glean-parser-version.sh script, providing the version as a command line parameter:

bin/update-glean-parser-version.sh 1.28.3

This will update the version in all of the required places. Commit those changes to git and submit a pull request.

No further steps are required to use the new version of glean_parser for code generation: all of the build integrations automatically update glean_parser to the correct version.

For testing the Glean Python bindings, the virtual environment needs to be deleted to force an upgrade of glean_parser:

rm -rf glean-core/python/.venv*

Android bindings

The Glean SDK contains the Kotlin bindings for use by Android applications. It makes use of the underlying Rust component with a Kotlin-specific API on top. It also includes integrations into the Android platform.

Setup the Android Build Environment

Doing a local build of the Glean SDK:

This document describes how to make local builds of the Android bindings in this repository. Most consumers of these bindings do not need to follow this process, but will instead use pre-built bindings.

Prepare your build environment

Typically, this process only needs to be run once, although some steps need to be repeated periodically (e.g., Rust updates).

Setting up Android dependencies

With Android Studio

The easiest way to install all the dependencies (and automatically handle updates), is by using Android Studio. Once this is installed, start Android Studio and open the Glean project. If Android Studio asks you to upgrade the version of Gradle, decline.

The following dependencies can be installed in Android Studio through Tools > SDK Manager > SDK Tools:

  • Android SDK Tools (may already be selected)
  • NDK r21
  • CMake
  • LLDB

You should be set to build Glean from within Android Studio now.

Manually

Set JAVA_HOME to be the location of Android Studio's JDK which can be found in Android Studio's "Project Structure" menu (you may need to escape spaces in the path).

Note down the location of your SDK. If installed through Android Studio you will find the Android SDK Location in Tools > SDK Manager.

Set the ANDROID_HOME environment variable to that path. Alternatively add the following line to the local.properties file in the root of the Glean checkout (create the file if it does not exist):

sdk.dir=/path/to/sdk

For the Android NDK:

  1. Download NDK r21 from https://developer.android.com/ndk/downloads.
  2. Extract it and put it somewhere ($HOME/.android-ndk-r21 is a reasonable choice, but it doesn't matter).
  3. Add the following line to the local.properties file in the root of the Glean checkout (create the file if it does not exist):
    ndk.dir=/path/to/.android-ndk-r21
    

Setting up Rust

Rust can be installed using rustup, with the following commands:

curl https://sh.rustup.rs -sSf | sh
rustup update

Platform specific toolchains need to be installed for Rust. This can be done using the rustup target command. In order to enable building for real devices and Android emulators, the following targets need to be installed:

rustup target add aarch64-linux-android
rustup target add armv7-linux-androideabi
rustup target add i686-linux-android
rustup target add x86_64-linux-android

Building

Before building:

  • Ensure your repository is up-to-date.
  • Ensure Rust is up-to-date by running rustup update.

With Android Studio

After importing the Glean project into Android Studio it should be all set up, and you can build the project using Build > Make Project.

Manually

The builds are all performed by ./gradlew and the general syntax used is ./gradlew project:task.

Build the whole project and run tests:

./gradlew build

You can see a list of projects by executing ./gradlew projects and a list of tasks by executing ./gradlew tasks.

FAQ

  • Q: Android Studio complains about Python not being found when building.
  • A: Make sure that the python binary is on your PATH. On Windows, in addition to that, it might be required to add its full path to the rust.pythonCommand entry in $project_root/local.properties.

Android SDK / NDK versions

The Glean SDK implementation is currently built against the following versions:

For the full setup see Setup the Android Build Environment.

The versions are defined in the following files. All locations need to be updated on upgrades:

  • Documentation
    • this file (docs/dev/core/internal/sdk-ndk-versions.md)
    • dev/android/setup-android-build-environment.md
  • CI configuration
    • .circleci/config.yml
      • sdkmanager 'build-tools;28.0.3'
      • image: circleci/android:api-28-ndk
    • taskcluster/docker/linux/Dockerfile.
      • ENV ANDROID_BUILD_TOOLS "28.0.3"
      • ENV ANDROID_SDK_VERSION "3859397"
      • ENV ANDROID_PLATFORM_VERSION "28"
      • ENV ANDROID_NDK_VERSION "r21"

Working on unreleased Glean code in android-components

This is a companion to the equivalent instructions for the android-components repository.

Modern Gradle supports composite builds, which allow substituting on-disk projects for binary publications. Composite builds transparently accomplish what is usually a frustrating loop of:

  1. change library
  2. publish library snapshot to the local Maven repository
  3. consume library snapshot in application

Note: this substitution-based approach will not work for testing updates to the Glean Gradle Plugin. For that to work, the library and the plugin need to be published to a local Maven repository and manually imported in the consuming projects. Please refer to Using locally-published Glean in Fenix for how to do that.

Preparation

Clone the Glean SDK and android-components repositories:

git clone https://github.com/mozilla/glean
git clone https://github.com/mozilla-mobile/android-components

Cargo build targets

By default, when building Android Components using Gradle, the gradle-cargo plugin compiles Glean for every possible platform, which might end in build failures. You can customize which targets are built in glean/local.properties (more information):

# For physical devices:
rust.targets=arm

# For unit tests:
# rust.targets=darwin # Or linux-*, windows-* (* = x86/x64)

# For emulator only:
rust.targets=x86
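Which value to put in glean/local.properties can be written down as a small lookup. The helper below is illustrative and not part of Glean's tooling; the rust.targets values are the ones shown in the snippet above:

```shell
# Illustrative lookup: the rust.targets line for glean/local.properties
# depending on where you want to run. Not part of Glean's tooling.
rust_targets_for() {
  case "$1" in
    device)     echo 'rust.targets=arm' ;;     # physical device
    emulator)   echo 'rust.targets=x86' ;;     # Android emulator
    unit-tests) echo 'rust.targets=darwin' ;;  # or linux-*/windows-* (* = x86/x64)
    *)          echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

rust_targets_for emulator   # -> rust.targets=x86
```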

Substituting projects

android-components has custom build logic for dealing with composite builds, so you should be able to configure it by simply adding the path to the Glean repository in the correct local.properties file:

In android-components/local.properties:

substitutions.glean.dir=../glean

If this doesn't seem to work, or if you need to configure composite builds for a project that does not contain this custom logic, add the following to settings.gradle:

In android-components/settings.gradle:

includeBuild('../glean') {
  dependencySubstitution {
    substitute module('org.mozilla.telemetry:glean') with project(':glean')
    substitute module('org.mozilla.telemetry:glean-forUnitTests') with project(':glean')
  }
}

Composite builds will ensure the Glean SDK is built as part of tasks inside android-components.

Caveat

There's a big gotcha with library substitutions: the Gradle build computes lazily, and AARs don't include their transitive dependencies' JNI libraries. This means that in android-components, ./gradlew :service-glean:assembleDebug does not invoke :glean:cargoBuild, even though :service-glean depends on the substitution for :glean and even if the inputs to Cargo have changed! It's the final consumer of the :service-glean project (or publication) that will incorporate the JNI libraries.

In practice that means you should always be targeting something that produces an APK: a test or a sample module. Then you should find that the cargoBuild tasks are invoked as you expect.

Inside the android-components repository ./gradlew :samples-glean:connectedAndroidTest should work. Other tests like :service-glean:testDebugUnitTest or :support-sync-telemetry:testDebugUnitTest will currently fail, because the JNI libraries are not included.

Notes

This document is based on the equivalent documentation for application-services: Development with the Reference Browser

  1. Transitive substitutions (as shown above) work but require newer Gradle versions (4.10+).

Using locally-published Glean in Fenix

Note: This is a bit tedious, and you might like to try the substitution-based approach documented in Working on unreleased Glean code in android-components. That approach is still fairly new, and the local-publishing approach in this document is necessary if it fails.

Note: This is Fenix-specific only in that some links on the page go to the mozilla-mobile/fenix repository, however these steps should work for e.g. reference-browser, as well. (Same goes for Lockwise, or any other consumer of Glean, but they may use a different structure -- Lockwise has no Dependencies.kt, for example)

Preparation

Clone the Glean SDK, android-components and Fenix repositories:

git clone https://github.com/mozilla/glean
git clone https://github.com/mozilla-mobile/android-components
git clone https://github.com/mozilla-mobile/fenix/

Local publishing

  1. Inside the glean repository root:

    1. In .buildconfig.yml, change libraryVersion to end in -TESTING$N [1], where $N is some number that you haven't used for this before.

      Example: libraryVersion: 22.0.0-TESTING1

    2. Check your local.properties file, and add rust.targets=x86 if you're testing on the emulator, rust.targets=arm if you're testing on 32-bit arm (arm64 for 64-bit arm, etc). This will make the build that's done in the next step much faster.

    3. Run ./gradlew publishToMavenLocal. This may take a few minutes.

  2. Inside the android-components repository root:

    1. In .buildconfig.yml, change componentsVersion to end in -TESTING$N [1], where $N is some number that you haven't used for this before.

      Example: componentsVersion: 24.0.0-TESTING1

    2. Inside buildSrc/src/main/java/Dependencies.kt, change mozilla_glean to reference the libraryVersion you defined in step 1 part 1.

      Example: const val mozilla_glean = "22.0.0-TESTING1"

    3. Inside build.gradle, add mavenLocal() inside allprojects { repositories { <here> } }.

    4. Add the following block in the settings.gradle file, before any other statement in the script, in order to have the Glean Gradle plugin loaded from the local Maven repository.

      pluginManagement {
        repositories {
            mavenLocal()
            gradlePluginPortal()
        }
      }
      
    5. Inside the android-components local.properties file, ensure substitutions.glean.dir is NOT set.

    6. Run ./gradlew publishToMavenLocal.

  3. Inside the fenix repository root:

    1. Inside build.gradle you need to add mavenLocal() twice.
      First add mavenLocal() inside buildscript like this:

      buildscript {
          repositories {
              mavenLocal()
              ...
      

      Second add mavenLocal() inside allprojects:

      allprojects {
          repositories {
              mavenLocal()
              ...
      
    2. Inside buildSrc/src/main/java/AndroidComponents.kt, change VERSION to the componentsVersion you defined in step 2 part 1.

      Example:

      const val VERSION = "24.0.0-TESTING1"
      

You should now be able to build and run Fenix (assuming you could before all this).

Caveats

  1. This assumes you have followed the Android/Rust build setup.
  2. Make sure you're fully up to date in all repositories, unless you know you need to not be.
  3. This omits the steps needed if Glean made a breaking change to an API used in android-components. Such breakage should be straightforward to fix; you can usually find a PR with the fixes somewhere in android-components' list of pending PRs (or, failing that, a description of what to do in the Glean changelog).
  4. If in doubt, ask in the #glean channel on chat.mozilla.org.

Notes

This document is based on the equivalent documentation for application-services: Using locally-published components in Fenix


[1] It doesn't have to end with -TESTING$N; it only needs to have the format -someidentifier. -SNAPSHOT$N is also very common to use; however, without the numeric suffix, -SNAPSHOT has a specific meaning to Gradle, so we avoid it. Additionally, while the $N we have used in our running example has matched throughout (e.g. all of the identifiers ended in -TESTING1), this is not required, so long as you match everything up correctly at the end.
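To make the suffix rule concrete, here is a small, purely illustrative Python check (the function and regex are inventions for this sketch, not part of any Glean or Gradle tooling) that accepts custom identifiers like -TESTING1 but rejects a bare -SNAPSHOT:

```python
import re

def is_safe_local_suffix(version: str) -> bool:
    """Check that a locally-published version string ends in a custom
    identifier suffix, e.g. "22.0.0-TESTING1".

    A bare "-SNAPSHOT" suffix is rejected because it has a specific
    meaning to Gradle; "-SNAPSHOT1", "-TESTING1", etc. are fine.
    """
    match = re.search(r"-([A-Za-z0-9]+)$", version)
    return bool(match) and match.group(1) != "SNAPSHOT"

print(is_safe_local_suffix("22.0.0-TESTING1"))  # True: a custom identifier
print(is_safe_local_suffix("24.0.0-SNAPSHOT"))  # False: special meaning to Gradle
```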

iOS bindings

The Glean SDK contains the Swift bindings for use by iOS applications. It makes use of the underlying Rust component with a Swift-specific API on top. It also includes integrations into the iOS platform.

Setup the iOS Build Environment

Prepare your build environment

  1. Install Xcode 12.4
  2. Install Carthage: brew install carthage
  3. Ensure you have Python 3 installed: brew install python
  4. Install linting and formatting tools: brew install swiftlint
  5. Run bin/bootstrap.sh to download dependencies.
  6. (Optional, only required for building on the CLI) Install xcpretty: gem install xcpretty

Setting up Rust

Rust can be installed using rustup, with the following commands:

  • curl https://sh.rustup.rs -sSf | sh
  • rustup update

Platform specific toolchains need to be installed for Rust. This can be done using the rustup target command. In order to enable building to real devices and iOS emulators, the following targets need to be installed:

rustup target add aarch64-apple-ios x86_64-apple-ios

Install helper tools:

cargo install cargo-lipo

Building

This should be relatively straightforward and painless:

  1. Ensure your repository is up-to-date.
  2. Ensure Rust is up-to-date by running rustup update.
  3. Run a build using the command make build-swift
    • To run tests use make test-swift

The above can be skipped if using Xcode. The project directory can be imported and all the build details can be left to the IDE.

Debug an iOS application against different builds of Glean

At times it may be necessary to debug against a local build of Glean or another git fork or branch in order to test new features or specific versions of Glean.

Since Glean is consumed through Carthage, this can be as simple as modifying the Cartfile for the consuming application.

Building against the latest Glean

For consuming the latest version of Glean, the Cartfile contents would include the following line:

github "mozilla/glean" "main"

This will fetch and compile Glean from the mozilla/glean GitHub repository from the "main" branch.

Building against a specific release of Glean

For consuming a specific version of Glean, you can specify a branch name, tag name, or commit ID, like this:

github "mozilla/glean" "v0.0.1"

Here v0.0.1 is a tagged release name, but it could also be a branch name or a specific commit ID like 832b222.

If the custom Glean you wish to build from is a different fork on GitHub, you could simply modify the Cartfile to point at your fork like this:

github "myGitHubHandle/glean" "myBranchName"

Replace the myGitHubHandle and myBranchName with your GitHub handle and branch name, as appropriate.

Build from a locally cloned Glean

You can also use Carthage to build from a local clone by replacing the Cartfile line with the following:

git "file:///Users/yourname/path/to/glean" "localBranchName"

Notice that the initial Carthage command is now git instead of github, and that we need to use a file:// URL with the path to the locally cloned Glean.

Perform the Carthage update

One last thing not to forget: run the carthage update command in the directory containing your Cartfile, in order to fetch and build Glean.

Once that is done, your local application should be building against the version of Glean that you specified in the Cartfile.

Python bindings

The Glean SDK contains Python bindings for use in Python applications and test frameworks. It makes use of the underlying Rust component with a Python-specific API on top.

Setup the Python Build Environment

This document describes how to set up an environment for the development of the Glean Python bindings.

Instructions for installing a copy of the Glean Python bindings into your own environment for use in your project are described in adding Glean to your project.

Prerequisites

Glean requires Python 3.6 or later.

Make sure it is installed on your system and accessible on the PATH as python3.
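The prerequisite can be sanity-checked from any interpreter; this is a minimal sketch using only the standard library:

```python
import shutil
import sys

# Is a python3 executable reachable on the PATH?
python3_path = shutil.which("python3")
print("python3 on PATH:", python3_path)

# Does the running interpreter meet the minimum version?
print("version >= 3.6:", sys.version_info >= (3, 6))
```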

Setting up Rust

If you've already set up Rust for building Glean for Android or iOS, you already have everything you need and can skip this section.

Rust can be installed using rustup, with the following commands:

  • curl https://sh.rustup.rs -sSf | sh
  • rustup update

Create a virtual environment

It is recommended to do all Python development inside a virtual environment to make sure there are no interactions with other Python libraries installed on your system.

You may want to manage your own virtual environment if, for example, you are developing a library that is using Glean and you want to install Glean into it. If you are just developing Glean itself, however, Glean's Makefile will handle the creation and use of a virtual environment automatically (though the Makefile unfortunately does not work on Microsoft Windows).

The instructions below all have both the manual instructions and the Makefile shortcuts.

Manual method

The following instructions use the basic virtual environment functionality that comes in the Python standard library. Other higher level tools exist to manage virtual environments, such as pipenv and conda. These will work just as well, but are not documented here. Using an alternative tool would replace the instructions in this section only, but all other instructions below would remain the same.

To create a virtual environment, enter the following command, replacing <venv> with the path to the virtual environment you want to create. You will need to do this only once.

  $ python3 -m venv <venv>

Then activate the environment. You will need to do this for each new shell session when you want to develop Glean. The command to use depends on your shell environment.

  Platform   Shell             Command to activate virtual environment
  POSIX      bash/zsh          source <venv>/bin/activate
  POSIX      fish              . <venv>/bin/activate.fish
  POSIX      csh/tcsh          source <venv>/bin/activate.csh
  POSIX      PowerShell Core   <venv>/bin/Activate.ps1
  Windows    cmd.exe           <venv>\Scripts\activate.bat
  Windows    PowerShell        <venv>\Scripts\Activate.ps1

Lastly, install Glean's Python development dependencies into the virtual environment.

cd glean-core/python
pip install -r requirements_dev.txt

Makefile method

  $ make python-setup

The default location of the virtual environment used by the Makefile is glean-core/python/.venvX.Y, where X.Y is the version of Python in use. This makes it possible to build and test for multiple versions of Python in the same checkout.
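The default path can be reproduced in a couple of lines. This sketch only mirrors the naming scheme described above (the Makefile itself computes it in make syntax, not Python):

```python
import sys
from pathlib import Path

# .venvX.Y, where X.Y is the running interpreter's version, so that
# several Python versions can coexist in the same checkout.
version = f"{sys.version_info.major}.{sys.version_info.minor}"
default_venv = Path("glean-core") / "python" / f".venv{version}"
print(default_venv)  # e.g. glean-core/python/.venv3.9
```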

Note: If you wish to change the location of the virtual environment that the Makefile uses, pass the GLEAN_PYENV environment variable: make python-setup GLEAN_PYENV=mypyenv.

By default, the Makefile installs the latest version available of each of Glean's dependencies. If you wish to install the minimum required versions instead, (useful primarily to ensure that Glean doesn't unintentionally use newer APIs in its dependencies) pass the GLEAN_PYDEPS=min environment variable: make python-setup GLEAN_PYDEPS=min.

Build the Python bindings

Manual method

Building the Python bindings also builds the Rust shared object for the Glean SDK core.

  $ cd glean-core/python
  $ python setup.py build install

Makefile method

This will implicitly set up the Python environment, rebuild the Rust core and then build the Python bindings.

  $ make build-python

Running the Python unit tests

Manual method

Make sure the Python bindings are built, then:

  $ cd glean-core/python
  $ py.test

Makefile method

The following will run the Python unit tests using py.test:

  $ make test-python

You can send extra parameters to the py.test command by setting the PYTEST_ARGS variable:

  $ make test-python PYTEST_ARGS="-s --pdb"

Viewing logging output

Log messages (whether originating in Python or Rust) are emitted using the Python standard library's logging module. This module provides a lot of possibilities for customization, but the easiest way to control the log level globally is with logging.basicConfig:

import logging
logging.basicConfig(level=logging.DEBUG)
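basicConfig affects every logger in the process. If you want verbose Glean output without turning up logging for the rest of the application, you can target the logger hierarchy directly. This is a hedged sketch: the logger name "glean" is an assumption here, so confirm it against the bindings you are running:

```python
import logging

# Keep the application quiet overall ...
logging.basicConfig(level=logging.WARNING)

# ... but enable verbose output for the "glean" logger hierarchy only.
# Since both Python- and Rust-originated messages are emitted through
# the logging module, this covers log lines from the Rust core too.
logging.getLogger("glean").setLevel(logging.DEBUG)
```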

Linting, formatting and type checking

The Glean Python bindings use the following tools: flake8 for linting, black for formatting and mypy for type checking.

Manual method

  $ cd glean-core/python
  $ flake8 glean tests
  $ black glean tests
  $ mypy glean

Makefile method

To just check the lints:

  $ make pythonlint

To reformat the Python files in-place:

  $ make pythonfmt

Building the Python API docs

The Python API docs are built using pdoc3.

Manual method

  $ python -m pdoc --html glean --force -o build/docs/python

Makefile method

  $ make python-docs

Building wheels for Linux

Building on Linux using the above instructions will create Linux binaries that dynamically link against the version of libc installed on your machine. This generally will not be portable to other Linux distributions, and PyPI will not even allow it to be uploaded. In order to create wheels that can be installed on the broadest range of Linux distributions, the Python Packaging Authority's manylinux project maintains a Docker image for building compatible Linux wheels.

The CircleCI configuration handles making these wheels from tagged releases. If you need to reproduce this locally, see the CircleCI job pypi-linux-release for an example of how this Docker image is used in practice.

Building wheels for Windows

The official wheels for Windows are produced on a Linux virtual machine using the MinGW toolchain.

The CircleCI configuration handles making these wheels from tagged releases. If you need to reproduce this locally, see the CircleCI job pypi-windows-release for an example of how this is done in practice.

Rust component

The majority of the Glean SDK is implemented as a Rust component, to be usable across all platforms.

This includes:

  • Implementations of all metric types
  • A storage layer
  • A ping serializer

Rust documentation guidelines

The documentation for the Glean SDK Rust crates is automatically generated by the rustdoc tool. That means the documentation for our Rust code is written as doc comments throughout the code.

Here we outline some guidelines to be followed when writing these doc comments.

Module level documentation

Every module's index file must have a module level doc comment.

The module level documentation must start with a one-liner summary of the purpose of the module, the rest of the text is free form and optional.

Function and constants level documentation

Every public function / constant must have a dedicated doc comment.

Non public functions / constants are not required to, but may have doc comments. These comments should follow the same structure as public functions.

Note Private functions are not added to the generated documentation by default, but may be included by using the --document-private-items option.

Structure

Below is an example of the structure of a doc comment in one of the Glean SDK's public functions.


/// Collects and submits a ping for eventual uploading.
///
/// The ping content is assembled as soon as possible, but upload is not
/// guaranteed to happen immediately, as that depends on the upload policies.
///
/// If the ping currently contains no content, it will not be sent,
/// unless it is configured to be sent if empty.
///
/// # Arguments
///
/// * `ping` - The ping to submit
/// * `reason` - A reason code to include in the ping
///
/// # Returns
///
/// Whether the ping was successfully assembled and queued.
///
/// # Errors
///
/// If collecting or writing the ping to disk failed.
pub fn submit_ping(&self, ping: &PingType, reason: Option<&str>) -> Result<bool> {
    ...
}

Every doc comment must start with a one-liner summary of what that function is about

The text of this section should always be in the third person and describe an action.

If any additional information is necessary, that can be added following a new line after the summary.

Depending on the signature and behavior of the function, additional sections can be added to its doc comment

These sections are:

  1. # Arguments - A list of the function's arguments and a short description of each. Note that there is no need to point out the type of the given argument in the doc comment, since it is already outlined in the code.
  2. # Returns - A short summary of what this function returns. Note that there is no need to start this summary with "Returns..." since that is already the title of the section.
  3. # Errors - The possible paths that will or won't lead this function to return an error. It is at the discretion of the programmer to decide which cases are worth pointing out here.
  4. # Panics - The possible paths that will or won't lead this function to panic!. It is at the discretion of the programmer to decide which cases are worth pointing out here.
  5. # Safety - In case there are any safety concerns or guarantees, this is the section to point them out. It is at the discretion of the programmer to decide which cases are worth pointing out here.
  6. # Examples - If necessary, this section holds examples of usage of the function.

These sections should only be added to a doc comment when necessary, e.g. we only add an # Arguments section when a function has arguments.

Note The titles of these sections must be level-1 headers. Additional sections not listed here can be level-2 or below.

Link to all the things

If the doc comment refers to some external structure, either from the Glean SDK's code base or external, make sure to link to it.

Example:


/// Gets a [`PingType`] by name.
///
/// # Returns
///
/// The [`PingType`] of a ping if the given name was registered before, `None` otherwise.
///
/// [`PingType`]: metrics/struct.PingType.html
pub fn get_ping_by_name(&self, ping_name: &str) -> Option<&PingType> {
    ...
}

Note To avoid polluting the text with multiple links, prefer using reference links which allow for adding the actual links at the end of the text block.

Warnings

Important warnings about a function should be added, when necessary, before the one-liner summary in bold text.

Example:


/// **Test-only API (exported for FFI purposes).**
///
/// Deletes all stored metrics.
///
/// Note that this also includes the ping sequence numbers, so it has
/// the effect of resetting those to their initial values.
pub fn test_clear_all_stores(&self) {
    ...
}

Semantic line breaks

Prefer using semantic line breaks when breaking lines in any doc or non-doc comment.

References

These guidelines are largely based on the Rust API documentation guidelines.

Dependency Management Guidelines

This repository uses third-party code from a variety of sources, so we need to be mindful of how these dependencies will affect our consumers. Considerations include licensing, code quality, and the ongoing maintenance burden each dependency brings.

We're still evolving our policies in this area, but these are the guidelines we've developed so far.

Rust Code

Unlike Firefox, we do not vendor third-party source code directly into the repository. Instead we rely on Cargo.lock and its hash validation to ensure that each build uses an identical copy of all third-party crates. These are the measures we use for ongoing maintenance of our existing dependencies:

  • Check Cargo.lock into the repository.
  • Generate built artifacts using the --locked flag to cargo build, as an additional assurance that the existing Cargo.lock will be respected.
  • Use cargo-deny for a basic license-compatibility check as part of CI, to guard against human error.

Adding a new dependency, whether we like it or not, is a big deal - that dependency and everything it brings with it will become part of Firefox-branded products that we ship to end users. We try to balance this responsibility against the many benefits of using existing code, as follows:

  • In general, be conservative in adding new third-party dependencies.
    • For trivial functionality, consider just writing it yourself. Remember the cautionary tale of left-pad.
    • Check if we already have a crate in our dependency tree that can provide the needed functionality.
  • Prefer crates that already have a high level of due-diligence applied.
  • Check that it is clearly licensed and is MPL-2.0 compatible.
  • Take the time to investigate the crate's source and ensure it is suitably high-quality.
    • Be especially wary of uses of unsafe, or of code that is unusually resource-intensive to build.
    • Development dependencies do not require as much scrutiny as dependencies that will ship in consuming applications, but should still be given some thought.
      • There is still the potential for supply-chain compromise with development dependencies!
  • Explicitly describe your consideration of these points in the PR that introduces the new dependency.

Updating to new versions of existing dependencies is a normal part of software development and is not accompanied by any particular ceremony.

Dependency updates

We use Dependabot to automatically open pull requests when dependencies release a new version. All CI tasks run on those pull requests to ensure updates don't break anything.

As glean-core is now also vendored into mozilla-central, including all of its Rust dependencies, we need to be a bit careful about these updates. Landing upgrades of the Glean SDK itself in mozilla-central is a separate process. See Updating the Glean SDK in the Firefox Source Docs. Following are some guidelines to ensure compatibility with mozilla-central:

  • Patch releases of dependencies should be fine in most cases.
  • Minor releases should be compatible, but it is best to check the changelog of the updated dependency to make sure. These updates should only land in Cargo.lock; no updates in Cargo.toml are necessary.
  • Major releases will always need a check against mozilla-central. See Updating the Glean SDK.

In case of uncertainty, defer the decision to :janerik or :chutten.

Manual dependency updates

You can manually check for outdated dependencies using cargo-outdated. Individual crates can be updated to the latest version compatible with the requirement in Cargo.toml using:

cargo update -p $cratename

To update all transitive dependencies to the latest semver-compatible version use:

cargo update

Adding a new metric type

Data in the Glean SDK is stored in so-called metrics. You can find the full list of implemented metric types in the user overview.

Adding a new metric type involves defining the metric type's API, its persisted and in-memory storage as well as its serialization into the ping payload.

glean_parser

In order for your metric type to be usable, you must add it to glean_parser so that instances of your new metric can be instantiated and made available to users.

The documentation for how to do this should live in the glean_parser repository, but in short:

  • Your metric type must be added to the metrics schema.
  • Your metric type must be added as a type in the object model
  • Any new parameters outside of the common metric data must also be added to the schema, and be stored in the object model.
  • You must add tests.

The metric type's API

A metric type implementation is defined in its own file under glean-core/src/metrics/, e.g. glean-core/src/metrics/counter.rs for a Counter.

Start by defining a structure to hold the metric's metadata:

#[derive(Clone, Debug)]
pub struct CounterMetric {
    meta: CommonMetricData
}

Implement the MetricType trait to create a metric from the metadata as well as to expose that metadata. This also gives you a should_record method on the metric type.

impl MetricType for CounterMetric {
    fn meta(&self) -> &CommonMetricData {
        &self.meta
    }

    fn meta_mut(&mut self) -> &mut CommonMetricData {
        &mut self.meta
    }
}

The implementation should provide a way to create a new metric from the common metric data; this constructor is essentially the same for all metric types.

impl CounterMetric {
    pub fn new(meta: CommonMetricData) -> Self {
        Self { meta }
    }
}

Implement each method for the type. The first argument to accept should always be glean: &Glean, that is: a reference to the Glean object, used to access the storage:

impl CounterMetric { // same block as above
    pub fn add(&self, glean: &Glean, amount: i32) {
        // Always include this check!
        if !self.should_record() {
            return;
        }

        // Do error handling here

        glean
            .storage()
            .record_with(&self.meta, |old_value| match old_value {
                Some(Metric::Counter(old_value)) => Metric::Counter(old_value + amount),
                _ => Metric::Counter(amount),
            })
    }
}

Use glean.storage().record() to record a fixed value or glean.storage().record_with() to construct a new value from the currently stored one.

The storage operation makes use of the metric's variant of the Metric enumeration.

The Metric enumeration

Persistence and in-memory serialization as well as ping payload serialization are handled through the Metric enumeration. This is defined in glean-core/src/metrics/mod.rs. Variants of this enumeration are used in the storage implementation of the metric type.

To add a new metric type, include the metric module and declare its use, then add a new variant to the Metric enum:


mod counter;

// ...

pub use self::counter::CounterMetric;

#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum Metric {
    // ...
    Counter(i32),
}

Then modify the below implementation and define the right ping section name for the new type. This will be used in the ping payload:

impl Metric {
    pub fn ping_section(&self) -> &'static str {
        match self {
            // ...
            Metric::Counter(_) => "counter",
        }
    }
}

Finally, define the ping payload serialization (as JSON). In the simple cases where the in-memory representation maps to its JSON representation it is enough to call the json! macro.

impl Metric { // same block as above
    pub fn as_json(&self) -> JsonValue {
        match self {
            // ...
            Metric::Counter(c) => json!(c),
        }
    }
}

For more complex serialization consider implementing serialization logic as a function returning a serde_json::Value or another object that can be serialized.

For example, the DateTime serializer has the following entry, where get_iso_time_string is a function to convert from the DateTime metric representation to a string:

Metric::Datetime(d, time_unit) => json!(get_iso_time_string(*d, *time_unit)),

Documentation

Documentation for the new metric type must be added to the user book.

  • Add a new file for your new metric in docs/user/user/metrics/. Its contents should follow the form and content of the other examples in that folder.
  • Reference that file in docs/user/SUMMARY.md so it will be included in the build.
  • Follow the Documentation Contribution Guide.

You must also update the payload documentation with how the metric looks in the payload.

Tests

Tests are written in the Language Bindings and tend to just cover basic functionality:

  • The metric returns the correct value when it has no value
  • The metric correctly reports errors
  • The metric returns the correct value when it has a value
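The three cases above can be sketched with a self-contained toy counter (a stand-in for illustration only, not the real bindings' API):

```python
class ToyCounter:
    """A stand-in for a counter metric, used only to illustrate
    the shape of the basic language-binding tests."""

    def __init__(self):
        self._value = None
        self.errors = 0  # stands in for Glean's recorded error count

    def add(self, amount=1):
        if amount <= 0:
            # Invalid input is recorded as an error, not raised.
            self.errors += 1
            return
        self._value = (self._value or 0) + amount

    def test_has_value(self):
        return self._value is not None

    def test_get_value(self):
        return self._value


counter = ToyCounter()

# 1. The metric has no value before anything is recorded.
assert not counter.test_has_value()

# 2. Invalid input is reported as an error and records nothing.
counter.add(0)
assert counter.errors == 1 and not counter.test_has_value()

# 3. Valid input accumulates into the correct value.
counter.add(2)
counter.add(3)
assert counter.test_get_value() == 5
```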

In the next step we will create the FFI wrapper and platform-specific wrappers.

Adding a new metric type - FFI layer

In order to use a new metric type over the FFI layer, it needs implementations in the FFI component.

FFI component

The FFI component implementation can be found in glean-core/ffi/src. Each metric type is implemented in its own module.

Add a new file named after your metric, e.g. glean-core/ffi/src/counter.rs, and declare it in glean-core/ffi/src/lib.rs with mod counter;.

In the metric type module, define your metric type using the define_metric macro. This allows referencing the metric name and defines the global map as well as some common functions such as the constructor and destructor. Simple operations can also be defined in the same macro invocation:

use crate::{define_metric, handlemap_ext::HandleMapExtension, GLEAN};

define_metric!(CounterMetric => COUNTER_METRICS {
    new           -> glean_new_counter_metric(),
    destroy       -> glean_destroy_counter_metric,

    add -> glean_counter_add(amount: i32),
});

More complex operations need to be defined as plain functions. For example, the test helper function for a counter metric can be defined as:

#[no_mangle]
pub extern "C" fn glean_counter_test_has_value(
    glean_handle: u64,
    metric_id: u64,
    storage_name: FfiStr,
) -> u8 {
    GLEAN.call_infallible(glean_handle, |glean| {
        COUNTER_METRICS.call_infallible(metric_id, |metric| {
            metric
                .test_get_value(glean, storage_name.as_str())
                .is_some()
        })
    })
}

Adding a new metric type - Kotlin

FFI

The platform-specific FFI wrapper needs the definitions of these new functions. For Kotlin this is in glean-core/android/src/main/java/mozilla/telemetry/glean/rust/LibGleanFFI.kt:

fun glean_new_counter_metric(category: String, name: String, send_in_pings: StringArray, send_in_pings_len: Int, lifetime: Int, disabled: Byte): Long
fun glean_destroy_counter_metric(handle: Long)
fun glean_counter_add(glean_handle: Long, metric_id: Long, amount: Int)

Kotlin API

Finally, create a platform-specific metric type wrapper. For Kotlin this would be glean-core/android/src/main/java/mozilla/telemetry/glean/private/CounterMetricType.kt:

class CounterMetricType(
    private var handle: Long,
    private val disabled: Boolean,
    private val sendInPings: List<String>
) {
    /**
     * The public constructor used by automatically generated metrics.
     */
    constructor(
        disabled: Boolean,
        category: String,
        lifetime: Lifetime,
        name: String,
        sendInPings: List<String>
    ) : this(handle = 0, disabled = disabled, sendInPings = sendInPings) {
        val ffiPingsList = StringArray(sendInPings.toTypedArray(), "utf-8")
        this.handle = LibGleanFFI.INSTANCE.glean_new_counter_metric(
                category = category,
                name = name,
                send_in_pings = ffiPingsList,
                send_in_pings_len = sendInPings.size,
                lifetime = lifetime.ordinal,
                disabled = disabled.toByte())
    }

    fun add(amount: Int = 1) {
        if (disabled) {
            return
        }

        @Suppress("EXPERIMENTAL_API_USAGE")
        Dispatchers.API.launch {
            LibGleanFFI.INSTANCE.glean_counter_add(
                Glean.handle,
                this@CounterMetricType.handle,
                amount)
        }
    }

    @VisibleForTesting(otherwise = VisibleForTesting.NONE)
    fun testHasValue(pingName: String = sendInPings.first()): Boolean {
        @Suppress("EXPERIMENTAL_API_USAGE")
        Dispatchers.API.assertInTestingMode()

        val res = LibGleanFFI.INSTANCE.glean_counter_test_has_value(Glean.handle, this.handle, pingName)
        return res.toBoolean()
    }
}

Adding a new metric type - Swift

FFI

Swift can re-use the generated C header file. Re-generate it with

make cbindgen

Swift API

Finally, create a platform-specific metric type wrapper. For Swift this would be glean-core/ios/Glean/Metrics/CounterMetric.swift:

public class CounterMetricType {
    let handle: UInt64
    let disabled: Bool
    let sendInPings: [String]

    /// The public constructor used by automatically generated metrics.
    public init(category: String, name: String, sendInPings: [String], lifetime: Lifetime, disabled: Bool) {
        self.disabled = disabled
        self.sendInPings = sendInPings
        self.handle = withArrayOfCStrings(sendInPings) { pingArray in
            glean_new_counter_metric(
                category,
                name,
                pingArray,
                Int32(sendInPings.count),
                lifetime.rawValue,
                disabled.toByte()
            )
        }
    }

    public func add(amount: Int32 = 1) {
        guard !self.disabled else { return }

        _ = Dispatchers.shared.launch {
            glean_counter_add(Glean.shared.handle, self.handle, amount)
        }
    }

    func testHasValue(_ pingName: String? = nil) -> Bool {
        let pingName = pingName ?? self.sendInPings[0]
        return glean_counter_test_has_value(Glean.shared.handle, self.handle, pingName) != 0
    }

    func testGetValue(_ pingName: String? = nil) throws -> Int32 {
        let pingName = pingName ?? self.sendInPings[0]

        if !testHasValue(pingName) {
            throw "Missing value"
        }

        return glean_counter_test_get_value(Glean.shared.handle, self.handle, pingName)
    }
}

Adding a new metric type - Python

FFI

Python can use the generated C header file directly, through cffi. Re-generate it with

make cbindgen

Python API

Finally, create a platform-specific metric type wrapper. For Python this would be glean-core/python/glean/metrics/counter.py:

class CounterMetricType:
    """
    This implements the developer facing API for recording counter metrics.

    Instances of this class type are automatically generated by
    `glean.load_metrics`, allowing developers to record values that were
    previously registered in the metrics.yaml file.

    The counter API only exposes the `CounterMetricType.add` method, which
    takes care of validating the input data and making sure that limits are
    enforced.
    """

    def __init__(
        self,
        disabled: bool,
        category: str,
        lifetime: Lifetime,
        name: str,
        send_in_pings: List[str],
    ):
        self._disabled = disabled
        self._send_in_pings = send_in_pings

        self._handle = _ffi.lib.glean_new_counter_metric(
            _ffi.ffi_encode_string(category),
            _ffi.ffi_encode_string(name),
            _ffi.ffi_encode_vec_string(send_in_pings),
            len(send_in_pings),
            lifetime.value,
            disabled,
        )

    def __del__(self):
        if getattr(self, "_handle", 0) != 0:
            _ffi.lib.glean_destroy_counter_metric(self._handle)

    def add(self, amount: int = 1):
        """
        Add to counter value.

        Args:
            amount (int): (default: 1) This is the amount to increment the
                counter by.
        """
        if self._disabled:
            return

        @Dispatcher.launch
        def add():
            _ffi.lib.glean_counter_add(self._handle, amount)

    def test_has_value(self, ping_name: Optional[str] = None) -> bool:
        """
        Tests whether a value is stored for the metric for testing purposes
        only.

        Args:
            ping_name (str): (default: first value in send_in_pings) The name
                of the ping to retrieve the metric for.

        Returns:
            has_value (bool): True if the metric value exists.
        """
        if ping_name is None:
            ping_name = self._send_in_pings[0]

        return bool(
            _ffi.lib.glean_counter_test_has_value(
                self._handle, _ffi.ffi_encode_string(ping_name)
            )
        )

    def test_get_value(self, ping_name: Optional[str] = None) -> int:
        """
        Returns the stored value for testing purposes only.

        Args:
            ping_name (str): (default: first value in send_in_pings) The name
                of the ping to retrieve the metric for.

        Returns:
            value (int): value of the stored metric.
        """
        if ping_name is None:
            ping_name = self._send_in_pings[0]

        if not self.test_has_value(ping_name):
            raise ValueError("metric has no value")

        return _ffi.lib.glean_counter_test_get_value(
            self._handle, _ffi.ffi_encode_string(ping_name)
        )

    def test_get_num_recorded_errors(
        self, error_type: ErrorType, ping_name: Optional[str] = None
    ) -> int:
        """
        Returns the number of errors recorded for the given metric.

        Args:
            error_type (ErrorType): The type of error recorded.
            ping_name (str): (default: first value in send_in_pings) The name
                of the ping to retrieve the metric for.

        Returns:
            num_errors (int): The number of errors recorded for the metric for
                the given error type.
        """
        if ping_name is None:
            ping_name = self._send_in_pings[0]

        return _ffi.lib.glean_counter_test_get_num_recorded_errors(
            self._handle, error_type.value, _ffi.ffi_encode_string(ping_name),
        )

The new metric type also needs to be imported and re-exported in glean-core/python/glean/metrics/__init__.py:

from .counter import CounterMetricType


__all__ = [
    "CounterMetricType",
    # ...
]

It also must be added to the _TYPE_MAPPING in glean-core/python/glean/_loader.py:

_TYPE_MAPPING = {
    "counter": metrics.CounterMetricType,
    # ...
}

Adding a new metric type - Rust

Trait

To ensure the API is stable across Rust consumers and re-exporters (like FOG), you must define a Trait for the new metric in glean-core/src/traits. First, add your metric type to mod.rs:


mod counter;
...
pub use self::counter::Counter;

Then add the trait in e.g. counter.rs.

The trait includes test_get_num_recorded_errors and any metric operation included in the metric type's API (except new). The idea here is to only include operations that make sense for Rust consumers. If there are internal-only or language-specific APIs on the underlying metric, feel free to not include them on the trait.

Spend some time on the comments. These will be the dev-facing API docs for Rust consumers.

Rust Language Binding (RLB) Type

The Rust Language Binding supplies the implementation of the trait (mostly by delegating to the glean-core implementation) and adds a layer of ordering and safety using the dispatcher. You can find the RLB metric implementations in glean-core/rlb/src/private.

First, add your metric type to mod.rs:


mod counter;
...
pub use counter::Counter;

Then add the type in e.g. counter.rs.

Note that in counter.rs the (internal) new and add_sync are on the struct's impl, not the trait's.

Note there are no API comments on the trait's impl.

If your metric type has locked internal mutability (like TimingDistributionMetric's RwLock), you must always take the metric lock and the Glean lock in the same order.

Tests

Add at least a "smoke test" (a simple confirmation of the API's behavior) to the RLB implementation.

Documentation

Don't forget to document the new Rust API in the Book's page on the Metric Type (e.g. Counter). Add a tab for Rust, and mimic any other language's example.

Adding a new metric type - data platform

The data platform technically exists outside of the Glean SDK. However, the Glean-specific steps for adding a new Glean metric type to the data platform are documented here for convenience.

Adding a new metric type to mozilla-pipeline-schemas

The mozilla-pipeline-schemas repository contains JSON schemas that are used to validate the Glean ping payload when it reaches the ingestion server. These schemas are written using a simple custom templating system that gives more structure and organization to the schemas.

Each individual metric type has its own file in templates/include/glean. For example, here is the schema for the Counter metric type in counter.1.schema.json:

{
  "type": "integer"
}

Add a new file for your new metric type in that directory, containing the JSON schema snippet to validate it. A good resource for learning about JSON schema and the validation keywords that are available is Understanding JSON Schema.

A reference to this template needs to be added to the main Glean schema in templates/include/glean/glean.1.schema.json. For example, the snippet to include the template for the counter metric type is:

        "counter": {
          @GLEAN_BASE_OBJECT_1_JSON@,
          "additionalProperties": @GLEAN_COUNTER_1_JSON@
        },

If adding a labeled metric type as well, the same template from the "core" metric type can be reused:

        "labeled_counter": {
          @GLEAN_BASE_OBJECT_1_JSON@,
          "additionalProperties": {
            @GLEAN_LABELED_GROUP_1_JSON@,
            "additionalProperties": @GLEAN_COUNTER_1_JSON@
          }
        },

After updating the templates, you need to regenerate the fully-qualified schema using these instructions.

The fully-qualified Glean schema is also used by the Glean SDK's unit test suite to make sure that ping payloads validate against the schema. Therefore, whenever the Glean JSON schema is updated, it should also be copied and checked in to the Glean SDK repository. Specifically, copy the generated schema in mozilla-pipeline-schemas/schemas/glean/glean.1.schema.json to the root of the Glean SDK repository.

Adding a new metric type to mozilla-schema-generator

Each new metric type also needs an entry in the Glean configuration in mozilla-schema-generator. The config file for Glean is in glean.yaml. Each entry in that file just needs some boilerplate for each metric type. For example, the snippet for the Counter metric type is:

  counter:
    match:
      send_in_pings:
        not:
          - glean_ping_info
      type: counter

FFI layer

The core part of the Glean SDK provides an FFI layer for its API. This FFI layer can be consumed by platform-specific bindings, such as the Kotlin bindings for Android.

When to use what method of passing data between Rust and Java/Swift

There are a bunch of options here. For the purposes of our discussion, there are two kinds of values you may want to pass over the FFI.

  1. Types with identity (includes stateful types, resource types, or anything that isn't really serializable).
  2. Plain ol' data.

Types with identity

Examples of this are all metric type implementations, e.g. StringMetric. These types are complex, are implemented in Rust, and make use of the global Glean singleton on the Rust side. They have an equivalent on the wrapper side (Kotlin/Swift/Python) that forwards calls through the FFI.

They all follow the same pattern:

A ConcurrentHandleMap stores all instances of the same type, and a handle is passed back and forth: a u64 from Rust, a Long from Kotlin, a UInt64 from Swift, or an equivalent type in other languages.

This is recommended for most cases, as it's the hardest to mess up. Additionally, for types T such that &T: Sync + Send, or where you need to call &mut self methods, this is the safest choice.

Additionally, this will ensure panic-safety, as it will poison the internal Mutex, making further access impossible.

The ffi_support::handle_map docs are good, and under ConcurrentHandleMap include an example of how to set this up.
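The handle pattern can be sketched in Python. This `HandleMap` is a hypothetical, simplified stand-in for ffi_support's `ConcurrentHandleMap`, shown only to illustrate the idea of objects living behind opaque integer handles:

```python
import itertools
import threading

class HandleMap:
    """Minimal sketch of a handle map: objects are stored behind opaque
    integer handles, so only the integer ever crosses the FFI boundary."""

    def __init__(self):
        self._lock = threading.Lock()
        self._next = itertools.count(1)  # 0 is reserved as "no handle"
        self._objects = {}

    def insert(self, obj) -> int:
        with self._lock:
            handle = next(self._next)
            self._objects[handle] = obj
            return handle

    def get(self, handle: int):
        with self._lock:
            return self._objects[handle]

    def delete(self, handle: int) -> None:
        with self._lock:
            del self._objects[handle]

# The wrapper side only ever holds the integer handle.
metrics = HandleMap()
handle = metrics.insert({"type": "counter", "value": 0})
metrics.get(handle)["value"] += 1
```

The real implementation adds versioned handles and poisoning on panic; the sketch keeps only the lock-plus-map core.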

Plain Old Data

This includes primitive values, strings, and arrays or arbitrarily nested structures containing them.

Primitives

Numeric primitives are the easiest to pass between languages. The main recommendation is: use the equivalent and same-sized type as the one provided by Rust.

There are a couple of exceptions/caveats, especially for Kotlin. All of them are caused by JNA/Android issues. Swift has very good support for calling over the FFI.

  1. bool: Don't use it. JNA doesn't handle it well. Instead, use a numeric type (like u8) with 0 for false and 1 for true for interchange over the FFI, converting back to Kotlin's Boolean or Swift's Bool afterwards (so as not to expose this somewhat annoying limitation in our public API). All wrappers already include utility functions to turn 8-bit integers (u8) back into booleans (toBool() or equivalent methods).

  2. usize/isize: These cause the structure size to differ between platforms. JNA does handle this if you use NativeSize, but it's awkward. Use size-defined integers instead, such as i64/i32 and their language equivalents (Kotlin: Long/Int, Swift: Int64/Int32). Caution: in Kotlin, integers are signed by default. You can use u64/u32 for Long/Int if you ensure the values are non-negative through asserts or error-reporting code.

  3. char: Avoid these if possible. If you really have a use case consider u32 instead.

    If you do this, you should probably be aware of the fact that Java chars are 16 bit, and Swift Characters are actually strings (they represent Extended Grapheme Clusters, not codepoints).
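The boolean convention above can be shown with a pair of helpers. These function names are illustrative, not the wrappers' actual API (the real wrappers ship equivalents such as toByte() and toBool()):

```python
def bool_to_u8(value: bool) -> int:
    # Booleans cross the FFI as 8-bit integers: 1 for true, 0 for false.
    return 1 if value else 0

def u8_to_bool(value: int) -> bool:
    # Anything non-zero is treated as true on the way back.
    return value != 0

assert bool_to_u8(True) == 1
assert u8_to_bool(0) is False
```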

Strings

These we pass as null-terminated UTF-8 C-strings.

For return values, use *mut c_char; for input, use ffi_support::FfiStr.

  1. If the string is returned from Rust to Kotlin/Swift, you need to expose a string destructor from your FFI crate (see ffi_support::define_string_destructor!).

    For converting to a *mut c_char, use either rust_string_to_c if you have a String, or opt_rust_string_to_c for Option<String> (None becomes std::ptr::null_mut()).

    Important: In Kotlin, the type returned by a function that produces this must be Pointer, and not String, and the parameter that the destructor takes as input must also be Pointer.

    Using String will almost work. JNA will convert the return value to String automatically, leaking the value Rust provides. Then, when passing to the destructor, it will allocate a temporary buffer, pass it to Rust, which we'll free, corrupting both heaps 💥. Oops!

  2. If the string is passed into Rust from Kotlin/Swift, the Rust code should declare the parameter as an FfiStr<'_>, and things should then work more or less automatically. FfiStr has methods for extracting its data as &str, Option<&str>, String, and Option<String>.
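The string interchange format itself (null-terminated UTF-8) can be sketched without any FFI machinery; these helper names are hypothetical:

```python
def to_c_string(s: str) -> bytes:
    # Encode as UTF-8 and null-terminate, the layout expected across the FFI.
    return s.encode("utf-8") + b"\x00"

def from_c_string(buf: bytes) -> str:
    # Read up to the first null byte and decode as UTF-8.
    return buf.split(b"\x00", 1)[0].decode("utf-8")

raw = to_c_string("café")
assert from_c_string(raw) == "café"
```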

Aggregates

This is any type that's more complex than a primitive or a string (arrays, structures, and combinations thereof). There are two options we recommend for these cases:

  1. Passing data as JSON. This is very easy and useful for prototyping, but is much slower and requires a great deal of copying and redundant encode/decode steps (in general, the data will be copied at least 4 times to make this work, and almost certainly more in practice). It can be done relatively easily by derive(Serialize, Deserialize), and converting to a JSON string using serde_json::to_string.

    This is a viable option for test-only functions, where the overhead is not important.

  2. Use repr(C) structures and copy the data across the boundary, carefully replicating the same structure on the wrapper side. In Kotlin this will require @Structure.FieldOrder annotations. Swift can directly handle C types.

    Caution: This is error prone! Structures, enumerations and unions need to be kept the same across all layers (Rust, generated C header, Kotlin, Swift, ...). Be extra careful, avoid adding references to structures and ensure the structures are correctly freed inside Rust. Copy out the data and convert into language-appropriate types (e.g. convert *mut c_char into Swift/Kotlin strings) as soon as possible.
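Option 1 (JSON interchange) looks like this on the wrapper side. The payload value here is a made-up example, standing in for whatever the Rust side serialized with serde_json:

```python
import json

# Hypothetical value arriving from Rust: a nested structure that was
# serialized to a JSON string with serde_json::to_string.
payload_from_rust = json.dumps({"sum": 4612, "values": {"1024": 2, "1116": 1}})

# The wrapper decodes the JSON string it received over the FFI into
# language-native types. Easy, but it copies the data several times.
decoded = json.loads(payload_from_rust)
assert decoded["sum"] == 4612
```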

Internal documentation

This chapter describes aspects of Glean that are internal implementation details.

NOTE: As implementation details, this information is subject to change at any time. External users should not rely on this information. It is provided as a reference for contributing to Glean only.

This includes:

Reserved ping names

The Glean SDK reserves all ping names in send_in_pings starting with glean_.

This currently includes, but is not limited to:

  • glean_client_info: metrics sent with this ping are added to every ping in its client_info section;
  • glean_internal_info: metrics sent with this ping are added to every ping in its ping_info section.

Additionally, only Glean may specify all-pings. This special value has no effect in the client, but indicates to the backend infrastructure that a metric may appear in any ping.

Clearing metrics when disabling/enabling Glean

When disabling upload (Glean.setUploadEnabled(false)), metrics are also cleared to prevent their storage on the local device, and lessen the likelihood of accidentally sending them. There are a couple of exceptions to this:

  • first_run_date and first_run_hour are retained so they aren't reset if metrics are later re-enabled.

  • client_id is set to the special value "c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0". This should make it possible to detect the error case when metrics are sent from a client that has been disabled.

When re-enabling metrics:

  • first_run_date and first_run_hour are left as-is. (They should remain a correct time of first run of the application, unaffected by disabling/enabling the Glean SDK).

  • The client_id is set to a newly-generated random UUID. It has no connection to the client_id used prior to disabling the Glean SDK.

  • Application lifetime metrics owned by Glean are regenerated from scratch so that they will appear in subsequent pings. This is the same process that happens during every startup of the application when the Glean SDK is enabled. The application is also responsible for setting any application lifetime metrics that it manages at this time.

  • Ping lifetime metrics do not need special handling. They will begin recording again after metric uploading is re-enabled.
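The sentinel client_id makes the disabled-client error case detectable downstream. A sketch of such a check (the constant comes from the text above; the function is hypothetical):

```python
KNOWN_CLIENT_ID = "c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0"

def is_disabled_client(client_id: str) -> bool:
    # Pings carrying this sentinel client_id come from a client whose
    # upload was disabled; seeing it in received data signals an error case.
    return client_id == KNOWN_CLIENT_ID

assert is_disabled_client("c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0")
```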

Payload format

The main sections of a Glean ping are described in Ping Sections. This Payload format chapter describes details of the ping payload that are relevant for decoding Glean pings in the pipeline.

NOTE: The payload format is an implementation detail of the Glean SDK and subject to change at any time. External users should not rely on this information. It is provided as a reference for contributing to Glean only.

JSON Schema

Glean's ping payloads have a formal JSON schema defined in the mozilla-pipeline-schemas project. It is written as a set of templates that are expanded by the mozilla-pipeline-schemas build infrastructure into a fully expanded schema.

Metric types

Boolean

A Boolean is represented by its boolean value.

Example

true

Counter

A Counter is represented by its integer value.

Example

17

Quantity

A Quantity is represented by its integer value.

Example

42

String

A String is represented by its string value.

Example

"sample string"

String list

A String List is represented as an array of strings.

["sample string", "another one"]

Timespan

A Timespan is represented as an object containing its duration as an integer and the time unit.

| Field name | Type | Description |
| ---------- | ---- | ----------- |
| value | Integer | The value in the marked time unit. |
| time_unit | String | The time unit; see the timespan's configuration for valid values. |

Example

{
    "time_unit": "milliseconds",
    "value": 10
}

Timing Distribution

A Timing distribution is represented as an object with the following fields.

| Field name | Type | Description |
| ---------- | ---- | ----------- |
| sum | Integer | The sum of all recorded values. |
| values | Map<String, Integer> | The values in each bucket. The key is the minimum value for the range of that bucket. |

A contiguous range of buckets is always sent, so that the server can aggregate and visualize distributions, without knowing anything about the specific bucketing function used. This range starts with the first bucket with a non-zero accumulation, and ends at one bucket beyond the last bucket with a non-zero accumulation (so that the upper bound on the last bucket is retained).

For example, the following shows the recorded values vs. what is sent in the payload.

recorded:  1024: 2, 1116: 1,                   1448: 1,
sent:      1024: 2, 1116: 1, 1217: 0, 1327: 0, 1448: 1, 1579: 0

Example:

{
    "sum": 4612,
    "values": {
        "1024": 2,
        "1116": 1,
        "1217": 0,
        "1327": 0,
        "1448": 1,
        "1579": 0
    }
}
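The range-filling rule can be sketched in Python. The bucket minimums are supplied by hand here, whereas the SDK derives them from its bucketing function; the helper name is illustrative:

```python
def contiguous_buckets(recorded, bucket_minimums):
    """Produce the sent range per the rule above: from the first bucket
    with a non-zero count to one bucket beyond the last non-zero one,
    with zeros filled in between."""
    nonzero = [i for i, b in enumerate(bucket_minimums) if recorded.get(b, 0) > 0]
    if not nonzero:
        return {}
    start = nonzero[0]
    end = min(nonzero[-1] + 1, len(bucket_minimums) - 1)  # one past the last non-zero
    return {b: recorded.get(b, 0) for b in bucket_minimums[start:end + 1]}

# Reproduces the recorded-vs-sent example from the text.
buckets = [1024, 1116, 1217, 1327, 1448, 1579, 1722]
sent = contiguous_buckets({1024: 2, 1116: 1, 1448: 1}, buckets)
assert sent == {1024: 2, 1116: 1, 1217: 0, 1327: 0, 1448: 1, 1579: 0}
```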

Memory Distribution

A Memory distribution is represented as an object with the following fields.

| Field name | Type | Description |
| ---------- | ---- | ----------- |
| sum | Integer | The sum of all recorded values. |
| values | Map<String, Integer> | The values in each bucket. The key is the minimum value for the range of that bucket. |

A contiguous range of buckets is always sent. See timing distribution for more details.

Example:

{
    "sum": 3,
    "values": {
        "0": 1,
        "1": 3
    }
}

UUID

A UUID is represented by the string representation of the UUID.

Example

"29711dc8-a954-11e9-898a-eb4ea7e8fd3f"

Datetime

A Datetime is represented by its ISO8601 string representation, truncated to the metric's time unit. It always includes the timezone offset.

Example

"2019-07-18T14:06:00.000+02:00"
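Truncation to the time unit can be sketched with a small formatter. This helper is illustrative (only a few units shown), not the SDK's actual serialization code:

```python
from datetime import datetime, timedelta, timezone

def format_datetime(dt: datetime, time_unit: str) -> str:
    """ISO8601, truncated to the metric's time unit, always carrying
    the timezone offset."""
    offset = dt.strftime("%z")
    offset = offset[:3] + ":" + offset[3:]  # "+0200" -> "+02:00"
    if time_unit == "millisecond":
        base = dt.strftime("%Y-%m-%dT%H:%M:%S") + f".{dt.microsecond // 1000:03d}"
    elif time_unit == "minute":
        base = dt.strftime("%Y-%m-%dT%H:%M")
    else:  # "second"
        base = dt.strftime("%Y-%m-%dT%H:%M:%S")
    return base + offset

# Reproduces the example above.
tz = timezone(timedelta(hours=2))
dt = datetime(2019, 7, 18, 14, 6, 0, 0, tzinfo=tz)
assert format_datetime(dt, "millisecond") == "2019-07-18T14:06:00.000+02:00"
```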

Event

Events are represented as an array of objects, with one object for each event. Each event object has the following keys:

| Field name | Type | Description |
| ---------- | ---- | ----------- |
| timestamp | Integer | A monotonically increasing timestamp value, in milliseconds. To avoid leaking absolute times, the first timestamp in the array is always zero, and subsequent timestamps in the array are relative to that reference point. |
| category | String | The event's category. This comes directly from the category under which the metric was defined in the metrics.yaml file. |
| name | String | The event's name, as defined in the metrics.yaml file. |
| extra | Object (optional) | Extra data associated with the event. Both the keys and values of this object are strings. The keys must be from the set defined for this event in the metrics.yaml file. The values have a maximum length of 50 bytes, when encoded as UTF-8. |

Example

[
  {
    "timestamp": 0,
    "category": "app",
    "name": "ss_menu_opened"
  },
  {
    "timestamp": 124,
    "category": "search",
    "name": "performed_search",
    "extra": {
      "source": "default.action"
    }
  }
]

Also see the JSON schema for events.

To avoid losing events when the application is killed by the operating system, events are queued on disk as they are recorded. When the application starts up again, there is no good way to determine if the device has rebooted since the last run, so any timestamps recorded in the new run cannot be guaranteed to be consistent with those recorded in the previous run. To get around this, on application startup, any queued events are immediately collected into pings and then cleared. These "startup-triggered pings" are likely to have a very short duration, as recorded in ping_info.start_time and ping_info.end_time (see the ping_info section). The maximum timestamp of the events in these pings is quite likely to exceed the duration of the ping, but this is to be expected.
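The relative-timestamp rule can be sketched as follows. The tuple-based input is a simplified stand-in for the real event records:

```python
def to_payload_events(events):
    """Make timestamps relative to the first event, so absolute times
    are never leaked in the payload. `events` is a list of
    (absolute_timestamp_ms, category, name) tuples."""
    if not events:
        return []
    first = events[0][0]
    return [
        {"timestamp": ts - first, "category": cat, "name": name}
        for ts, cat, name in events
    ]

payload = to_payload_events([
    (1000, "app", "ss_menu_opened"),
    (1124, "search", "performed_search"),
])
assert payload[0]["timestamp"] == 0
assert payload[1]["timestamp"] == 124
```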

Custom Distribution

A Custom distribution is represented as an object with the following fields.

| Field name | Type | Description |
| ---------- | ---- | ----------- |
| sum | Integer | The sum of all recorded values. |
| values | Map<String, Integer> | The values in each bucket. The key is the minimum value for the range of that bucket. |

A contiguous range of buckets is always sent, so that the server can aggregate and visualize distributions, without knowing anything about the specific bucketing function used. This range starts with the first bucket (as specified in the range_min parameter), and ends at one bucket beyond the last bucket with a non-zero accumulation (so that the upper bound on the last bucket is retained).

For example, suppose you had a custom distribution defined by the following parameters:

  • range_min: 10
  • range_max: 200
  • bucket_count: 80
  • histogram_type: 'linear'

The following shows the recorded values vs. what is sent in the payload.

recorded:        12: 2,                      22: 1
sent:     10: 0, 12: 2, 14: 0, 17: 0, 19: 0, 22: 1, 24: 0

Example:

{
    "sum": 3,
    "values": {
        "10": 0,
        "12": 2,
        "14": 0,
        "17": 0,
        "19": 0,
        "22": 1,
        "24": 0
    }
}

Labeled metrics

Currently several labeled metrics are supported:

All are represented at the top level in the same way: as an object mapping each label to the metric's value. See the individual metric types for details on the value payload:

Example for Labeled Counters

{
    "label1": 2,
    "label2": 17
}

Rate

A Rate is represented by its numerator and denominator.

Example

{
    "numerator": 22,
    "denominator": 7
}

Directory structure

This page describes the contents of the directories where Glean stores its data.

All Glean data is inside a single root directory with the name glean_data.

On Android, this directory lives inside the ApplicationInfo.dataDir directory associated with the application.

On iOS, this directory lives inside the Documents directory associated with the application.

For the Python bindings, if no directory is specified, the data is stored in a temporary directory and cleared at exit.

Within the glean_data directory are the following contents:

  • db: Contains the rkv database used to persist ping and user lifetime metrics.

  • events: Contains flat files containing persisted events before they are collected into pings.

  • pending_pings: Pings are written here before they are picked up by the ping uploader to send to the submission endpoint.

  • deletion_request: The deletion-request ping is written here before it is picked up by the ping uploader. This directory is separate from the pending_pings directory above, in order for an uploader to pick up only deletion-request pings and send them after general upload is disabled.

  • tmp: Pings are written here and then moved to the pending_pings directory when finished, to make sure that partially written pings do not get queued for sending.
    (The standard system temporary directory is not used for this because it is not guaranteed to be on the same volume as the glean_data directory on Android).
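The tmp-then-rename dance can be sketched as follows. The function name and layout helpers are illustrative; only the write-fully-then-rename pattern is the point:

```python
import os
import tempfile

def write_ping(data_dir: str, name: str, contents: str) -> str:
    """Write a ping fully under tmp, then rename it into pending_pings
    so the uploader never observes a partially written file. Both
    directories live under glean_data, so the rename stays on one
    volume and is atomic."""
    tmp_dir = os.path.join(data_dir, "tmp")
    pending = os.path.join(data_dir, "pending_pings")
    os.makedirs(tmp_dir, exist_ok=True)
    os.makedirs(pending, exist_ok=True)
    tmp_path = os.path.join(tmp_dir, name)
    with open(tmp_path, "w") as f:
        f.write(contents)
    final_path = os.path.join(pending, name)
    os.replace(tmp_path, final_path)  # atomic on the same filesystem
    return final_path

root = tempfile.mkdtemp()
path = write_ping(os.path.join(root, "glean_data"), "ping1", "{}")
assert os.path.exists(path)
```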

File format

For persistence, assembled ping payloads are stored as files on disk in the directories above. At initialization, the Glean SDK reads any pending ping files and queues them for eventual upload. The files are not meant to be read by outside consumers, and their format may change. The Glean SDK will be able to read previous formats if necessary.

The current format has the following newline-delimited lines:

<ping submission path>\n
<json-encoded ping payload>\n
<json-encoded metadata>\n
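A parser for this format can be sketched in a few lines. Treating the metadata line as optional is an assumption for robustness here, not part of the stated format:

```python
import json

def parse_ping_file(contents: str):
    """Parse the newline-delimited ping file format described above:
    submission path, JSON payload, JSON metadata."""
    lines = contents.splitlines()
    path = lines[0]
    payload = json.loads(lines[1])
    # Assumption: tolerate a missing/empty metadata line.
    metadata = json.loads(lines[2]) if len(lines) > 2 and lines[2] else None
    return path, payload, metadata

raw = '/submit/app-id/baseline/1/1234\n{"ping_info": {}}\n{"headers": {}}\n'
path, payload, metadata = parse_ping_file(raw)
assert path.startswith("/submit/")
assert metadata == {"headers": {}}
```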

Debug Pings

For debugging and testing purposes, Glean allows tagging pings, which are then available in the Debug Ping Viewer1.

Tagged pings are sent to the same endpoint as all other pings, with the addition of one HTTP header:

X-Debug-ID: <tag>

<tag> is an alphanumeric string with a maximum length of 20 characters, used to identify pings in the Debug Ping Viewer.
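Building and validating the header can be sketched as follows. The pattern is derived from the description above (alphanumeric, at most 20 characters); the helper name is hypothetical:

```python
import re

# Assumed from the description above: alphanumeric, 1 to 20 characters.
TAG_PATTERN = re.compile(r"^[a-zA-Z0-9]{1,20}$")

def debug_headers(tag: str) -> dict:
    """Build the extra HTTP header used to tag a debug ping."""
    if not TAG_PATTERN.match(tag):
        raise ValueError(f"invalid debug tag: {tag!r}")
    return {"X-Debug-ID": tag}

assert debug_headers("mytag01") == {"X-Debug-ID": "mytag01"}
```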

See Debugging products using the Glean SDK for detailed information on how to use this mechanism in applications.


1

Requires a Mozilla login.

Upload mechanism

The glean-core Rust crate does not handle the ping upload directly. Network stacks vastly differ between platforms, applications and operating systems. The Glean SDK leverages the available platform capabilities to implement any network communication.

Glean core controls all upload and coordinates the platform side with its own internals. All language bindings implement ping uploading around a common API and protocol.

The upload module in the language bindings

classDiagram
    class UploadResult {
        ~ToFFI()* int
    }

    class HttpResponse {
        int statusCode
        ~ToFFI() int
    }

    class UnrecoverableFailure {
        ~ToFFI() int
    }

    class RecoverableFailure {
        ~ToFFI() int
    }

    class PingUploader {
        <<interface>>
        +Upload() UploadResult
    }

    class BaseUploader {
        +BaseUploader(PingUploader)
        ~TriggerUploads()
        ~CancelUploads()
    }

    class HttpUploader {
        +Upload() UploadResult
    }

    UploadResult <|-- HttpResponse
    UploadResult <|-- UnrecoverableFailure
    UploadResult <|-- RecoverableFailure
    PingUploader <|-- HttpUploader
    PingUploader o-- BaseUploader

PingUploader

The PingUploader interface describes the contract between the BaseUploader and the SDK or user-provided upload modules.

BaseUploader

The BaseUploader component is responsible for interfacing with the lower level get_upload_task calls and dealing with the logic in a platform-coherent way.

  • Every Glean instance will always have a single BaseUploader instance.
  • The BaseUploader is fed, at Glean initialization, with an instance of an implementation of the PingUploader interface.
  • Whenever BaseUploader decides it should perform an upload, it calls upload on the provided PingUploader instance with the data it gets from glean-core over the FFI.
  • Any throttling happens at this layer: the core orchestrates the throttling, while this layer is responsible for abiding by what the core tells it to do.
  • Any logic for aborting uploads or triggering uploads is provided by this object.

HttpClientUploader

The HttpClientUploader is the default SDK-provided HTTP uploader. It acts as an adapter between the platform-specific upload library and the Glean upload APIs.

Note that most of the languages have by now diverged from this design, due to the many iterations. For example, in Kotlin, the BaseUploader is mostly empty and its functionality is spread across the PingUploadWorker.

Upload task API

The following diagram visualizes the communication between Glean core (the Rust crate), a Glean language binding (e.g. the Kotlin or Swift implementation) and a Glean end point server.

sequenceDiagram
    participant Glean core
    participant Glean wrapper
    participant Server

    Glean wrapper->>Glean core: get_upload_task()
    Glean core->>Glean wrapper: Task::Upload(PingRequest)
    Glean wrapper-->>Server: POST /submit/{task.id}
    Server-->>Glean wrapper: 200 OK
    Glean wrapper->>Glean core: upload_response(200)
    Glean wrapper->>Glean core: get_upload_task()
    Glean core->>Glean wrapper: Task::Done

Glean core will take care of file management, cleanup, rescheduling and rate limiting1.

1

Rate limiting is achieved by limiting the number of times a language binding is allowed to get a Task::Upload(PingRequest) from get_upload_task in a given time interval. Currently, the default limit is a maximum of 15 upload tasks every 60 seconds, and there are no exposed methods that allow changing this default (follow Bug 1647630 for updates). If the caller has reached the maximum tasks for the current interval, they will get a Task::Wait regardless of whether other Task::Upload(PingRequest)s are queued.
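The task-driven loop a language binding runs can be sketched as follows. The task dictionaries and all four callables are hypothetical stand-ins for glean-core's PingUploadTask enum and the real upload machinery; only the control flow (loop until Done, back off on Wait, report each result) is the point:

```python
def run_uploader(get_upload_task, upload, process_ping_upload_response, wait):
    """Sketch of a binding's upload loop over the task API."""
    while True:
        task = get_upload_task()
        if task["kind"] == "done":
            break                      # nothing left to upload
        if task["kind"] == "wait":
            wait()                     # throttled by the core: back off, ask again
            continue
        # kind == "upload": perform the request, then report the outcome
        status = upload(task["request"])
        process_ping_upload_response(task["id"], status)

# Drive the loop with a tiny in-memory queue standing in for glean-core.
queue = [
    {"kind": "upload", "id": "a", "request": b"{}"},
    {"kind": "wait"},
    {"kind": "upload", "id": "b", "request": b"{}"},
    {"kind": "done"},
]
uploaded, responses = [], []
run_uploader(
    get_upload_task=lambda: queue.pop(0),
    upload=lambda req: (uploaded.append(req), 200)[1],
    process_ping_upload_response=lambda pid, status: responses.append((pid, status)),
    wait=lambda: None,
)
assert responses == [("a", 200), ("b", 200)]
```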

Available APIs

For direct Rust consumers the global Glean object provides these methods:


/// Gets the next task for an uploader.
fn get_upload_task(&self) -> PingUploadTask

/// Processes the response from an attempt to upload a ping.
fn process_ping_upload_response(&self, uuid: &str, status: UploadResult)

See the documentation for further usage and explanation of the additional types:

For FFI consumers (e.g. Kotlin/Swift/Python implementations) these functions are available:


/// Gets the next task for an uploader.
extern "C" fn glean_get_upload_task(result: *mut FfiPingUploadTask)

/// Processes the response from an attempt to upload a ping.
extern "C" fn glean_process_ping_upload_response(task: *mut FfiPingUploadTask, status: u32)

See the documentation for additional information about the types:

Implementations

| Project Name | Language Bindings | Operating System | App Lifecycle Type | Environment Data source |
| ------------ | ----------------- | ---------------- | ------------------ | ----------------------- |
| glean-core | Rust | all | all | none |
| glean-ffi | C | all | all | none |
| glean | Rust | Windows/Mac/Linux | Desktop application | OS info build-time autodetected, app info passed in |
| Glean Android | Kotlin, Java | Android | Mobile app | Autodetected from the Android environment |
| Glean iOS | Swift | iOS | Mobile app | Autodetected from the iOS environment |
| Glean.py | Python | Windows/Mac/Linux | all | Autodetected at runtime |
| FOG1 | Rust/C++/JavaScript | as Firefox supports | Desktop application | OS info build-time autodetected, app info passed in |

Features matrix

Feature/BindingsKotlinSwiftPythonRust
Core metric types
Metrics Testing API
baseline pingXX
metricsXX
eventsX
deletion-request ping
Custom pings
Custom pings testing APIX
Debug Ping View support

1

Firefox on Glean (FOG) is the name of the layer that integrates the Glean SDK into Firefox Desktop. It uses the Glean Rust bindings and exposes the same Rust API inside Firefox and extends it with a C++ and JavaScript API.

The following language-specific API docs are available: