Implementing DevSecOps


I recently had to clean up my pipelines after a deployment failed due to a full artifact store. While fixing that by reducing retention limits, I also decided to adopt best practices and shift my setup further towards DevSecOps. But first, some preparation: I introduced two new workflows, one that selectively cleans artifact caches for specific uploads in case I run into the issue again, and another that performs a full purge including caches and logs. At the same time, I hardened and cleaned up the workflows themselves, which had become messy over the years and no longer met reasonable standards.
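To give a rough idea of the selective variant: it boils down to listing artifacts via the GitHub REST API and deleting the ones that match. A minimal sketch using the gh CLI (the repository name and the name-prefix argument are placeholders, not my actual values):

```bash
# Sketch: delete workflow artifacts whose name starts with a given prefix.
set -euo pipefail

REPO="owner/repo"                                   # placeholder
PREFIX="${1:?usage: cleanup-artifacts.sh <name-prefix>}"

# List artifact IDs matching the prefix, then delete them one by one.
gh api "repos/${REPO}/actions/artifacts" --paginate \
  --jq ".artifacts[] | select(.name | startswith(\"${PREFIX}\")) | .id" |
while read -r id; do
  echo "Deleting artifact ${id}"
  gh api -X DELETE "repos/${REPO}/actions/artifacts/${id}"
done
```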

New Workflows and splitting into Action Components

The next step was untangling large workflows into individual Action Components, meaning one big file became many smaller ones. Context, environment variables, and secret passing worked seamlessly, and I was pleasantly surprised. The main deployment workflow was reduced from roughly 500 lines to 250, with significantly better separation and isolation. The test workflows saw similar improvements.

During cleanup, I also extracted larger shell snippets (five lines or more) from YAML into dedicated Bash scripts. These were made executable at the file level via git update-index, avoiding the repetitive chmod +x steps I had previously relied on in the deployment. Together with its workflows, the project now executes around 90 different PowerShell, Bash, Python, and JavaScript scripts on Windows and Linux CI/CD runners in clearly defined contexts.
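For reference, setting the executable bit in the index is a one-liner per script (the path is just an example):

```bash
# Record the executable bit in git's index so CI runners get it on checkout.
git update-index --chmod=+x scripts/deploy/publish.sh
git commit -m "Mark deployment scripts as executable"

# Verify: mode 100755 means the executable bit is stored.
git ls-files --stage scripts/deploy/publish.sh
```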

With a clean base in place, I designed a minimal DevSecOps job. The initial plan included steps for CodeQL (SAST), OSV-Scanner (dependency scanning), and GitLeaks (secret scanning). I deliberately skipped the popular Trivy, preferring specialized tools per task.

Basic DevSecOps Gate Workflow Integration

After implementing all three, I reassessed and removed CodeQL and OSV-Scanner: given the small number of fixed-version dependencies and the limited value of static-analysis reports that don't enforce pipeline failures, the setup felt like overkill and unnecessary overhead that would only slow me down. I also decided, at least initially, against any IaC or container scanning tools, as there was no real use case. Linters such as ESLint were excluded as well, since they add more value in team environments. My priority was reducing complexity and tailoring tooling to actual needs, not bloating the pipeline for appearances.

The final DevSecOps job now focuses solely on secret scanning with GitLeaks. It adds minimal overhead but reliably fails on committed secrets, providing an effective safeguard against accidentally leaking API keys or credentials. I set up a .toml configuration file to define common RegEx patterns and exclusions for false positives.
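As a rough illustration (the custom rule and allowlisted paths below are made-up placeholders, and the keys follow the gitleaks v8 config schema as far as I know), such a config can simply be generated or checked in:

```bash
# Illustrative only: a minimal .gitleaks.toml that keeps the default rules,
# adds one custom pattern, and allowlists known false positives.
cat > .gitleaks.toml <<'EOF'
title = "project gitleaks config"

[extend]
# keep the built-in default rules and extend them
useDefault = true

[[rules]]
id = "internal-api-token"              # made-up example rule
description = "Internal API token"
regex = '''myapp_[A-Za-z0-9]{32}'''

[allowlist]
description = "Known false positives"
paths = ['''^docs/''', '''\.sample$''']
regexes = ['''EXAMPLE_TOKEN_[0-9]+''']
EOF
```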

I decided against the off-the-shelf GitHub Action version and implemented GitLeaks in two modes via customized shell scripts:

  • A shallow scan of the current commit, automatically triggered as the first step of any deployment workflow.
  • A manual deep scan, triggered on demand, scanning the full repository history, branches, and files.
GitLeaks Deep and Surface Scans Bash Scripts
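The actual scripts are tied to my setup, but a stripped-down sketch of the two modes might look like this (the mode argument and file names are placeholders; the flags are from the gitleaks v8 CLI, and the deep scan assumes a full, non-shallow clone):

```bash
#!/usr/bin/env bash
# Stripped-down sketch of the two GitLeaks modes, not my full scripts.
set -euo pipefail

MODE="${1:-surface}"            # "surface" or "deep" (placeholder argument)
CONFIG=".gitleaks.toml"
REPORT="gitleaks-report.json"

if [[ "$MODE" == "surface" ]]; then
  # Shallow scan: only the most recent commit ("-1" is passed through to `git log`).
  gitleaks detect --source . --config "$CONFIG" \
    --log-opts="-1" \
    --report-format json --report-path "$REPORT" --redact -v
else
  # Deep scan: full history across all branches (requires a non-shallow checkout).
  gitleaks detect --source . --config "$CONFIG" \
    --log-opts="--all" \
    --report-format json --report-path "$REPORT" --redact -v
fi
```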

For reporting, I opted against SARIF artifact generation and instead relied on well-formatted JSON log output, processed via jq in Bash. When a leak is found, the pipeline simply fails and prompts an investigation, with the console logs providing sufficient detail.
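Condensed, the post-processing looks roughly like this, assuming the scan step was allowed to continue (for example via gitleaks' --exit-code 0) so that this step formats the findings and fails the job itself (field names as emitted in gitleaks' JSON report):

```bash
# Summarize the GitLeaks JSON report and fail the job if anything was found.
REPORT="gitleaks-report.json"

count=$(jq 'length' "$REPORT")
if (( count > 0 )); then
  echo "::error::GitLeaks found ${count} potential secret(s)"
  # One line per finding: rule, file, line, and the shortened offending commit hash.
  jq -r '.[] | "- \(.RuleID) in \(.File):\(.StartLine) (commit \(.Commit[0:8]))"' "$REPORT"
  exit 1
fi
echo "No leaks found."
```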

GitLeaks DeepScan Findings

To my surprise, the first deep scan I ran manually uncovered several outdated (but still problematic) secrets in the commit history.

Git Hard-Reset to one commit

To fully clean the repository of the accumulated leaks (and also reduce long-term churn), I decided to squash roughly 1,200 commits into a single clean baseline commit. With extensive backups available and no prior need to revisit old history, I think this trade-off was acceptable for long-term maintainability.
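For completeness, one common way to collapse a branch into a single baseline commit looks like this; it is not necessarily the exact sequence I used, and it rewrites published history, so backups and a force push are mandatory:

```bash
# DESTRUCTIVE: rewrites history. Only do this with backups and sole ownership of the repo.
set -euo pipefail

# Start a history-less branch that keeps the current working tree.
git checkout --orphan clean-baseline
git add -A
git commit -m "Clean baseline after secret purge"

# Replace the old main branch with the new single-commit history.
git branch -M main
git push --force origin main
```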
