[{"data":1,"prerenderedAt":815},["ShallowReactive",2],{"/en-us/blog/generic-semantic-version-processing":3,"navigation-en-us":38,"banner-en-us":448,"footer-en-us":458,"blog-post-authors-en-us-Julian Thome":698,"blog-related-posts-en-us-generic-semantic-version-processing":712,"blog-promotions-en-us":753,"next-steps-en-us":805},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":25,"isFeatured":12,"meta":26,"navigation":27,"path":28,"publishedDate":20,"seo":29,"stem":33,"tagSlugs":34,"__hash__":37},"blogPosts/en-us/blog/generic-semantic-version-processing.yml","Generic Semantic Version Processing",[7],"julian-thome",null,"security",{"slug":11,"featured":12,"template":13},"generic-semantic-version-processing",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9,"tags":22},"SemVer versioning: how we handled it with linear interval arithmetic","SemVer versioning made it difficult to automate processing. We turned to linear interval arithmetic to come up with a unified, language-agnostic semantic versioning approach.",[18],"Julian Thome","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663397/Blog/Hero%20Images/logoforblogpost.jpg","2021-09-28","The [semantic versioning (SemVer) specification](https://semver.org/) can be considered the de-facto standard for tracking software states during its evolution. Unfortunately, in reality many languages/ecosystems practice \"SemVer versioning\" and have not adopted the standard as-is; instead we can find many different semantic versioning flavors that are not necessarily compatible with the original SemVer spec. 
SemVer versioning has led to the creation of a variety of different semantic versioning schemes.\nGitLab provides a [Dependency Scanning (DS)](https://docs.gitlab.com/user/application_security/dependency_scanning/)\nfeature that automatically detects vulnerabilities in the dependencies of a software project for a variety of different languages. DS relies on the [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db),\nwhich is updated on a daily basis and provides information about vulnerable packages expressed in the package-specific (native)\nsemantic version dialect. GitLab also recently launched an [Open Source Edition](https://gitlab.com/gitlab-org/advisories-community) of the GitLab Advisory Database.\nAt GitLab we use a semi-automated process for advisory generation: we extract advisory data, including package names and vulnerable versions, from data sources such as [NVD](https://nvd.nist.gov/) and generate advisories that adhere to the GitLab advisory format before they are curated and stored in our [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db).\nThe plethora of SemVer dialects in the wild posed a major challenge to the level of automation we could apply in the advisory generation process: the different semantic version dialects prevented us from building generic mechanisms for version matching, version verification (i.e., verifying whether or not versions are available on the relevant package registry), fixed-version inference, etc. 
Moreover, since advisory generation requires us to extract and update advisory data at scale from data sources with hundreds of thousands of vulnerability entries, translating and/or verifying versions by hand is not a viable, scalable solution.\nHaving a generic method to digest and process a variety of different SemVer dialects was an important building block for automating large parts of the advisory generation process. This led to the development of [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects), a utility that helps process semantic versions in a generic, language-agnostic manner and has recently been open-sourced (MIT) and [published on rubygems.org](https://rubygems.org/gems/semver_dialects).\n## Understand the SemVer spec\nThe SemVer spec is the de-facto standard for tracking the states of software projects during their evolution: it associates unique, comparable version numbers with distinct states, and encodes semantic properties into the version strings so that a version change implicitly conveys information about the nature of the change.  \nA semantic version consists of a prefix (version core) and a suffix that holds pre-release and/or build information. A version core consists of three numeric components that are delimited by `.`:\n* major: backwards-incompatible changes\n* minor: new backwards-compatible functionality\n* patch: backwards-compatible bug fixes\nConsider a software project using SemVer with two releases, `1.0.0` and `1.0.1`. By just looking at the change applied to the version strings, it is clear that `1.0.1` is a newer (more recent) release of the software, whereas `1.0.0` is an older release. In addition, `1.0.1` represents an improved state of the software compared to `1.0.0`, which contained a bug that has been fixed in `1.0.1`. 
This fix is signalled by the higher number of the patch version component.\nSemantic version processing is particularly useful in the context of [Dependency Scanning (DS)](https://docs.gitlab.com/user/application_security/dependency_scanning/). DS is the process of automatically detecting (and potentially fixing)\nvulnerabilities related to the dependencies of a software project: the dependencies are checked against a set of configuration files (so-called advisories) that contain information about vulnerable dependencies; advisories usually include the versions of the vulnerable dependency.\nVulnerable versions are usually expressed in terms of version intervals: for example, [this out-of-bounds read vulnerability for the Python tensorflow package](https://nvd.nist.gov/vuln/detail/CVE-2021-29560) describes the vulnerable versions by listing the four version intervals below:\n1. up to 2.1.4\n1. from 2.2.0 up to 2.2.3\n1. from 2.3.0 up to 2.3.3\n1. from 2.4.0 up to 2.4.2\nWhile SemVer is very concise and clear about the syntax and semantics of semantic versions, it does not specify how to express and represent semantic version constraints. In addition, SemVer is purposefully simplistic to foster its adoption. In practice, many ecosystems needed features that go beyond SemVer, which led to the development of many SemVer versioning flavours as well as a variety of different native constraint matching syntaxes, some of which deviate from the official SemVer specification. Depending on the ecosystem you are working with, the same semantic version string may be treated/interpreted differently: for example, Maven and pip/PyPI treat the version `1.2.3.SP` differently because pip/PyPI lacks the notion of an `SP` post release. 
Apart from that, `1.2.3.SP` cannot be considered a valid semantic version according to the SemVer spec.\nToday we have a variety of different semantic versioning schemes:\n- `gem`: [gem requirement](https://guides.rubygems.org/specification-reference/#add_runtime_dependency)\n- `maven`: [Maven Dependency Version Requirement Specification](https://maven.apache.org/pom.html#Dependency_Version_Requirement_Specification)\n- `npm`: [node-semver](https://github.com/npm/node-semver#ranges)\n- `php`: [PHP Composer version constraints](https://getcomposer.org/doc/articles/versions.md#writing-version-constraints)\n- `pypi`: [PEP440](https://www.python.org/dev/peps/pep-0440/#version-specifiers)\n- `go`: [go semver](https://godoc.org/golang.org/x/tools/internal/semver)\n- `nuget`: [NuGet semver](https://docs.microsoft.com/en-us/nuget/concepts/package-versioning)\n- `conan`: [node-semver flavour](https://github.com/npm/node-semver#ranges)\nThis SemVer versioning fragmentation limited the degree of automation we could apply to our advisory extraction/generation process. 
This limitation motivated the development of a methodology and tool, [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects), that helps digest and process semantic versions in a language-agnostic way and, hence, helps reduce the manual advisory curation effort.\nBelow, you can see an excerpt of the advisory information that is extracted and generated by our semi-automated advisory generation process:\n```yaml\n# ...\naffected_range: \">=1.9,\u003C=2.7.1||==2.8\"\nfixed_versions:\n- \"2.7.2\"\n- \"2.8.1\"\nnot_impacted: \"All versions before 1.9, all versions after 2.7.1 before 2.8, all versions\n  after 2.8\"\nsolution: \"Upgrade to versions 2.7.2, 2.8.1 or above.\"\n# ...\n```\nIn the excerpt above:\n- `affected_range` denotes the range of affected versions in the machine-readable, native syntax used by the package manager/registry (in this case PyPI).\n- `fixed_versions` denotes the concrete versions in which the vulnerability has been fixed.\n- `not_impacted` provides a textual description of the versions that are not affected.\n- `solution` provides information about how to remediate the vulnerability.\nTo be able to extract and generate advisories like the one illustrated above in a language/ecosystem-agnostic way, we implemented and open-sourced a generic semantic version representation and processing approach called semver_dialects.\nIn the advisory excerpt above, the `affected_range` field contains the version constraints in the native constraint syntax (in this case PyPI for Python); `fixed_versions` can be inferred by inverting the `affected_range` (i.e., computing the non-affected versions) and selecting the first version available on the native package registry that falls into the range of non-affected versions; this step requires our approach to be able to parse the native semantic version syntax.\nIn order to deal with SemVer versioning and automatically process and generate the fields according to this 
description, our [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) implementation had to satisfy the following requirements:\n1. Provide a unified interface to the language-specific dialects.\n1. Match semantic versions in a language-agnostic way.\n1. Invert ranges.\n1. Cope with scattered, non-consecutive ranges.\n1. Parse and produce different version syntaxes.\n1. Parse and match versions/constraints in a best-effort manner.\n## SemVer versioning representation\nFirst, we need a generic representation of a semantic version to start with. We assume that a semantic version is composed of a prefix and a suffix, where the prefix contains segments for the major, minor and patch version components as defined in the\nSemVer specification. The suffix may hold additional information about pre/post releases etc. As illustrated below, the major, minor and patch prefix segments can be accessed by means of the corresponding methods.\n```ruby\ns1 = SemanticVersion.new('1.2.3')\nputs \"segments: #{s1}\"\n# segments: 1:2:3\nputs \"major #{s1.major}\"\n# major 1\nputs \"minor #{s1.minor}\"\n# minor 2\nputs \"patch #{s1.patch}\"\n# patch 3\n```\nWe cannot generally assume that all versions we would like to process fully adhere to the SemVer spec, which requires a version prefix (core) to consist of three segments: major, minor and patch. Hence, by default, we remove redundant, trailing zeros from the prefix to ensure that `2.0.0`, `2.0` and `2` are considered identical.\n[Semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) translates language-specific version suffixes into numeric values. This process can be described as version normalization.  
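As a rough illustration of normalization-based comparison, the sketch below is a toy model in plain Ruby, not the semver_dialects implementation; the suffix weights (e.g., mapping `rc` to `-1`) are assumed examples:

```ruby
# Toy sketch: normalize a version string into numeric segments, then
# compare segment arrays pairwise. The suffix weight table is an assumed
# example; a pre-release suffix maps to a negative number so that it
# sorts before the plain release.
SUFFIX_WEIGHTS = { 'alpha' => -3, 'beta' => -2, 'rc' => -1 }.freeze

def normalize(version)
  version.downcase.split('.').flat_map do |seg|
    if seg =~ /\A([a-z]+)(\d*)\z/
      # e.g. "rc1" -> [-1, 1]; unknown suffixes fall back to weight 0
      [SUFFIX_WEIGHTS.fetch(Regexp.last_match(1), 0), Regexp.last_match(2).to_i]
    else
      [seg.to_i]
    end
  end
end

def compare(a, b)
  x = normalize(a)
  y = normalize(b)
  len = [x.size, y.size].max
  # Pad with zeros so "2.0" and "2.0.0" compare as equal.
  x += [0] * (len - x.size)
  y += [0] * (len - y.size)
  x <=> y
end

compare('2.0.0.RC1', '2.0.0') # => -1 (the release candidate sorts before the release)
compare('2.0', '2.0.0')       # => 0  (trailing zeros are redundant)
compare('1.2.3', '1.2.10')    # => -1 (numeric, not lexicographic, comparison)
```

Padding with zeros before the `<=>` comparison is what makes `2.0` and `2.0.0` equal, and the negative suffix weights are what make a release candidate compare smaller than the corresponding release.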
For example, the Maven (pre-)release candidate version `2.0.0.RC1` can be translated to a numeric representation with prefix `2` and suffix `-1:1` by mapping `RC` to a numeric value (in this example `-1`) and, thus, rendering it numerically comparable.\nAfter this normalization step, semantic version matching for two versions `vA` and `vB` can be implemented by simply comparing their segments numerically in a pairwise fashion. For unknown suffixes that are not mappable to the numeric domain, we use lexical matching as a default fallback strategy.\nIn summary, comparing two semantic versions is a two-step process:\n1. Normalization: Extend both semantic versions to have the same prefix and suffix lengths by appending zeros.\n1. Comparison: Iterate over the segments and compare each of them numerically.\nFor example, after normalizing the versions `2.0.0.RC1` and `2.0.0` to `2:-1:1` and `2:0:0`, respectively, we can iterate over the segments (delimited by `:` in the example) and compare them numerically to successfully identify `2:-1:1` as the smaller (release-candidate) version in comparison to `2:0:0`.\n## Constraint syntax - everything is a linear interval\nTranslating semantic versions into a generic representation makes them numerically comparable, which is already useful but not sufficient to express SemVer versioning constraints in a language-agnostic fashion.\nFor representing semantic version constraints in a generic way, we rely on linear intervals. For the purpose of this blog, we define an interval as an ordered pair of two semantic versions which we refer to as lower and upper bounds (or cuts). For the sake of simplicity, for the remainder of this section we will use simple integers as examples for lower and upper bounds.\nLinear intervals capture semantic version ranges symbolically, which makes them very versatile and space-efficient. 
At the same time, we can rely on well-established mathematical models borrowed from linear interval arithmetic that enable us to translate/express any type of constraint in terms of mathematical set operations on intervals.\nIn the table below you can find all the different types of intervals we considered to model semantic version constraints, together with a corresponding description; `L` stands for left, `R` stands for right, and `a` and `b` denote the lower and upper bounds, respectively.\n| Type of interval | Example                    | Description                               |\n| ---------------- | ---------------------------| ----------------------------------------- |\n| LR-closed        |  `[a,b]: x >= a, x \u003C= b`   | all versions starting from a until b      |\n| L-open R-closed  |  `(a,b]: x > a, x \u003C= b`    | all versions after a until b              |\n| L-closed R-open  |  `[a,b): x >= a, x \u003C b`    | all versions starting from a before b     |\n| LR-open          |  `(a,b): x > a, x \u003C b`     | all versions between a and b              |\n| L-unbounded      |  `(-inf,b]: x \u003C= b`        | all versions until b                      |\n| R-unbounded      |  `[a,+inf): x >= a`        | all versions starting from a              |\nBelow you can see example output for the different types of ranges from [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects), using the `VersionParser` component to generate linear intervals from version constraints; `,` denotes a logical conjunction: e.g., `>=1, \u003C=2` denotes the set of integers that are greater than or equal to 1 *and* smaller than or equal to 2, i.e., all version numbers starting from 1 until 2.\n```ruby\nputs VersionParser.parse(\">=1, \u003C=2\")\n# [1,2]\nputs VersionParser.parse(\">1, \u003C=2\")\n# (1,2]\nputs VersionParser.parse(\">=1, \u003C2\")\n# [1,2)\nputs VersionParser.parse(\">1, \u003C2\")\n# (1,2)\nputs VersionParser.parse(\"\u003C=2\")\n# (-inf,2]\nputs VersionParser.parse(\">=1\")\n# [1,+inf)\n```\nFor solving SemVer versioning constraints, we use linear interval arithmetic, which is explained in depth in the textbook \"[Introduction to Interval Analysis](https://epubs.siam.org/doi/book/10.1137/1.9780898717716?mobileUi=0&).\"\nAs mentioned earlier, for our purposes, we define an interval as an ordered pair of two semantic versions (lower and upper bound) that represents the set of all semantic versions enclosed by the two bounds.\nGiven that intervals are sets, we can perform standard set operations on them.\nIn the context of advisory generation, there are three operations we require to satisfy all the requirements we defined earlier: Intersection, Union and Complement.\nThe operations are explained in more detail in the sections below.\nFor the remainder of this section, we explain interval operations using two example intervals `X` and `Y` with `X=[x_l, x_u]` and `Y=[y_l, y_u]`, where `x_l`, `x_u` denote the lower and upper bounds of `X`, and `y_l`, `y_u` denote the lower and upper bounds of `Y`, respectively. In addition, we use the `min` and `max` functions, where `max(a,b)` returns the larger and `min(a,b)` returns the smaller of `a` and `b`; the ∅ symbol denotes the empty set.\n### Intersection\nThe recipe below illustrates how the intersection (`X` ∩ `Y`) can be computed.\n`X` ∩ `Y` = if `X` and `Y` have points in common `[max(x_l,y_l), min(x_u,y_u)]` else ∅\nIntuitively, the intersection extracts the overlap (if any) from the two intervals `X` and `Y`.\nThe code snippet below shows how the intersection is computed in [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) for the two examples:\n1. `[2,5]` ∩ `[3,10]`\n1. `[2,5]` ∩ `[7,10]`\n```ruby\n# 1. 
[2,5] ∩ [3,10] = [3, 5]\nputs VersionParser.parse(\">=2, \u003C=5\").intersect(VersionParser.parse(\">=3, \u003C=10\"))\n# [3,5]\n\n# 2. [2,5] ∩ [7,10] = ∅\nputs VersionParser.parse(\">=2, \u003C=5\").intersect(VersionParser.parse(\">=7, \u003C=10\"))\n# empty\n```\nThe intersection operation is useful for semantic version matching, i.e., for checking whether a semantic version falls into a certain version interval or range. For instance, we may want to check whether version `1.2.3` satisfies the constraint `>=1.0.0, \u003C1.2.4`. In the context of [Dependency Scanning](https://docs.gitlab.com/user/application_security/dependency_scanning/), these types of constraints are very common. The problem `1.2.3` ∈ `[1.0.0, 1.2.4)` can be translated to a set intersection: `[1.2.3, 1.2.3]` ∩ `[1.0.0, 1.2.4)` = `[1.2.3, 1.2.3]`, which returns a non-empty set and, hence, tells us that version `1.2.3` satisfies the given version constraints.\nIn the context of our advisory generation process, we use intersection to cross-validate versions from vulnerability reports (CVEs) with the versions of the package that are available on the package registry that serves it.\nFor convenience, as mentioned earlier, [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) also supports grouping intervals into ranges by means of the `VersionRange` class. A range is a set of intervals which we denote with `{I0, I1, ..., IN}`, where the intervals are delimited by `,`, which can be interpreted as a union operator (explained in the next section).\nIn the example below, we first create a range `r1` to which we add two intervals: `r1 = {[2.1.2, 5.1.2], (3.1, 10)}`.\nAfter that, there is a check for an overlap (i.e., an intersection) between `r1` and `[0, 2.1)` (no overlap) as well as `[5.5, 5.5]` (overlap). 
You can see the output of [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) in the excerpt below.\n```ruby\nr1 = VersionRange.new\nr1.add(VersionParser.parse(\">=2.1.2, \u003C=5.1.2\"))\nr1.add(VersionParser.parse(\">3.1, \u003C10\"))\n\nputs \"[0,2.1) in #{r1}? #{r1.overlaps_with?(VersionParser.parse(\">=0, \u003C2.1\"))}\"\n# [0,2.1) in [2.1.2,5.1.2],(3.1,10)? false\nputs \"[5.5,5.5] overlap with #{r1}? #{r1.overlaps_with?(VersionParser.parse(\"=5.5\"))}\"\n# [5.5,5.5] overlap with [2.1.2,5.1.2],(3.1,10)? true\n```\n### Union\nThe recipe below illustrates how the union (`X` ∪ `Y`) can be computed.\n`X` ∪ `Y` = if `X` and `Y` have points in common `{[min(x_l,y_l), max(x_u,y_u)]}` else `{X,Y}`\nThe code snippet below shows how the union can be computed with [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) for the two examples:\n1. `[2,5]` ∪ `[3,10]` = `{[2, 10]}`\n2. `[2,5]` ∪ `[7,10]` = `{[2,5], [7,10]}`\nWith the union operator, we can collapse version intervals in case they have an overlap/intersection; otherwise, if `X` and `Y` are disjoint, we add their intervals directly to the range.\n```ruby\n# 1. [2,5] ∪ [3,10] = [2, 10]\nputs \"union: #{VersionParser.parse(\">=2, \u003C=5\").union(VersionParser.parse(\">=3, \u003C=10\"))}\"\n# union: [2,10]\n\n# Version ranges perform union too, for the purpose of automatically collapsing\n# intervals (if possible)\nr1 = VersionRange.new\nr1.add(VersionParser.parse(\">=2, \u003C=5\"))\nr1.add(VersionParser.parse(\">=3, \u003C=10\"))\nputs \"r1: #{r1}\"\n# r1: [2,5],[3,10]\nputs \"r1 collapsed: #{r1.collapse}\" # creates the union between intervals\n# r1 collapsed: [2,10]\n\n# 2. 
[2,5] ∪ [7,10] = {[2,5], [7,10]}\nr2 = VersionRange.new\nr2.add(VersionParser.parse(\">=2, \u003C=5\"))\nr2.add(VersionParser.parse(\">=7, \u003C=10\"))\nputs \"r2: #{r2}\"\n# r2: [2,5],[7,10]\n```\nIn the context of [Dependency Scanning](https://docs.gitlab.com/user/application_security/dependency_scanning/), vulnerability data usually lists a set of intervals for dependencies that are susceptible to a given vulnerability, like the [tensorflow example](https://nvd.nist.gov/vuln/detail/CVE-2021-29560) in the introduction, where the following versions are affected:\n1. up to 2.1.4\n1. from 2.2.0 up to 2.2.3\n1. from 2.3.0 up to 2.3.3\n1. from 2.4.0 up to 2.4.2\nThis list of intervals can be represented as a single range (`VersionRange`) by combining all of the mentioned version intervals through the union operator.\nIn the Ruby code example above, you can also see the `collapse` method which is invoked on a `VersionRange` object. This method automatically collapses overlapping intervals that are included in the same `VersionRange` to eliminate redundant intervals. Collapsing the range `{[2, 5], [3, 10]}` yields a new range `{[2,10]}` with only one interval while preserving semantic equivalence.\n### Complement\nThe recipe below illustrates how the relative complement (`X` - `Y`) can be computed.\n`X` - `Y`: `Z` := `X` ∩ `Y`;\nif (`z_l` > `x_l` && `z_u` \u003C `x_u`) then `{[x_l, z_l), (z_u, x_u]}`\nelse if (`x_l` \u003C `z_l`) then `{[x_l, z_l)}`\nelse if (`x_u` > `z_u`) then `{(z_u, x_u]}`\n\nIntuitively, this recipe computes the intersection (`Z`) between `X` and `Y` and removes all elements from `X` that are included in the intersection. The examples below illustrate the recipe:\n1. `[3, 5]` - `[1, 3]`: with `Z` = `[3, 3]` we get `{(3, 5]}` which is\n   equivalent to `{[4, 5]}`\n1. `[3, 10]` - `[10, 11]`: with `Z` = `[10, 10]` we get `{[3, 10)}` which is equivalent to `{[3, 9]}`\n1. 
`[1, 5]` - `[2, 2]`: with `Z` = `[2, 2]` we get `{[1, 2), (2, 5]}` which is equivalent to `{[1, 1], [3, 5]}`\nWith the recipe above, we can also compute the absolute complement by taking the universe, i.e., the interval `(-inf,+inf)` that captures the entirety of all possible values, as the left operand: `~X` = `(-inf,+inf)` - `X`.\nWith [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects), the absolute complement can be computed by means of the `invert` method as illustrated in the example below.\n```ruby\n# example 1: ~[1,3] = {(-inf,1),(3,+inf)}\nr1 = VersionRange.new\nr1.add(VersionParser.parse(\">=1, \u003C=3\"))\nputs r1.invert\n# (-inf,1),(3,+inf)\n\n# example 2: ~{[2.1.2, 5.1.2], (3.1, 10)} = ~{[2.1.2, 10)} = {(-inf,2.1.2),[10,+inf)}\nr2 = VersionRange.new\nr2.add(VersionParser.parse(\">=2.1.2, \u003C=5.1.2\"))\nr2.add(VersionParser.parse(\">3.1, \u003C10\"))\nputs r2.collapse.invert\n# (-inf,2.1.2),[10,+inf)\n```\nIn the context of [Dependency Scanning](https://docs.gitlab.com/user/application_security/dependency_scanning/), this functionality is used to automatically infer non-affected versions from the affected versions information: if `[1, 3]` represents all the affected versions of a vulnerable package, its complement `{(-inf,1),(3,+inf)}`, per definition, captures only the unaffected versions. 
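The three set operations above can be captured compactly. The following is a toy sketch over closed integer intervals (`[lo, hi]` pairs with `Float::INFINITY` for unbounded ends), written for illustration only and independent of the semver_dialects internals:

```ruby
INF = Float::INFINITY

# Intersection of two closed intervals; nil when they are disjoint.
def intersect(x, y)
  lo = [x[0], y[0]].max
  hi = [x[1], y[1]].min
  lo <= hi ? [lo, hi] : nil
end

# Union: collapse overlapping intervals into one, keep disjoint ones apart.
def union(x, y)
  intersect(x, y) ? [[[x[0], y[0]].min, [x[1], y[1]].max]] : [x, y]
end

# Absolute complement over the integers: ~[a,b] = {(-inf,a-1], [b+1,+inf)}.
def complement(x)
  parts = []
  parts << [-INF, x[0] - 1] unless x[0] == -INF
  parts << [x[1] + 1, INF] unless x[1] == INF
  parts
end

intersect([2, 5], [3, 10])  # => [3, 5]
intersect([2, 5], [7, 10])  # => nil (empty set)
union([2, 5], [7, 10])      # => [[2, 5], [7, 10]]
complement([1, 3])          # => [[-INF, 0], [4, INF]]
```

A membership check then reduces to intersecting with a point interval: `intersect([v, v], [lo, hi])` is non-nil exactly when `v` lies in `[lo, hi]`, mirroring how `1.2.3` ∈ `[1.0.0, 1.2.4)` was translated to an intersection earlier.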
In our advisory generation process, we cross-validate the version information of packages from the package registries with this information about unaffected versions to check whether or not unaffected packages are available; if this is the case, we add the corresponding remediation information to the generated advisories.\n## Version translation\nLinear interval arithmetic provides us with all the means necessary to represent and solve SemVer versioning constraints in a language-agnostic way.\nHowever, in order to leverage the generic representation, we have to be able to automatically translate the native semantic version dialects into the generic representation and vice versa. The details of this translation functionality are provided below.\n[Semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) offers a `VersionTranslator` class. The `VersionTranslator` takes a native semantic version constraint and translates it into an intermediate string representation that can then be translated into a range (`VersionRange`) by using the `VersionParser`. 
Currently, semver_dialects supports all the syntaxes listed below by invoking `translate_\u003Cpackage_type>` where `\u003Cpackage_type>` is one of:\n- `gem`: [gem requirement](https://guides.rubygems.org/specification-reference/#add_runtime_dependency)\n- `maven`: [Maven Dependency Version Requirement Specification](https://maven.apache.org/pom.html#Dependency_Version_Requirement_Specification)\n- `npm`: [node-semver](https://github.com/npm/node-semver#ranges)\n- `packagist`: [PHP Composer version constraints](https://getcomposer.org/doc/articles/versions.md#writing-version-constraints)\n- `pypi`: [PEP440](https://www.python.org/dev/peps/pep-0440/#version-specifiers)\n- `go`: [go semver](https://godoc.org/golang.org/x/tools/internal/semver)\n- `nuget`: [NuGet semver](https://docs.microsoft.com/en-us/nuget/concepts/package-versioning)\n- `conan`: [node-semver flavour](https://github.com/npm/node-semver#ranges)\nThe example below illustrates how the [semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects)' `VersionTranslator` can be used to translate native version syntax to an intermediate representation.\nThe `VersionTranslator` parses the native version syntax and translates it into a common format: a string array where each array entry represents a conjunct of the semantic version constraints. In the example below, you can further see that the two native version strings for packagist and maven, which are semantically equivalent but syntactically different, are translated into the same common format. 
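To give a feel for what such a translation involves, here is a toy sketch in plain Ruby (an assumed simplification; the real `VersionTranslator` handles many more cases) that rewrites a Maven-style range list into intermediate conjunct strings:

```ruby
# Toy translator for Maven-style ranges such as "(,2.5.9),[2.6.0,2.6.11)".
# Each bracketed group becomes one conjunct string; "(" / ")" mark open
# bounds, "[" / "]" closed bounds, and a missing endpoint means unbounded.
def translate_maven_toy(spec)
  spec.scan(/[\[(][^\])]*[\])]/).map do |ivl|
    open_l = ivl[0] == '('
    open_r = ivl[-1] == ')'
    lo, hi = ivl[1..-2].split(',', -1)
    parts = []
    parts << ((open_l ? '>' : '>=') + lo) unless lo.nil? || lo.empty?
    parts << ((open_r ? '<' : '<=') + hi) unless hi.nil? || hi.empty?
    parts.join(' ')
  end
end

translate_maven_toy('(,2.5.9),[2.6.0,2.6.11)')
# => ["<2.5.9", ">=2.6.0 <2.6.11"]
```

Each resulting string can then be handed to a parser that builds one interval per conjunct, which is exactly the shape of the intermediate representation shown in the excerpt that follows.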
This translation step removes all language-specific features from the native semantic version constraints.\n```ruby\n# native packagist version constraint syntax\nvs_packagist = \"\u003C2.5.9||>=2.6.0,\u003C2.6.11\"\n# native maven version constraint syntax\nvs_maven = \"(,2.5.9),[2.6.0,2.6.11)\"\n\n# translate\nputs VersionTranslator.translate_packagist(vs_packagist).to_s\n# [\"\u003C2.5.9\", \">=2.6.0 \u003C2.6.11\"]\nputs VersionTranslator.translate_maven(vs_maven).to_s\n# [\"\u003C2.5.9\", \">=2.6.0 \u003C2.6.11\"]\n```\nThis common format can then be translated to a version range by means of `VersionParser` and `VersionRange`. The example below illustrates how the version range `constraint` is generated by iterating over the array elements of our intermediate representation, translating them to intervals, and adding these intervals to the `VersionRange` object `constraint`. At the end of the excerpt below, we check whether version `1.0.0` satisfies the version constraint `\u003C2.5.9||>=2.6.0,\u003C2.6.11`, which correctly yields `true`.\n```ruby\n# translate the native maven version constraint to a range of intervals\nconstraint = VersionRange.new\nVersionTranslator.translate_maven(vs_maven).each do |version_string|\n  constraint \u003C\u003C VersionParser.parse(version_string)\nend\n\nputs constraint.overlaps_with?(VersionParser.parse('=' + '1.0.0'))\n# true\n```\n## Wrapping it up\nWe discussed the fragmentation of SemVer versioning, which poses a challenge when building automation around semantic version processing for multi-language/ecosystem applications. In this blog post, we used our internal semi-automated process for advisory generation as an example.\nWe illustrated how we tackled the above-mentioned challenge by building a generic, language-agnostic semantic versioning approach based on linear interval arithmetic. 
All mechanisms discussed in this blog post are implemented in the open-sourced (MIT)\n[semver_dialects](https://gitlab.com/gitlab-org/vulnerability-research/foss/semver_dialects) implementation and published on [rubygems.org](https://rubygems.org/gems/semver_dialects).\n",[9,23,24],"DevOps","open source","yml",{},true,"/en-us/blog/generic-semantic-version-processing",{"title":15,"description":16,"ogTitle":15,"ogDescription":16,"noIndex":12,"ogImage":19,"ogUrl":30,"ogSiteName":31,"ogType":32,"canonicalUrls":30},"https://about.gitlab.com/blog/generic-semantic-version-processing","https://about.gitlab.com","article","en-us/blog/generic-semantic-version-processing",[9,35,36],"devops","open-source","5jL0imWW5P-6Utxs1MpiPselSJjW_MNsC0sovkkzfCQ",{"data":39},{"logo":40,"freeTrial":45,"sales":50,"login":55,"items":60,"search":368,"minimal":399,"duo":418,"switchNav":427,"pricingDeployment":438},{"config":41},{"href":42,"dataGaName":43,"dataGaLocation":44},"/","gitlab logo","header",{"text":46,"config":47},"Get free trial",{"href":48,"dataGaName":49,"dataGaLocation":44},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":51,"config":52},"Talk to sales",{"href":53,"dataGaName":54,"dataGaLocation":44},"/sales/","sales",{"text":56,"config":57},"Sign in",{"href":58,"dataGaName":59,"dataGaLocation":44},"https://gitlab.com/users/sign_in/","sign in",[61,88,183,188,289,349],{"text":62,"config":63,"cards":65},"Platform",{"dataNavLevelOne":64},"platform",[66,72,80],{"title":62,"description":67,"link":68},"The intelligent orchestration platform for DevSecOps",{"text":69,"config":70},"Explore our Platform",{"href":71,"dataGaName":64,"dataGaLocation":44},"/platform/",{"title":73,"description":74,"link":75},"GitLab Duo Agent Platform","Agentic AI for the entire software lifecycle",{"text":76,"config":77},"Meet GitLab Duo",{"href":78,"dataGaName":79,"dataGaLocation":44},"/gitlab-duo-agent-platform/","gitlab duo 
agent platform",{"title":81,"description":82,"link":83},"Why GitLab","See the top reasons enterprises choose GitLab",{"text":84,"config":85},"Learn more",{"href":86,"dataGaName":87,"dataGaLocation":44},"/why-gitlab/","why gitlab",{"text":89,"left":27,"config":90,"link":92,"lists":96,"footer":165},"Product",{"dataNavLevelOne":91},"solutions",{"text":93,"config":94},"View all Solutions",{"href":95,"dataGaName":91,"dataGaLocation":44},"/solutions/",[97,121,144],{"title":98,"description":99,"link":100,"items":105},"Automation","CI/CD and automation to accelerate deployment",{"config":101},{"icon":102,"href":103,"dataGaName":104,"dataGaLocation":44},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[106,110,113,117],{"text":107,"config":108},"CI/CD",{"href":109,"dataGaLocation":44,"dataGaName":107},"/solutions/continuous-integration/",{"text":73,"config":111},{"href":78,"dataGaLocation":44,"dataGaName":112},"gitlab duo agent platform - product menu",{"text":114,"config":115},"Source Code Management",{"href":116,"dataGaLocation":44,"dataGaName":114},"/solutions/source-code-management/",{"text":118,"config":119},"Automated Software Delivery",{"href":103,"dataGaLocation":44,"dataGaName":120},"Automated software delivery",{"title":122,"description":123,"link":124,"items":129},"Security","Deliver code faster without compromising security",{"config":125},{"href":126,"dataGaName":127,"dataGaLocation":44,"icon":128},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[130,134,139],{"text":131,"config":132},"Application Security Testing",{"href":126,"dataGaName":133,"dataGaLocation":44},"Application security testing",{"text":135,"config":136},"Software Supply Chain Security",{"href":137,"dataGaLocation":44,"dataGaName":138},"/solutions/supply-chain/","Software supply chain security",{"text":140,"config":141},"Software 
Compliance",{"href":142,"dataGaName":143,"dataGaLocation":44},"/solutions/software-compliance/","software compliance",{"title":145,"link":146,"items":151},"Measurement",{"config":147},{"icon":148,"href":149,"dataGaName":150,"dataGaLocation":44},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[152,156,160],{"text":153,"config":154},"Visibility & Measurement",{"href":149,"dataGaLocation":44,"dataGaName":155},"Visibility and Measurement",{"text":157,"config":158},"Value Stream Management",{"href":159,"dataGaLocation":44,"dataGaName":157},"/solutions/value-stream-management/",{"text":161,"config":162},"Analytics & Insights",{"href":163,"dataGaLocation":44,"dataGaName":164},"/solutions/analytics-and-insights/","Analytics and insights",{"title":166,"items":167},"GitLab for",[168,173,178],{"text":169,"config":170},"Enterprise",{"href":171,"dataGaLocation":44,"dataGaName":172},"/enterprise/","enterprise",{"text":174,"config":175},"Small Business",{"href":176,"dataGaLocation":44,"dataGaName":177},"/small-business/","small business",{"text":179,"config":180},"Public Sector",{"href":181,"dataGaLocation":44,"dataGaName":182},"/solutions/public-sector/","public sector",{"text":184,"config":185},"Pricing",{"href":186,"dataGaName":187,"dataGaLocation":44,"dataNavLevelOne":187},"/pricing/","pricing",{"text":189,"config":190,"link":192,"lists":196,"feature":276},"Resources",{"dataNavLevelOne":191},"resources",{"text":193,"config":194},"View all resources",{"href":195,"dataGaName":191,"dataGaLocation":44},"/resources/",[197,230,248],{"title":198,"items":199},"Getting started",[200,205,210,215,220,225],{"text":201,"config":202},"Install",{"href":203,"dataGaName":204,"dataGaLocation":44},"/install/","install",{"text":206,"config":207},"Quick start guides",{"href":208,"dataGaName":209,"dataGaLocation":44},"/get-started/","quick setup 
checklists",{"text":211,"config":212},"Learn",{"href":213,"dataGaLocation":44,"dataGaName":214},"https://university.gitlab.com/","learn",{"text":216,"config":217},"Product documentation",{"href":218,"dataGaName":219,"dataGaLocation":44},"https://docs.gitlab.com/","product documentation",{"text":221,"config":222},"Best practice videos",{"href":223,"dataGaName":224,"dataGaLocation":44},"/getting-started-videos/","best practice videos",{"text":226,"config":227},"Integrations",{"href":228,"dataGaName":229,"dataGaLocation":44},"/integrations/","integrations",{"title":231,"items":232},"Discover",[233,238,243],{"text":234,"config":235},"Customer success stories",{"href":236,"dataGaName":237,"dataGaLocation":44},"/customers/","customer success stories",{"text":239,"config":240},"Blog",{"href":241,"dataGaName":242,"dataGaLocation":44},"/blog/","blog",{"text":244,"config":245},"Remote",{"href":246,"dataGaName":247,"dataGaLocation":44},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":249,"items":250},"Connect",[251,256,261,266,271],{"text":252,"config":253},"GitLab Services",{"href":254,"dataGaName":255,"dataGaLocation":44},"/services/","services",{"text":257,"config":258},"Community",{"href":259,"dataGaName":260,"dataGaLocation":44},"/community/","community",{"text":262,"config":263},"Forum",{"href":264,"dataGaName":265,"dataGaLocation":44},"https://forum.gitlab.com/","forum",{"text":267,"config":268},"Events",{"href":269,"dataGaName":270,"dataGaLocation":44},"/events/","events",{"text":272,"config":273},"Partners",{"href":274,"dataGaName":275,"dataGaLocation":44},"/partners/","partners",{"backgroundColor":277,"textColor":278,"text":279,"image":280,"link":284},"#2f2a6b","#fff","Insights for the future of software development",{"altText":281,"config":282},"the source promo card",{"src":283},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":285,"config":286},"Read the 
latest",{"href":287,"dataGaName":288,"dataGaLocation":44},"/the-source/","the source",{"text":290,"config":291,"lists":293},"Company",{"dataNavLevelOne":292},"company",[294],{"items":295},[296,301,307,309,314,319,324,329,334,339,344],{"text":297,"config":298},"About",{"href":299,"dataGaName":300,"dataGaLocation":44},"/company/","about",{"text":302,"config":303,"footerGa":306},"Jobs",{"href":304,"dataGaName":305,"dataGaLocation":44},"/jobs/","jobs",{"dataGaName":305},{"text":267,"config":308},{"href":269,"dataGaName":270,"dataGaLocation":44},{"text":310,"config":311},"Leadership",{"href":312,"dataGaName":313,"dataGaLocation":44},"/company/team/e-group/","leadership",{"text":315,"config":316},"Team",{"href":317,"dataGaName":318,"dataGaLocation":44},"/company/team/","team",{"text":320,"config":321},"Handbook",{"href":322,"dataGaName":323,"dataGaLocation":44},"https://handbook.gitlab.com/","handbook",{"text":325,"config":326},"Investor relations",{"href":327,"dataGaName":328,"dataGaLocation":44},"https://ir.gitlab.com/","investor relations",{"text":330,"config":331},"Trust Center",{"href":332,"dataGaName":333,"dataGaLocation":44},"/security/","trust center",{"text":335,"config":336},"AI Transparency Center",{"href":337,"dataGaName":338,"dataGaLocation":44},"/ai-transparency-center/","ai transparency center",{"text":340,"config":341},"Newsletter",{"href":342,"dataGaName":343,"dataGaLocation":44},"/company/contact/#contact-forms","newsletter",{"text":345,"config":346},"Press",{"href":347,"dataGaName":348,"dataGaLocation":44},"/press/","press",{"text":350,"config":351,"lists":352},"Contact us",{"dataNavLevelOne":292},[353],{"items":354},[355,358,363],{"text":51,"config":356},{"href":53,"dataGaName":357,"dataGaLocation":44},"talk to sales",{"text":359,"config":360},"Support portal",{"href":361,"dataGaName":362,"dataGaLocation":44},"https://support.gitlab.com","support portal",{"text":364,"config":365},"Customer 
portal",{"href":366,"dataGaName":367,"dataGaLocation":44},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":369,"login":370,"suggestions":377},"Close",{"text":371,"link":372},"To search repositories and projects, login to",{"text":373,"config":374},"gitlab.com",{"href":58,"dataGaName":375,"dataGaLocation":376},"search login","search",{"text":378,"default":379},"Suggestions",[380,382,386,388,392,396],{"text":73,"config":381},{"href":78,"dataGaName":73,"dataGaLocation":376},{"text":383,"config":384},"Code Suggestions (AI)",{"href":385,"dataGaName":383,"dataGaLocation":376},"/solutions/code-suggestions/",{"text":107,"config":387},{"href":109,"dataGaName":107,"dataGaLocation":376},{"text":389,"config":390},"GitLab on AWS",{"href":391,"dataGaName":389,"dataGaLocation":376},"/partners/technology-partners/aws/",{"text":393,"config":394},"GitLab on Google Cloud",{"href":395,"dataGaName":393,"dataGaLocation":376},"/partners/technology-partners/google-cloud-platform/",{"text":397,"config":398},"Why GitLab?",{"href":86,"dataGaName":397,"dataGaLocation":376},{"freeTrial":400,"mobileIcon":405,"desktopIcon":410,"secondaryButton":413},{"text":401,"config":402},"Start free trial",{"href":403,"dataGaName":49,"dataGaLocation":404},"https://gitlab.com/-/trials/new/","nav",{"altText":406,"config":407},"Gitlab Icon",{"src":408,"dataGaName":409,"dataGaLocation":404},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":406,"config":411},{"src":412,"dataGaName":409,"dataGaLocation":404},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":414,"config":415},"Get Started",{"href":416,"dataGaName":417,"dataGaLocation":404},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/get-started/","get started",{"freeTrial":419,"mobileIcon":423,"desktopIcon":425},{"text":420,"config":421},"Learn more about GitLab 
Duo",{"href":78,"dataGaName":422,"dataGaLocation":404},"gitlab duo",{"altText":406,"config":424},{"src":408,"dataGaName":409,"dataGaLocation":404},{"altText":406,"config":426},{"src":412,"dataGaName":409,"dataGaLocation":404},{"button":428,"mobileIcon":433,"desktopIcon":435},{"text":429,"config":430},"/switch",{"href":431,"dataGaName":432,"dataGaLocation":404},"#contact","switch",{"altText":406,"config":434},{"src":408,"dataGaName":409,"dataGaLocation":404},{"altText":406,"config":436},{"src":437,"dataGaName":409,"dataGaLocation":404},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1773335277/ohhpiuoxoldryzrnhfrh.png",{"freeTrial":439,"mobileIcon":444,"desktopIcon":446},{"text":440,"config":441},"Back to pricing",{"href":186,"dataGaName":442,"dataGaLocation":404,"icon":443},"back to pricing","GoBack",{"altText":406,"config":445},{"src":408,"dataGaName":409,"dataGaLocation":404},{"altText":406,"config":447},{"src":412,"dataGaName":409,"dataGaLocation":404},{"title":449,"button":450,"config":455},"See how agentic AI transforms software delivery",{"text":451,"config":452},"Watch GitLab Transcend now",{"href":453,"dataGaName":454,"dataGaLocation":44},"/events/transcend/virtual/","transcend event",{"layout":456,"icon":457,"disabled":27},"release","AiStar",{"data":459},{"text":460,"source":461,"edit":467,"contribute":472,"config":477,"items":482,"minimal":687},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":462,"config":463},"View page source",{"href":464,"dataGaName":465,"dataGaLocation":466},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":468,"config":469},"Edit this page",{"href":470,"dataGaName":471,"dataGaLocation":466},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":473,"config":474},"Please 
contribute",{"href":475,"dataGaName":476,"dataGaLocation":466},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":478,"facebook":479,"youtube":480,"linkedin":481},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[483,530,582,626,653],{"title":184,"links":484,"subMenu":499},[485,489,494],{"text":486,"config":487},"View plans",{"href":186,"dataGaName":488,"dataGaLocation":466},"view plans",{"text":490,"config":491},"Why Premium?",{"href":492,"dataGaName":493,"dataGaLocation":466},"/pricing/premium/","why premium",{"text":495,"config":496},"Why Ultimate?",{"href":497,"dataGaName":498,"dataGaLocation":466},"/pricing/ultimate/","why ultimate",[500],{"title":501,"links":502},"Contact Us",[503,506,508,510,515,520,525],{"text":504,"config":505},"Contact sales",{"href":53,"dataGaName":54,"dataGaLocation":466},{"text":359,"config":507},{"href":361,"dataGaName":362,"dataGaLocation":466},{"text":364,"config":509},{"href":366,"dataGaName":367,"dataGaLocation":466},{"text":511,"config":512},"Status",{"href":513,"dataGaName":514,"dataGaLocation":466},"https://status.gitlab.com/","status",{"text":516,"config":517},"Terms of use",{"href":518,"dataGaName":519,"dataGaLocation":466},"/terms/","terms of use",{"text":521,"config":522},"Privacy statement",{"href":523,"dataGaName":524,"dataGaLocation":466},"/privacy/","privacy statement",{"text":526,"config":527},"Cookie preferences",{"dataGaName":528,"dataGaLocation":466,"id":529,"isOneTrustButton":27},"cookie preferences","ot-sdk-btn",{"title":89,"links":531,"subMenu":540},[532,536],{"text":533,"config":534},"DevSecOps platform",{"href":71,"dataGaName":535,"dataGaLocation":466},"devsecops platform",{"text":537,"config":538},"AI-Assisted Development",{"href":78,"dataGaName":539,"dataGaLocation":466},"ai-assisted 
development",[541],{"title":542,"links":543},"Topics",[544,549,554,557,562,567,572,577],{"text":545,"config":546},"CICD",{"href":547,"dataGaName":548,"dataGaLocation":466},"/topics/ci-cd/","cicd",{"text":550,"config":551},"GitOps",{"href":552,"dataGaName":553,"dataGaLocation":466},"/topics/gitops/","gitops",{"text":23,"config":555},{"href":556,"dataGaName":35,"dataGaLocation":466},"/topics/devops/",{"text":558,"config":559},"Version Control",{"href":560,"dataGaName":561,"dataGaLocation":466},"/topics/version-control/","version control",{"text":563,"config":564},"DevSecOps",{"href":565,"dataGaName":566,"dataGaLocation":466},"/topics/devsecops/","devsecops",{"text":568,"config":569},"Cloud Native",{"href":570,"dataGaName":571,"dataGaLocation":466},"/topics/cloud-native/","cloud native",{"text":573,"config":574},"AI for Coding",{"href":575,"dataGaName":576,"dataGaLocation":466},"/topics/devops/ai-for-coding/","ai for coding",{"text":578,"config":579},"Agentic AI",{"href":580,"dataGaName":581,"dataGaLocation":466},"/topics/agentic-ai/","agentic ai",{"title":583,"links":584},"Solutions",[585,587,589,594,598,601,605,608,610,613,616,621],{"text":131,"config":586},{"href":126,"dataGaName":131,"dataGaLocation":466},{"text":120,"config":588},{"href":103,"dataGaName":104,"dataGaLocation":466},{"text":590,"config":591},"Agile development",{"href":592,"dataGaName":593,"dataGaLocation":466},"/solutions/agile-delivery/","agile delivery",{"text":595,"config":596},"SCM",{"href":116,"dataGaName":597,"dataGaLocation":466},"source code management",{"text":545,"config":599},{"href":109,"dataGaName":600,"dataGaLocation":466},"continuous integration & delivery",{"text":602,"config":603},"Value stream management",{"href":159,"dataGaName":604,"dataGaLocation":466},"value stream 
management",{"text":550,"config":606},{"href":607,"dataGaName":553,"dataGaLocation":466},"/solutions/gitops/",{"text":169,"config":609},{"href":171,"dataGaName":172,"dataGaLocation":466},{"text":611,"config":612},"Small business",{"href":176,"dataGaName":177,"dataGaLocation":466},{"text":614,"config":615},"Public sector",{"href":181,"dataGaName":182,"dataGaLocation":466},{"text":617,"config":618},"Education",{"href":619,"dataGaName":620,"dataGaLocation":466},"/solutions/education/","education",{"text":622,"config":623},"Financial services",{"href":624,"dataGaName":625,"dataGaLocation":466},"/solutions/finance/","financial services",{"title":189,"links":627},[628,630,632,634,637,639,641,643,645,647,649,651],{"text":201,"config":629},{"href":203,"dataGaName":204,"dataGaLocation":466},{"text":206,"config":631},{"href":208,"dataGaName":209,"dataGaLocation":466},{"text":211,"config":633},{"href":213,"dataGaName":214,"dataGaLocation":466},{"text":216,"config":635},{"href":218,"dataGaName":636,"dataGaLocation":466},"docs",{"text":239,"config":638},{"href":241,"dataGaName":242,"dataGaLocation":466},{"text":234,"config":640},{"href":236,"dataGaName":237,"dataGaLocation":466},{"text":244,"config":642},{"href":246,"dataGaName":247,"dataGaLocation":466},{"text":252,"config":644},{"href":254,"dataGaName":255,"dataGaLocation":466},{"text":257,"config":646},{"href":259,"dataGaName":260,"dataGaLocation":466},{"text":262,"config":648},{"href":264,"dataGaName":265,"dataGaLocation":466},{"text":267,"config":650},{"href":269,"dataGaName":270,"dataGaLocation":466},{"text":272,"config":652},{"href":274,"dataGaName":275,"dataGaLocation":466},{"title":290,"links":654},[655,657,659,661,663,665,667,671,676,678,680,682],{"text":297,"config":656},{"href":299,"dataGaName":292,"dataGaLocation":466},{"text":302,"config":658},{"href":304,"dataGaName":305,"dataGaLocation":466},{"text":310,"config":660},{"href":312,"dataGaName":313,"dataGaLocation":466},{"text":315,"config":662},{"href":317,"dataGaN
ame":318,"dataGaLocation":466},{"text":320,"config":664},{"href":322,"dataGaName":323,"dataGaLocation":466},{"text":325,"config":666},{"href":327,"dataGaName":328,"dataGaLocation":466},{"text":668,"config":669},"Sustainability",{"href":670,"dataGaName":668,"dataGaLocation":466},"/sustainability/",{"text":672,"config":673},"Diversity, inclusion and belonging (DIB)",{"href":674,"dataGaName":675,"dataGaLocation":466},"/diversity-inclusion-belonging/","Diversity, inclusion and belonging",{"text":330,"config":677},{"href":332,"dataGaName":333,"dataGaLocation":466},{"text":340,"config":679},{"href":342,"dataGaName":343,"dataGaLocation":466},{"text":345,"config":681},{"href":347,"dataGaName":348,"dataGaLocation":466},{"text":683,"config":684},"Modern Slavery Transparency Statement",{"href":685,"dataGaName":686,"dataGaLocation":466},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":688},[689,692,695],{"text":690,"config":691},"Terms",{"href":518,"dataGaName":519,"dataGaLocation":466},{"text":693,"config":694},"Cookies",{"dataGaName":528,"dataGaLocation":466,"id":529,"isOneTrustButton":27},{"text":696,"config":697},"Privacy",{"href":523,"dataGaName":524,"dataGaLocation":466},[699],{"id":700,"title":18,"body":8,"config":701,"content":703,"description":8,"extension":25,"meta":707,"navigation":27,"path":708,"seo":709,"stem":710,"__hash__":711},"blogAuthors/en-us/blog/authors/julian-thome.yml",{"template":702},"BlogAuthor",{"name":18,"config":704},{"headshot":705,"ctfId":706},"","jthome",{},"/en-us/blog/authors/julian-thome",{},"en-us/blog/authors/julian-thome","60j2yTW0PuY7q83cXf4NkfzrKTRE2sUlVv5TXE6o1xM",[713,726,741],{"content":714,"config":724},{"title":715,"description":716,"authors":717,"date":719,"body":720,"category":9,"tags":721,"heroImage":723},"Prepare your pipeline for AI-discovered zero-days","AI is finding vulnerabilities faster than teams can patch. 
Learn how pipeline enforcement, automated triage, and AI remediation close the gap.",[718],"Omer Azaria","2026-04-20","Anthropic's [Mythos Preview model](https://red.anthropic.com/2026/mythos-preview/) recently identified thousands of zero-day vulnerabilities across every major operating system and web browser, including an OpenBSD bug that went undetected for 27 years. In testing, Mythos autonomously chained four vulnerabilities into a working browser exploit that escaped its sandbox. Anthropic is restricting access to Mythos, but the company’s head of offensive cyber research expects threats to have comparable tooling within six to twelve months.\n\nThe defender side of the equation hasn't kept pace. One third of exploited Common Vulnerabilities and Exposures (CVEs) in the first half of 2025 showed activity on or before disclosure day, before most teams even know there's something to patch. AI is compressing that window further, accelerating attackers and flooding teams with whitehat disclosures faster than they can triage. Defender tooling has improved, but most organizations can't operationalize it fast enough to close the gap between discovery and exploitation.\n\nWhen the window between disclosure and exploitation is measured in hours, the security team can't be the last line of defense. Security has to run where code enters the system: in the pipeline, on every merge request, enforced by policy. The fixes that can be automated should be. The ones that can't need to reach the right human faster than they do today.\n\n## Known vulnerabilities are already outpacing remediation\n\nThe bottleneck isn't detection, it's acting at scale on what teams already know. Sixty percent of breaches in the 2025 Verizon DBIR involved exploiting known vulnerabilities where a patch was already available. Teams couldn’t close them in time.\n\nThe backlog was untenable before Mythos. 
Developers spend [11 hours per month remediating vulnerabilities](https://about.gitlab.com/resources/developer-survey/) post-release instead of shipping new work. Over half of organizations have at least one open internet-facing vulnerability, and the median time to close half of those is 361 days. Exploitation takes hours, while remediation takes months.\n\nAI-assisted development is widening the gap, and stakeholders know it. By June 2025, AI-generated code was adding over 10,000 new security findings per month across Fortune 50 repositories, a 10x jump from six months earlier. Georgia Tech identified 34 [CVEs attributable to AI-generated code](https://research.gatech.edu/bad-vibes-ai-generated-code-vulnerable-researchers-warn) in March 2026, up from 6 in January, and that count reflects only the ones where AI authorship is clear. AI coding assistants hallucinate package names, reach for outdated patterns, and copy insecure examples from training data. More code, more dependencies, and more vulnerabilities per line are generated faster than security teams can review them.\n\nDefenders need to harness frontier AI models, too — not bolted onto the SDLC as external tooling, but running inside the same policies, approvals, and audit trail as the rest of the team. \n\n## Security at the speed of AI coding\n\nWhen a critical CVE drops, how quickly can your team confirm which projects are affected? How many tools does an alert cross before a developer can submit a fix?\n\nThe teams that benefit most from AI already have policies, enforcement, and controls embedded in their development workflows. AI amplifies that foundation. It doesn't replace it.\n\n**Enforcement at the point of change.** As exploitation windows compress, every line of code entering a repository needs to pass through a defined set of controls. Not a separate review, in a different tool, by a different team. 
Organizations need the ability to enforce security policies across every group and project, with the merge request as the enforcement point. Policies defined once, applied everywhere, with exceptions reviewed, approved, and logged.\n\n**Simple issues caught before the merge request, not during.** Hardcoded secrets, known-vulnerable imports, and deprecated API calls can be flagged in the IDE before a developer pushes a commit. Catching them at authoring time means fewer findings blocking the MR, so review cycles go to the findings that require cross-component context: reachability, exploitability, and architectural risk.\n\n**Triage automated by default, not by exception.** Embedding security into every merge request creates a volume problem. More scans, more findings, more noise reaching developers who aren’t trained to distinguish a reachable critical from a theoretical one. AI must handle false positive detection, reachability, exploitability context, and severity assessment before a developer sees the finding, so the findings they see actually warrant their time.\n\n**Remediation governed like any other change.** AI-based remediation compresses the timeline for closing vulnerabilities, but every generated fix must move through the same governance as a human-authored change: policies enforce scans, the right reviewers approve, and evidence is recorded. GitLab’s automated remediation capability proposes each fix in a merge request with a confidence score. The MR records which policy applied, which scans ran, what they found, and who approved. Human code and AI-generated code move through the same process, with the same audit trail.\n\n## What a ready pipeline looks like\n\nHere's how these pieces work together when a high-severity vulnerability is discovered and the clock is running.\n\nA proof-of-concept exploit for a vulnerability in a popular open-source package appears on a security mailing list. 
There’s no CVE, no National Vulnerability Database (NVD) entry, and no scanner signature yet. The security team finds out the usual way: someone shares it in Slack.\n\nA security engineer asks the security agent if the package is in use, which projects have affected versions, and whether any vulnerable call paths are reachable in production. The agent checks the dependency graph for every project, matches the affected versions and entry points from the disclosure, and returns a ranked list of exposed projects with details about reachability. There’s no need to search through repositories by hand or wait for a scanner update. The question, \"Are we exposed?\" is answered in minutes.\n\nThe engineer starts a remediation campaign for every exposed project. The remediation agent suggests fixes: version updates where a patched release is available, and targeted call-path patches where it is not. Scan execution policies are already in place for projects tagged SOC 2. The engineer hardens the rules to block merges on any merge request that introduces or keeps the affected dependency, and an approval policy now requires security sign-off on every fix. The agent's first proposed patch fails the pipeline when an integration test catches a regression. The agent revises the patch based on the test failure, and the second attempt passes. Developers review the changes, security signs off under the stricter policy, and merges proceed across the campaign.\n\nAt the next audit review, the security team presents a report showing how policies were enforced and risks were reduced during the campaign. It includes scan results, policies applied, approvers, and merge timestamps for every MR in every affected project. The evidence was automatically generated in flight, not assembled after the fact.\n\n## Close the gaps now\n\nMythos exists today, and comparable models will be in attacker hands within a year. 
Every month between now and then is a chance to strengthen your software supply chain.\n\nAsk these questions about your pipeline:\n\n* How do you enforce that security scans run on every merge request, not just the projects where teams configured them?\n\n* If a compromised package entered your dependency tree today, would your pipeline catch it before build?\n\n* When a scanner flags a critical finding, how many tool boundaries does it cross before a developer starts the fix?\n\n* If an AI agent proposed a code fix for a vulnerability, what process would that fix go through before reaching production, and is that process auditable?\n\n* When auditors ask for evidence that a specific policy was enforced on a specific change, how long does it take to produce?\n\nIf the answers expose gaps, address them now. [Talk to a GitLab solutions architect](https://about.gitlab.com/sales/) about the role of security governance in your development lifecycle.",[722,9,533],"AI/ML","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772195014/ooezwusxjl1f7ijfmbvj.png",{"featured":27,"template":13,"slug":725},"prepare-your-pipeline-for-ai-discovered-zero-days",{"content":727,"config":739},{"title":728,"description":729,"authors":730,"heroImage":732,"date":733,"category":9,"tags":734,"body":738},"Manage vulnerability noise at scale with auto-dismiss policies","Learn how to cut through scanner noise and focus on the vulnerabilities that matter most with GitLab security, including use cases and templates.",[731],"Grant Hickman","https://res.cloudinary.com/about-gitlab-com/image/upload/v1774375772/kpaaaiqhokevxxeoxvu0.png","2026-03-25",[9,735,563,736,737],"tutorial","features","product","Security scanners are essential, but not every finding requires action. Test code, vendored dependencies, generated files, and known false positives create noise that buries the vulnerabilities that actually matter. 
Security teams waste hours manually dismissing the same irrelevant findings across projects and pipelines. They experience slower triage, alert fatigue, and developer friction that undermines adoption of security scanning itself.\n\nGitLab's auto-dismiss vulnerability policies let you codify your triage decisions once and apply them automatically on every default-branch pipeline. Define criteria based on file path, directory, or vulnerability identifier (CVE, CWE), choose a dismissal reason, and let GitLab handle the rest.\n\n## Why auto-dismiss?\nAuto-dismiss vulnerability policies enable security teams to:\n- **Eliminate triage noise**: Automatically dismiss findings in test code, vendored dependencies, and generated files.\n- **Enforce decisions at scale**: Apply policies centrally to dismiss known false positives across your entire organization.\n- **Maintain audit transparency**: Every auto-dismissed finding includes a documented reason and links back to the policy that triggered it.\n- **Preserve the record**: Unlike scanner exclusions, dismissed vulnerabilities remain in your report, so you can revisit decisions if conditions change.\n\n## How auto-dismiss policies work\n\n1. **Define your policy** in a vulnerability management policy YAML file. Specify match criteria (file path, directory, or identifier) and a dismissal reason.\n\n2. **Merge and activate.** Create the policy via **Secure > Policies > New policy > Vulnerability management policy**. Merge the MR to enable it.\n3. **Run your pipeline.** On every default-branch pipeline, matching vulnerabilities are automatically set to \"Dismissed\" with the specified reason. Up to 1,000 vulnerabilities are processed per run.\n4. 
**Measure the impact.** Filter your vulnerability report by status \"Dismissed\" to see exactly what was cleaned up and validate that the right findings are being handled.\n\n## Use cases with ready-to-use configurations\n\nEach example below includes a policy configuration you can copy, customize, and apply immediately.\n\n### 1. Dismiss test code vulnerabilities\n\nSAST and dependency scanners flag hardcoded credentials, insecure fixtures, and dev-only dependencies in test directories. These are not production risks.\n\n```yaml\nvulnerability_management_policy:\n  - name: \"Dismiss test code vulnerabilities\"\n    description: \"Auto-dismiss findings in test directories\"\n    enabled: true\n    rules:\n      - type: detected\n        criteria:\n          - type: file_path\n            value: \"test/**/*\"\n      - type: detected\n        criteria:\n          - type: file_path\n            value: \"tests/**/*\"\n      - type: detected\n        criteria:\n          - type: file_path\n            value: \"spec/**/*\"\n      - type: detected\n        criteria:\n          - type: directory\n            value: \"__tests__/*\"\n    actions:\n      - type: auto_dismiss\n        dismissal_reason: used_in_tests\n\n```\n\n### 2. 
Dismiss vendored and third-party code\n\nVulnerabilities in `vendor/`, `third_party/`, or checked-in `node_modules` are managed upstream and not actionable for your team.\n\n```yaml\nvulnerability_management_policy:\n  - name: \"Dismiss vendored dependency findings\"\n    description: \"Findings in vendored code are managed upstream\"\n    enabled: true\n    rules:\n      - type: detected\n        criteria:\n          - type: directory\n            value: \"vendor/*\"\n      - type: detected\n        criteria:\n          - type: directory\n            value: \"third_party/*\"\n      - type: detected\n        criteria:\n          - type: directory\n            value: \"vendored/*\"\n    actions:\n      - type: auto_dismiss\n        dismissal_reason: not_applicable\n\n```\n\n### 3. Dismiss known false positive CVEs\n\nCertain CVEs are repeatedly flagged but don't apply to your usage context. Teams dismiss these manually every time they appear. Replace the example CVEs below with your own.\n\n```yaml\nvulnerability_management_policy:\n  - name: \"Dismiss known false positive CVEs\"\n    description: \"CVEs confirmed as false positives for our environment\"\n    enabled: true\n    rules:\n      - type: detected\n        criteria:\n          - type: identifier\n            value: \"CVE-2023-44487\"\n      - type: detected\n        criteria:\n          - type: identifier\n            value: \"CVE-2024-29041\"\n      - type: detected\n        criteria:\n          - type: identifier\n            value: \"CVE-2023-26136\"\n    actions:\n      - type: auto_dismiss\n        dismissal_reason: false_positive\n\n```\n\n### 4. 
Dismiss generated and auto-created code\n\nProtobuf, gRPC, OpenAPI generators, and ORM scaffolding tools produce files with flagged patterns that cannot be patched by your team.\n\n```yaml\nvulnerability_management_policy:\n  - name: \"Dismiss generated code findings\"\n    description: \"Generated files are not authored by us\"\n    enabled: true\n    rules:\n      - type: detected\n        criteria:\n          - type: directory\n            value: \"generated/*\"\n      - type: detected\n        criteria:\n          - type: file_path\n            value: \"**/*.pb.go\"\n      - type: detected\n        criteria:\n          - type: file_path\n            value: \"**/*.generated.*\"\n    actions:\n      - type: auto_dismiss\n        dismissal_reason: not_applicable\n\n```\n\n### 5. Dismiss infrastructure-mitigated vulnerabilities\n\nVulnerability classes like XSS (CWE-79) or SQL injection (CWE-89) that are already addressed by WAF rules or runtime protection. Only use this when mitigating controls are verified and consistently enforced.\n\n```yaml\nvulnerability_management_policy:\n  - name: \"Dismiss CWEs mitigated by WAF\"\n    description: \"XSS and SQLi mitigated by WAF rules\"\n    enabled: true\n    rules:\n      - type: detected\n        criteria:\n          - type: identifier\n            value: \"CWE-79\"\n      - type: detected\n        criteria:\n          - type: identifier\n            value: \"CWE-89\"\n    actions:\n      - type: auto_dismiss\n        dismissal_reason: mitigating_control\n\n```\n\n### 6. Dismiss CVE families across your organization\n\nA wave of related CVEs for a widely-used library your team has assessed? Apply at the group level to dismiss them across dozens of projects. 
The wildcard pattern (e.g., `CVE-2021-44*`) matches all CVEs with that prefix.\n\n```yaml\nvulnerability_management_policy:\n  - name: \"Accept risk for log4j CVE family\"\n    description: \"Log4j CVEs mitigated by version pinning and WAF\"\n    enabled: true\n    rules:\n      - type: detected\n        criteria:\n          - type: identifier\n            value: \"CVE-2021-44*\"\n    actions:\n      - type: auto_dismiss\n        dismissal_reason: acceptable_risk\n\n```\n\n## Quick reference\n\n| Parameter | Details |\n|-----------|---------|\n| **Criteria types** | `file_path` (glob patterns, e.g., `test/**/*`), `directory` (e.g., `vendor/*`), `identifier` (CVE/CWE with wildcards, e.g., `CVE-2023-*`) |\n| **Dismissal reasons** | `acceptable_risk`, `false_positive`, `mitigating_control`, `used_in_tests`, `not_applicable` |\n| **Criteria logic** | Multiple criteria within a rule = AND (must match all). Multiple rules within a policy = OR (match any). |\n| **Limits** | 3 criteria per rule, 5 rules per policy, 5 policies per security policy project. Vulnerability management policy actions process 1,000 vulnerabilities per pipeline run in the target project until all matching vulnerabilities are processed. |\n| **Affected statuses** | Needs triage, Confirmed |\n| **Scope** | Project-level or group-level (group-level applies across all projects) |\n\n## Getting started\n\nHere's how to get started with auto-dismiss policies:\n\n1. **Identify the noise.** Open your vulnerability report and sort by \"Needs triage.\" Look for patterns: test files, vendored code, the same CVE across projects.\n\n2. **Pick a scenario.** Start with whichever use case above accounts for the most findings.\n\n3. **Record your baseline.** Note the number of \"Needs triage\" vulnerabilities before creating a policy.\n\n4. **Create and enable.** Navigate to **Secure > Policies > New policy > Vulnerability management policy**. Paste the configuration from the use case above, then merge the MR.\n\n5. 
**Validate results.** After the next default-branch pipeline, filter by status \"Dismissed\" to confirm the right findings were handled.\n\nFor full configuration details, see the [vulnerability management policy documentation](https://docs.gitlab.com/user/application_security/policies/vulnerability_management_policy/#auto-dismiss-policies).\n\n> Ready to take control of vulnerability noise? [Start a free GitLab Ultimate trial](https://about.gitlab.com/free-trial/) and configure your first auto-dismiss policy today.\n",{"slug":740,"featured":27,"template":13},"auto-dismiss-vulnerability-management-policy",{"content":742,"config":751},{"title":743,"description":744,"authors":745,"heroImage":747,"date":748,"body":749,"category":9,"tags":750},"GitLab 18.10 brings AI-native triage and remediation ","Learn about GitLab Duo Agent Platform capabilities that cut noise, surface real vulnerabilities, and turn findings into proposed fixes.",[746],"Alisa Ho","https://res.cloudinary.com/about-gitlab-com/image/upload/v1773843921/rm35fx4gylrsu9alf2fx.png","2026-03-19","GitLab 18.10 introduces new AI-powered security capabilities focused on improving the quality and speed of vulnerability management. Together, these features can help reduce the time developers spend investigating false positives and bring automated remediation directly into their workflow, so they can fix vulnerabilities without needing to be security experts.\n\nHere is what’s new:\n\n* [**Static Application Security Testing (SAST) false positive detection**](https://docs.gitlab.com/user/application_security/vulnerabilities/false_positive_detection/) **is now generally available.** This flow uses an LLM for agentic reasoning to determine the likelihood that a vulnerability is a false positive or not, so security and development teams can focus on remediating critical vulnerabilities first.  
\n* [**Agentic SAST vulnerability resolution**](https://docs.gitlab.com/user/application_security/vulnerabilities/agentic_vulnerability_resolution/) **is now in beta.** Agentic SAST vulnerability resolution automatically creates a merge request with a proposed fix for verified SAST vulnerabilities, which can shorten time to remediation and reduce the need for deep security expertise.  \n* [**Secret false positive detection**](https://docs.gitlab.com/user/application_security/vulnerabilities/secret_false_positive_detection/) **is now in beta.** This flow brings the same AI-powered noise reduction to secret detection, flagging dummy and test secrets to save review effort.\n\nThese flows are available to GitLab Ultimate customers using GitLab Duo Agent Platform. \n\n## Cut triage time with SAST false positive detection\n\nTraditional SAST scanners flag every suspicious code pattern they find, regardless of whether code paths are reachable or frameworks already handle the risk. Without runtime context, they cannot distinguish a real vulnerability from safe code that just looks dangerous.\n\nThis means developers could spend hours investigating findings that turn out to be false positives. Over time, that can erode confidence in the report and slow down the teams responsible for fixing real risks.\n\nAfter each SAST scan, GitLab Duo Agent Platform automatically analyzes new critical and high severity findings and attaches:\n\n* A confidence score indicating how likely the finding is to be a false positive  \n* An AI-generated explanation describing the reasoning  \n* A visual badge that makes “Likely false positive” versus “Likely real” easy to scan in the UI\n\nThese findings appear in the [Vulnerability Report](https://docs.gitlab.com/user/application_security/vulnerability_report/), as shown below. 
You can filter the report to focus on findings marked as “Not false positive” so teams can spend their time addressing real vulnerabilities instead of sifting through noise.\n\n![Vulnerability report](https://res.cloudinary.com/about-gitlab-com/image/upload/v1773844787/i0eod01p7gawflllkgsr.png)\n\nGitLab Duo Agent Platform's assessment is a recommendation. You stay in control: you decide whether each flagged finding is valid, and you can audit the agent's reasoning at any time to build confidence in the model.\n\n## Turn vulnerabilities into automated fixes\n\nKnowing that a vulnerability is real is only half the work. Remediation still requires understanding the code path, writing a safe patch, and making sure nothing else breaks.\n\nIf the SAST false positive detection flow identifies a vulnerability as likely real, the Agentic SAST vulnerability resolution flow automatically:\n\n1. Reads the vulnerable code and surrounding context from your repository  \n2. Generates high-quality proposed fixes  \n3. Validates fixes through automated testing  \n4. Opens a merge request with a proposed fix that includes:  \n   * Concrete code changes  \n   * A confidence score  \n   * An explanation of what changed and why\n\nIn this demo, you’ll see how GitLab can automatically take a SAST vulnerability all the way from detection to a ready-to-review merge request. 
Watch how the agent reads the code, generates and validates a fix, and opens an MR with clear, explainable changes so developers can remediate faster without being security experts.\n\n\u003Ciframe src=\"https://player.vimeo.com/video/1174573325?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" style=\"position:absolute;top:0;left:0;width:100%;height:100%;\" title=\"GitLab 18.10 AI SAST False Positive Auto Remediation\">\u003C/iframe>\u003Cscript src=\"https://player.vimeo.com/api/player.js\">\u003C/script>\n\nAs with any AI-generated suggestion, you should review the proposed merge request carefully before merging.\n\n## Surface real secrets\n\nSecret detection is only useful if teams trust the results. When reports are full of test credentials, placeholder values, and example tokens, developers may waste time reviewing noise instead of fixing real exposures. That can slow remediation and decrease confidence in the scan.\n\nSecret false positive detection helps teams focus on the secrets that matter so they can reduce risk faster. When it runs on the default branch, it will automatically:\n\n1. Analyze each finding to spot likely test credentials, example values, and dummy secrets  \n2. Assign a confidence score for whether the finding is a real risk or a likely false positive  \n3. Generate an explanation for why the secret is being treated as real or noise  \n4. 
Add a badge in the Vulnerability Report so developers can see the status at a glance\n\nDevelopers can also trigger this analysis manually from the Vulnerability Report by selecting **“Check for false positive”** on any secret detection finding, helping them clear out findings that do not pose risk and focus on real secrets sooner.\n\n## Try AI-powered security today\n\nGitLab 18.10 introduces capabilities that cover the full vulnerability workflow, from cutting false positive noise in SAST and secret detection to automatically generating merge requests with proposed fixes.\n\nTo see how AI-powered security can help cut review time and turn findings into ready-to-merge fixes, [start a free trial of GitLab Duo Agent Platform today](https://about.gitlab.com/gitlab-duo-agent-platform/?utm_medium=blog&utm_source=blog&utm_campaign=eg_global_x_x_security_en_).",[737,9,736],{"featured":12,"template":13,"slug":752},"gitlab-18-10-brings-ai-native-triage-and-remediation",{"promotions":754},[755,769,780,791],{"id":756,"categories":757,"header":759,"text":760,"button":761,"image":766},"ai-modernization",[758],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":762,"config":763},"Get your AI maturity score",{"href":764,"dataGaName":765,"dataGaLocation":242},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":767},{"src":768},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":770,"categories":771,"header":772,"text":760,"button":773,"image":777},"devops-modernization",[737,566],"Are you just managing tools or shipping innovation?",{"text":774,"config":775},"Get your DevOps maturity 
score",{"href":776,"dataGaName":765,"dataGaLocation":242},"/assessments/devops-modernization-assessment/",{"config":778},{"src":779},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":781,"categories":782,"header":783,"text":760,"button":784,"image":788},"security-modernization",[9],"Are you trading speed for security?",{"text":785,"config":786},"Get your security maturity score",{"href":787,"dataGaName":765,"dataGaLocation":242},"/assessments/security-modernization-assessment/",{"config":789},{"src":790},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"id":792,"paths":793,"header":796,"text":797,"button":798,"image":803},"github-azure-migration",[794,795],"migration-from-azure-devops-to-gitlab","integrating-azure-devops-scm-and-gitlab","Is your team ready for GitHub's Azure move?","GitHub is already rebuilding around Azure. Find out what it means for you.",{"text":799,"config":800},"See how GitLab compares to GitHub",{"href":801,"dataGaName":802,"dataGaLocation":242},"/compare/gitlab-vs-github/github-azure-migration/","github azure migration",{"config":804},{"src":779},{"header":806,"blurb":807,"button":808,"secondaryButton":813},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":809,"config":810},"Get your free trial",{"href":811,"dataGaName":49,"dataGaLocation":812},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":504,"config":814},{"href":53,"dataGaName":54,"dataGaLocation":812},1777393964358]