A simpler way to do this, especially if you do tagging in your repositories, is to use `git describe`. For example:

$ git describe --dirty
v1.4.1-1-gde18fe90-dirty

The format is <the most recent tag>-<the number of commits since that tag>-g<the short git hash>-<dirty, but only if the repo is dirty>.

If the repo isn't dirty, then the output you get excludes that part:

$ git describe --dirty
v1.4.1-1-gde18fe90

If you're using lightweight tags (the default) and not annotated tags (the kind with messages, signatures, etc.), you may want to add `--tags`, because otherwise it'll skip over any lightweight tags.

The other nice thing about this is that, if the repo is not -dirty, you can use the output from `git describe` in other git commands to reference that commit:

$ git show -s v1.4.1-1-gde18fe90
commit de18fe907edda2f2854e9813fcfbda9df902d8f1 (HEAD -> 1.4.1-release, origin/HEAD, origin/1.4.1-release)
Author: rockowitz <rockowitz@minsoft.com>
Date: Sun May 28 17:09:46 2023 -0400

Create codacy.yml
`git describe` is great.
Also, if you don't feel ready to commit to tagging your repository you can start with the `--always` flag which falls back to just the short commit hash.
The article's script isn't far from `git describe --always --dirty`, which can be a good place to start, and then it gets better as you start tagging.
The one caveat to this is that you must perform a sufficiently-deep clone that you can actually reach the tag.
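This mostly bites in CI, where shallow clones are common; if `git describe` can't reach a tag, something like the following fixes it (assuming the remote is called origin):

$ git fetch --tags --unshallow origin

or just clone with full history in the first place (e.g. fetch-depth: 0 for GitHub Actions' checkout step).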
That barely scratches the surface when it comes to reproducible C and C++ builds. In fact, the topic of reproducible builds assumes your sources are already the same; the sources aren't really the hard part here.
You need to control the version of every single library header you use from outside your source tree (stdlib, OS headers, third party), and have a strategy for dealing with random/date-time values that can end up embedded in the binary.
You also need to capture the version of the toolchain, and so on. You should also have a traceable link to the version of your specifications.
Just use ClearCase/ClearMake, it's been doing all of this software configuration auditing stuff for you since the 1990s.
Also the compiler/linker used to build it.
As well as the toolchain used to compile your toolchain, through multiple levels, and all compiler flags along the path, and so on, down to some "seed" from which everything is built.
Guix' full-source bootstrap is pretty enlightening on that topic: https://guix.gnu.org/manual/devel/en/html_node/Full_002dSour...
How would you even start solving these?
A good package manager, e.g. GNU Guix, lets you define a reproducible environment of all of your dependencies. This accounts for all of those external headers and shared libraries, which will be made available in an isolated build environment that only contains them and nothing else.
Eliminating nondeterminism from your builds might require some thinking; there are a number of places it can creep in (timestamps, random numbers, nondeterministic execution, ...). A good package manager can at least give you tooling to validate that you have eliminated nondeterminism (e.g. `guix build --check ...`).
Once you control the entire environment and your build is reproducible in principle, you might still encounter some fun issues, like "time traps". Guix has a great blog post about some of these issues and how they mitigate them: https://guix.gnu.org/en/blog/2024/adventures-on-the-quest-fo...
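For concreteness, roughly what that looks like on the command line (the package names are illustrative, and the last command assumes you have written a Guix package definition for your project):

$ guix shell --container gcc-toolchain cmake ninja -- cmake -B build -G Ninja
$ guix shell --container gcc-toolchain cmake ninja -- cmake --build build
$ guix build --check your-package

The --container flag is what gives you the isolated environment that contains only the listed packages, and --check rebuilds an already-built package and compares the result bit for bit.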
Virtualization, imho. Every build gets its own virtual machine, and once the build is released to the public, the VM gets cloned for continued development and the released VM gets archived.
I do this git tags thing with my projects - it helps immensely if the end user can hover over the company logo and get a tooltip with the current version, git tag and hash, and any other information relevant to the build.
Then, if I need to triage something specific, I un-archive the virtualized build environment, and everything that was there in the original build is still there.
This is a very handy method for keeping large code bases under control, and has been very effective over the years in going back to triage new bugs found, fixing them, and so on.
Back in the PS2 era of game development, we didn't have much in the way of virtual machines to work with. And, making a shippable build involved wacky custom hardware that wouldn't work in a VM anyway. So, instead we had The Build Machine.
The Build Machine would be used to make The Gold Master Disc. A physical DVD that would be shipped to the publisher to be reproduced hopefully millions of times. Getting The Gold Master Disc to a shippable state would usually take weeks because it involved burning a custom disc format for each build and there was usually no way to debug other than watching what happened on the game screen.
When The Gold Master Disc was finally finalized, The Build Machine would be powered down, unplugged, labeled "This is the machine that made The Gold Master Disc for Game XYZ. DO NOT DISCARD. Do not power on without express permission from the CTO." and archived in the basement forever. Or, until the company shut down. Then, who knows what happens to it.
But, there was always a chance that the publisher or Sony would come back and request to make a change for 1.0.1 version because of some subtle issue that was found later. You don't want to take any chances starting the build process over on a different machine. You make the minimal changes possible on The Build Machine and you get The Gold Master Disc 1.0.1 out ASAP.
Take a look at the decade+ long effort that Debian has put into this problem: https://wiki.debian.org/ReproducibleBuilds
Here's a talk from 2024: https://debconf24.debconf.org/talks/18-reproducible-builds-t...
Several distros are above the 90% mark of all packages being byte-for-byte reproducible, and one or two have hit the 99% mark.
> Several distros are above the 90% mark of all packages being byte-for-byte reproducible, and one or two have hit the 99% mark.
Simply incredible.
Explains F-Droid's recent success with Reproducible Builds (as some F-Droid maintainers are also active in the Debian scene): https://f-droid.org/en/2025/05/21/making-reproducible-builds...
AFAIK ClearMake intercepted file system access and recorded the version of everything touched during your build.
Give Nix a look sometime, it takes this to a whole new level by including all of the build dependencies in the hash, and their build dependencies and so on. The standard flake workflow even includes the warning about having uncommitted files.
It's quite odd to me that Nix or something similar like Mise isn't completely ubiquitous in software. I feel like I went from having issues with build dependencies to having that aspect of software development completely solved as soon as I adopted Nix.
I absolutely can't imagine not using some kind of tool like this. Feels as vital as VCS to me now.
Agreed. Recently started a new gig and set up Mise (previously had used nix for this) in our primary repos so that we can all share dependencies, scripts, etc. The new monorepo mode is great. Basically no one has complained and it's made everyone's lives a lot easier. Can't imagine working any other way — having the same tools everywhere is really great.
I'll also say I have absolutely 0 regrets about moving from Nix to Mise. All the common tools we want are available, it's especially easy to install tools from pip or npm and have the environments automanaged. The docs are infinity times better. And the speed of install and shell sourcing is, you guessed it, much better. Initial setup and install is also fantastically easier. I understand the ideology behind Nix, and if I were working on projects where some of our tools weren't pre-packageable or had weird conflicting runtime lib problems I'd get it, but basically everything these days has prebuilt static binaries available.
Mise is pretty nice, I'd recommend it over all the other gazillion version-manager things out there, but it's not without its own weak spots: I tried mise for a PHP project; neither of the backends available for PHP had a binary for macOS, and both of them failed to build it. I now use a flake.nix, along with direnv and `use flake`. The nix language definitely makes for some baffling boilerplate around the dependencies list, but devs unfamiliar with nix can ignore it and just paste in the package name from nixpkgs search.
There's also jbadeau/mise-nix that lets you use flakes in mise, but I figured at that point I may as well just use flake.nix.
The beauty of mise is that as long as someone is hosting a precompiled binary for you, it's easy to get it. I just repro'd and yeah, `mise use php` fails for me on my machine because I don't have any dev headers. But looks like there's an easy workaround using the `ubi` downloader:
https://github.com/jdx/mise/discussions/4720#discussioncomme...
or see the first comment on this thread to see a way to explicitly specify where to find the binaries for each platform:
https://github.com/jdx/mise/discussions/4720#discussioncomme...
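The rough shape of that workaround in .mise.toml (the repository name below is just a placeholder for whichever repo actually hosts prebuilt PHP binaries for your platform, not a recommendation):

[tools]
"ubi:example-org/php-binaries" = "latest"

or one-off on the command line:

$ mise use ubi:example-org/php-binaries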
Having these kinds of "eject" options is one of the reasons I really appreciate Mise. Not sure this would work for you, but I'd rather be able to do this than have to manage/support everyone on my dev team installing and maintaining Nix.
We'd have been a lot further along if tools like make had ever adopted hashes for freshness checking rather than timestamps. We'd have ccache built in to make, make could hash entire targets, and now we're halfway to derivations. Of course that's handwaving over the tricky problem of making sure targets build reproducibly, but perhaps compiler toolchains would have taken more care to ensure it.
I'd say the sad part is that nix really works well when the toolchain does caching transparently. But to deliver good DX outside of nix, you kind of want great porcelain tooling that handles everything behind the scenes - downloading of libraries, building said libraries, linking everything together. Sometimes people choose to just embed a whole programming language to make their build system work e.g. gradle. Cargo just does everything. Nix then can't really granularly build everything piece by piece when building rust crates with Cargo - you just get to rebuild every dependency any time the derivation is built and any one input changed. I wonder how much less time would've been wasted if newer languages chose to build on top of nix. Of course, nix would need to become slightly more compatible with Windows and other OSes for this to be practical.
Timestamps have the property of being easily comparable; you can always tell if one file is older than the other. If you were to use hashes for the same purpose, you'd have to keep a database of expected hashes, and comparing them would be a less trivial task, etc. It's doable, but it would be a very differently designed (and much more computationally expensive) program than make.
I bet we could get pretty far with symlinks, but then again even those were an exotic feature on some of make's supported platforms. Nowadays, may as well use sqlite.
I think bazel is the tool a lot of people are converging towards, but it turns out that maintaining complex build setups is a lot of work.
Yes, especially as you can do things like
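Something along these lines, assuming the project is a flake hosted on GitHub (the flake reference is a placeholder):

$ nix build github:example-org/example-project/<commit-sha>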
There is no need to keep anything around, or roll your own nix equivalent; you can just look up the output by commit.

Git hashes have nothing whatsoever to do with whether you can do a clean build of the same tree twice with the same results, bit for bit.
Git hashes or tags can help identify what was built: the inputs.
You only need to know that for traceability: when you hold the released outputs, but do not hold (or are not sure you hold) the matching inputs.
If builds are reproducible, the traceability becomes more meaningful.
In the TXR project, I have a ./configure option called --build-id. This sets an ID that is appended to the version, which is in the executable. It is empty by default and not used. It is meant to be useful for people who interact with the code, so they can check what they are running (things can get confusing when you are going back and forth among versions, or making local changes).

If you set the build ID to the word "git", then it is calculated using:

git describe --tags --dirty

That's probably what this author should be using. It gives you a meaningful ID that is related to the most recent release tag, and whether the repo was dirty. We are (sadly, only) 20 commits after 302, at a commit whose short hash is 77c99b74e, and the repo is in a modified state.

I have it rigged in the Makefile so that it keeps track of the most recent build ID in a little .build_id file. If the build ID changes relative to what is in that file, the Makefile will force a rebuild of the .o files which incorporate the build ID.
Also, there is no need to be generating dynamic #include material just for this. A simple -Dsymbol=var option in the CFLAGS will define a preprocessor symbol:
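A minimal sketch of both pieces together, the .build_id stamp and the -D define (the variable and file names here are illustrative, not TXR's actual ones):

BUILD_ID := $(shell git describe --tags --dirty)

# Refresh .build_id only when the ID actually changes, so the objects that
# embed it are rebuilt exactly then and not on every run of make.
.build_id: FORCE
	@echo '$(BUILD_ID)' | cmp -s - $@ || echo '$(BUILD_ID)' > $@

# No generated header needed: the ID goes straight in as a preprocessor symbol.
version.o: version.c .build_id
	$(CC) $(CFLAGS) -DBUILD_ID='"$(BUILD_ID)"' -c $< -o $@

.PHONY: FORCE
FORCE:

and in version.c something like: const char *build_id = BUILD_ID;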
Yep, your way of framing it is clearer. Embedding version information in released binary artefacts helps answer the question of "what version of the software even produced this output / is crashing in production?". This is the problem the author is focusing on, and it is an important thing to sort out early in any serious project, especially if you ship software that gets deployed to customer machines. Setting this up early will probably even pay for itself before the software is in production, since knowing what version is deployed where reduces the time wasted on confusion about which experimental version is deployed to which non-prod environment.
It's addressing a distinct problem from "if we rebuild any given version, perhaps some later time, do we even get the same binary?" which is what people usually mean by "reproducible builds".
Your tip that injecting build ids can be done with linker flags without needing to generate header files is a great one.
Passing version info without code generation using linker flags can also be done in other languages & toolchains, e.g. with Go projects, the Go linker exposes an -X flag that can be used to set the value of a string variable in a package [1] [2].
A step beyond this could be to explicitly build a feature into your software to help the user report bugs or request support, e.g. user clicks a button and the software dumps its own version info, info about what the user is doing & their machine, packages it up and sends in to your support queue. Doesn't make sense doing this for backend services, but you do see support features like this in PC games to help users easily send high quality bug reports.
[1] https://pkg.go.dev/cmd/link
[2] https://www.digitalocean.com/community/tutorials/using-ldfla...
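A small sketch of that mechanism in a Go project (the module path, variable name, and use of git describe are illustrative):

// version.go
package main

// Overridden at link time via -ldflags "-X main.version=..."; "dev" is the fallback.
var version = "dev"

built with something like:

$ go build -ldflags "-X main.version=$(git describe --tags --dirty --always)"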
In short, "traceable bill of materials" != "reproducible build"
Which golfs to "traceable" != "reproducible"
> Passing version info without code generation using linker flags can also be done in other languages & toolchains, e.g. with Go projects, the Go linker exposes an -X flag
Someday, Go programs won't have to do this: https://github.com/golang/go/issues/50603
For those of you using CMake, have a look at the module below:
https://github.com/xrootd/xrootd/blob/master/cmake/XRootDVer...
and also the genversion.sh script at the top of the repo.
I use these plus #cmakedefine and git tags to manage the project version without having to do it via commits.
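For anyone who hasn't seen that pattern, a rough sketch of the configure_file / #cmakedefine part (names are illustrative, not the actual XRootD module's):

execute_process(
  COMMAND git describe --tags --dirty --always
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
  OUTPUT_VARIABLE GIT_DESCRIBE
  OUTPUT_STRIP_TRAILING_WHITESPACE)
configure_file(version.h.in version.h)

with version.h.in containing:

#cmakedefine GIT_DESCRIBE "@GIT_DESCRIBE@"

which expands to a #define when git describe produced a value, and to /* #undef GIT_DESCRIBE */ otherwise.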
Here's a short writeup of a bit of my build system for a project I'm working on. It's pretty simple, and is just a relatively clean way of recording the repository state when code was compiled, so I can reproduce results later on. Just thought the interaction between git, cmake, and C++ was a bit nice!
This is many useful things, but it's far from a reproducible C++ build. That'd require you to ensure bit-for-bit identical builds when you reproduce, and logging the repository state is just a tiny first step to get there.
https://nikhilism.com/post/2020/windows-deterministic-builds... is a good resource on some of the other steps needed. It's... a non-trivial journey :)
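To give a flavour of the GCC/Clang side of that journey, a few of the usual knobs (not exhaustive, and MSVC has its own equivalents such as the linker's /Brepro):

# strip absolute paths from debug info and __FILE__
CFLAGS += -ffile-prefix-map=$(CURDIR)=.
# pin __DATE__ and __TIME__ to the last commit's time instead of the wall clock
export SOURCE_DATE_EPOCH := $(shell git log -1 --format=%ct)
# deterministic static archives (no timestamps or uids in the index)
ARFLAGS = rcD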
> Logging Git hashes makes C++ builds reproducible and easy to track.
nix fixes this
had to be said
Can I build my embedded firmware with nix using a Windows only toolchain?
(Fyi I just used something like the solution from the article, with the hash embedded in the binary image to be burned to ROM masks. The gaps in toolchain versioning and not building with dirty checkouts can be managed with self discipline /internal checks)
Generally development tools run fine under wine, so I'd guess it would be fine. Running a windows binary within wine within WSL on windows does seem a little insane tho!
Now I think of it, WSL can generally call out to Windows tools - you would need to run in a Windows file system mounted into WSL. It just won't port to a Linux-based CI job without Wine. The ideal is a build and test run that is reproducible in CI and locally.