On https://exiftool.org/ there are links to https://exiftool.org/exiftool-12.01.zip and https://exiftool.org/checksums.txt .
Both the ZIP archive and the checksum file are hosted on the same machine. This means that an attacker who has compromised the server can also replace checksums.txt with a fake one matching the malware-infected ZIP archive. Doesn't that make checking the checksum pointless?
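To make the objection concrete, here is what the checksum check actually buys you: it detects accidental corruption in transit, but only under the assumption that the published hash itself is authentic. A minimal sketch in Python (the download is stubbed with local bytes purely for illustration; the filenames are the ones from the question):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Compute the SHA-256 digest of a downloaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Simulated download: in practice this would be the bytes of
# exiftool-12.01.zip fetched from the server.
archive = b"pretend this is the ZIP archive"

# The expected digest, as it would appear in checksums.txt.
# Here it is derived from the same bytes -- which is exactly the
# problem: both artifacts come from the same place.
expected = sha256_hex(archive)

# The check catches accidental corruption in transit...
assert sha256_hex(archive) == expected

# ...but a tampered archive is only detected if checksums.txt
# was NOT also replaced with a matching digest:
tampered = b"malware-infected archive"
assert sha256_hex(tampered) != expected
```

So with both files on one server, the check defends against flaky downloads and mirrors, not against a compromised origin.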
Maybe the answer is simply "they can't afford a separate server", which would be understandable. But then what is the point of publishing the checksums at all?
One idea I had was that maybe the implication is that I should store these hashes "for the next time", and thus be reasonably sure (or at least slightly less unsure) that nobody has compromised the server since my last check. But the checksums are specific to the current files: they are not some kind of general author's signature that I could use to verify that the real author signed the new binaries. Since each hash only ever corresponds to the current version of the binaries, there is no value in pre-fetching and storing them locally to compare next time. So that idea goes out of the window as well.
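What would actually fix this is the "author's signature" idea taken literally: a digital signature made with a private key that an attacker does not obtain by compromising the web server, verified against a public key obtained once out of band (or pinned in the updater on first install). A sketch using Ed25519 from the third-party `cryptography` package — an assumption for illustration only; ExifTool does not actually publish Ed25519 signatures:

```python
# Sketch only: assumes the `cryptography` package (pip install cryptography).
# It illustrates the scheme that would make the "same server" objection moot.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- done once, on the author's machine; the private key never
# --- touches the distribution server
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # shipped out of band / pinned in the updater

# --- done per release: sign the archive, publish signature next to it
archive = b"bytes of the release ZIP"
signature = private_key.sign(archive)

# --- done by the updater on every download
def verify(pub, data: bytes, sig: bytes) -> bool:
    try:
        pub.verify(sig, data)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

assert verify(public_key, archive, signature)         # genuine release
assert not verify(public_key, b"malware", signature)  # swapped archive is caught
```

Unlike a checksum, the signature is bound to the author's key rather than to one file, so the pinned public key keeps its value across releases: replacing files on the server is no longer enough, because the attacker would also need the private key.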
I refuse to believe that a person smart enough to make such a project would not have thought of this, so I assume that I must be missing something.
(Also, this is far from the only project where I’ve seen this, so this is not specifically about ExifTool. I just used it as an example since I’m trying to code a secure update mechanism for this program.)