A couple of these were probably actual bugs in edge cases. Most of the
changes I'm fine with. The fact that aeson's object returns something
that we know will be an Object, but the type checker does not know, is
kind of annoying.
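To illustrate the annoyance (just a sketch, not code from git-annex):
aeson's object returns a Value even though it always builds an Object,
so code that needs the Object still has to pattern match.

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Aeson

    -- object always constructs an Object, but its type is Value,
    -- so getting at the Object needs an "impossible" fallback case.
    metadataObject :: Object
    metadataObject = case object ["foo" .= (1 :: Int)] of
        Object o -> o
        _ -> error "object did not return an Object"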
Git will eventually switch to sha2 and there will not be one single
shaSize anymore, but two (40 and 64).
Changed all parsers for git plumbing output to support both sizes of
shas.
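A minimal sketch of the kind of check involved (isSha is a hypothetical
helper, not necessarily what the parsers actually use):

    import qualified Data.ByteString.Char8 as B
    import Data.Char (isHexDigit)

    -- accept either a sha1 (40 hex chars) or a sha256 (64 hex chars)
    isSha :: B.ByteString -> Bool
    isSha s = B.length s `elem` [40, 64] && B.all isHexDigit s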
One potential problem this does not deal with: if somewhere in
git-annex it reads two shas from different sources and compares them
to see if they're the same sha, that would fail if they're the sha1 and
sha256 of the same value. I don't know if that will really be a concern.
smudge: When annex.largefiles=anything, files that were already stored in
git and have not been modified could sometimes be converted to being
stored in the annex. Changes in 7.20191024 made this more of a problem.
This case is now detected and prevented.
Adds a dependency on filepath-bytestring, an as yet unreleased fork of
filepath that operates on RawFilePath.
Git.Repo also changed to use RawFilePath for the path to the repo.
This does eliminate some RawFilePath -> FilePath -> RawFilePath
conversions. And filepath-bytestring's </> is probably faster.
But I don't expect a major performance improvement from this.
This is mostly groundwork for making Annex.Location use RawFilePath,
which will allow for a conversion-free pipeline.
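Roughly, this lets path building stay in ByteString throughout; a
sketch (annexObjectDir is a made-up example, not the actual
Annex.Locations function):

    {-# LANGUAGE OverloadedStrings #-}
    import System.Posix.ByteString.FilePath (RawFilePath)
    import System.FilePath.ByteString ((</>))

    -- paths stay RawFilePath (a ByteString) end to end; no String conversions
    annexObjectDir :: RawFilePath -> RawFilePath
    annexObjectDir repo = repo </> ".git" </> "annex" </> "objects"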
Goal is to make git-annex faster by using ByteString for all the
worktree traversal. For now, this is focusing on Command.Find,
in order to benchmark how much it helps. (All other commands are
temporarily disabled)
Currently in a bad, unbuildable in-between state.
This does not change the overall license of the git-annex program, which
was already AGPL due to a number of source files being AGPL already.
Legally speaking, I'm adding a new license under which these files are
now available; I already released their current contents under the GPL
license. Now they're dual licensed GPL and AGPL. However, I intend
for all my future changes to these files to only be released under the
AGPL license, and I won't be tracking the dual licensing status, so I'm
simply changing the license statement to say it's AGPL.
(In some cases, others wrote parts of the code of a file and released it
under the GPL; but in all cases I have contributed a significant portion
of the code in each file and it's that code that is getting the AGPL
license; the GPL license of other contributors allows combining with
AGPL code.)
My concern with using bytestring for this is the file needs to be split
into lines, and the encoding is not known. It's safe to split a utf-8
encoded file on the \n byte; only newlines get encoded to that byte in utf-8.
And this code already assumes utf-8 or ascii encoding, because it used
the filesystem encoding.
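So splitting the raw bytes is enough; a sketch of the idea:

    import qualified Data.ByteString as S

    -- 0x0a only ever encodes '\n' in utf-8, never part of a multi-byte
    -- character, so splitting on that byte cannot break up a character
    fileLines :: S.ByteString -> [S.ByteString]
    fileLines = S.split 10  -- 10 is the '\n' byte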
This was badly named; it's not necessarily a blob, but anything that a
tree can refer to.
Also removed the Show instance that was used for serialization to git
format; use fmtTreeItemType instead.
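Something along these lines (a sketch; the actual constructors and
formatting in Git.Types may differ):

    data TreeItemType = TreeFile | TreeExecutable | TreeSymlink | TreeSubmodule
        deriving (Eq)

    -- explicit serialization to git's tree entry modes, instead of a Show instance
    fmtTreeItemType :: TreeItemType -> String
    fmtTreeItemType TreeFile = "100644"
    fmtTreeItemType TreeExecutable = "100755"
    fmtTreeItemType TreeSymlink = "120000"
    fmtTreeItemType TreeSubmodule = "160000"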
This commit was supported by the NSF-funded DataLad project.
This does mean that it has to write out temp files containing updated
objects for the merge. So it may use more disk space and disk IO, but that
should generally win out over needing to launch N separate
git hash-object processes.
None of these are very likely at all to ever be ambiguous, since tree
refs almost never have symbolic names and the sha is very unlikely
to be in the work tree. But let's get it right!
Union merges involving two or more repositories could sometimes result in
data from one repository getting lost. This could result in the location
log data becoming wrong, and fsck being needed to fix it.
NB: I audited for any other occurrences of this problem. There are other
places than union merge where multiple changes are fed into update-index
in a stream, but they all involve working copy files being staged, or their
deletion being staged, and in this case it's fine for the later changes
to override the earlier ones.
I'm down to 9 places in the code that can produce unwaited for zombies.
Most of these are pretty innocuous, at least for now; they are only
used in short-running commands, or commands that run a set of
actions and explicitly reap zombies after each one.
The one from Annex.Branch.files could be trouble later,
since both Command.Fsck and Command.Unused can trigger it,
and the assistant will be doing those eventually. Ditto the one in
Git.LsTree.lsTree, which Command.Unused uses.
The only ones currently affecting the assistant, though, are
in Git.LsFiles. Several threads use several of those.
(And yeah, using pipes or ResourceT would be a less ad-hoc approach,
but I don't really feel like ripping my entire code base apart right
now to change a foundation monad. Maybe one of these days..)
Under ghc 7.4, this seems to be able to handle all filename encodings
again. Including filename encodings that do not match the LANG setting.
I think this will not work with earlier versions of ghc, since it uses
some ghc internals.
Turns out that ghc 7.4 has a special filesystem encoding that it uses when
reading/writing filenames (as FilePaths). This encoding is documented
to allow "arbitrary undecodable bytes to be round-tripped through it".
So, to get FilePaths from eg, git ls-files, set the Handle that is reading
from git to use this encoding. Then things basically just work.
However, I have not found a way to make Text read using this encoding.
Text really does assume unicode. So I had to switch back to using String
when reading/writing data to git. Which is a pity, because it's some
percent slower, but at least it works.
Note that stdout and stderr also have to be set to this encoding, or
printing out filenames that contain undecodable bytes causes a crash.
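The gist, using GHC 7.4's GHC.IO.Encoding (the helper name here is just
illustrative):

    import GHC.IO.Encoding (getFileSystemEncoding)
    import System.IO

    -- make a Handle round-trip undecodable bytes in filenames
    fileEncoding :: Handle -> IO ()
    fileEncoding h = hSetEncoding h =<< getFileSystemEncoding

    -- applied to the pipe from git ls-files, and to stdout and stderr,
    -- eg: mapM_ fileEncoding [fromgit, stdout, stderr]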
IMHO this is a misfeature in ghc: the user can pass you a filename,
which you can readFile, etc, but by default, putStr of that filename may
cause a crash!
Git.CheckAttr gave me special trouble, because the filenames I got back
from git, after feeding them in, had further encoding breakage.
Rather than try to deal with that, I just zip up the input filenames
with the attributes, which must be returned in the same order they were
queried for this to work.
Also of note is an apparent GHC bug I worked around in Git.CheckAttr. It
used to forkProcess and feed git from the child process. Unfortunately,
after this forkProcess, accessing the `files` variable from the parent
returns []. Not the value that was passed into the function. This screams
of a bad bug, that's clobbering a variable, but for now I just avoid
forkProcess there to work around it. That forkProcess was itself only added
because of a ghc bug, #624389. I've confirmed that the test case for that
bug doesn't reproduce it with ghc 7.4. So that's ok, except for the new ghc
bug I have not isolated and reported. Why does this simple bit of code
attract the ghc bugs? :)
Also, the symlink touching code is currently broken when used on utf-8
filenames in a non-utf-8 locale, or probably on any filename containing
undecodable bytes, and I temporarily commented it out.
Testing b9ac585454, it didn't find the
optimal union merge; the second sha was the one to use, at least in
the case I tried. Let's just try all shas to see if any can be reused.
I stopped using the expensive nub, so despite the use of sets to
sort/uniq file contents, this is probably as fast or faster than it
was before.
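For the union of lines, something like this (hypothetical names)
replaces the nub-based approach:

    import qualified Data.ByteString.Lazy as L
    import qualified Data.Set as S

    -- union of the lines of two objects; Set gives sort + uniq in
    -- O(n log n), instead of nub's O(n^2)
    unionLines :: L.ByteString -> L.ByteString -> [L.ByteString]
    unionLines a b = S.toList $ S.union (asSet a) (asSet b)
      where
        asSet = S.fromList . L.split 10  -- split on the newline byte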
Tries to avoid generating a new object when the merged content has the same
lines that were in the old object.
I've noticed some merge commits that only move lines around, like this:
- 1323478057.181191s 1 be23c3ac-0ee5-11e0-b185-3b0f9b5b00c5
1323204972.062151s 1 87e06c7a-7388-11e0-ba07-03cdf300bd87
++1323478057.181191s 1 be23c3ac-0ee5-11e0-b185-3b0f9b5b00c5
Unsure if this will really save anything in practice, since it only looks
at one of the two old objects, and maybe I didn't pick the best one.
In git, a Ref can be a Sha, or a Branch, or a Tag. I added type aliases for
those. Note that this does not prevent mixing up of eg, refs and branches
at the type level. Since git really doesn't care, except in rare cases
like git update-ref or git tag -d, that seems ok for now.
There's also a tree-ish, but let's just use Ref for it. A given Sha or Ref
may or may not be a tree-ish, depending on the object type, so there seems
no point in trying to represent it at the type level.
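Roughly, the aliases look like this (a sketch; the actual definitions
live in Git.Types):

    newtype Ref = Ref String
        deriving (Eq, Ord)

    -- the aliases document intent, but don't prevent mixing refs up at
    -- the type level, which is fine since git itself rarely cares
    type Branch = Ref
    type Sha = Ref
    type Tag = Ref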
Before, a merge was first calculated, by running various actions that
called git and built up a list of lines, which were at the end sent
to git update-index. This necessarily used space proportional to the size
of the diff between the trees being merged.
Now, lines are streamed into git update-index from each of the actions in
turn.
Runtime size of git-annex merge when merging 50000 location log files
drops from around 100 mb to a constant 4 mb.
Presumably it runs quite a lot faster, too.
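The shape of the streaming interface, roughly (names are illustrative,
not necessarily the exact ones in the code):

    import System.IO
    import qualified Data.ByteString.Lazy as L

    -- each merge action gets a callback and feeds update-index one
    -- record at a time, so the full diff is never held in a list
    type Streamer = (L.ByteString -> IO ()) -> IO ()

    streamUpdateIndex :: Handle -> [Streamer] -> IO ()
    streamUpdateIndex toGit streamers =
        mapM_ (\streamer -> streamer (L.hPutStr toGit)) streamers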
This reduces the memory use of a merge by 1/3rd. The space leak was
apparently because the whole update-index input was generated strictly, not
lazily.
I wondered if the change to ByteStrings contributed to this, due to the
need to convert with L.pack here. But going back to the old code, I still
see a very similar leak, and worse performance besides due to it not using
ByteStrings.
The fix is to just hPutStr the lines repeatedly. (Note the \0 is written
separately, to avoid allocation overheads in adding it to the string.)
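A sketch of the fix (illustrative, not the exact code):

    import System.IO
    import qualified Data.ByteString.Lazy as L

    -- write each line as it is produced, and write the \0 terminator
    -- separately so no per-line string concatenation is needed
    writeUpdateIndexLines :: Handle -> [L.ByteString] -> IO ()
    writeUpdateIndexLines h = mapM_ $ \l -> do
        L.hPutStr h l
        L.hPutStr h (L.singleton 0)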
The Git.pipeWrite interface is probably just wrong for any large inputs to
git. This was the only place using it for input of any size.
There is still at least one other space leak in the merge code.