Started with a problem when running addurl on a really long url,
because the whole url is munged into the filename. Ended up doing
a fairly extensive review of places where filenames could get too long,
although it's hard to say I haven't missed any.
Backend.Url had a 128 character limit, which is fine when the filesystem's
limit is 255, but not if it's a lot shorter on some systems. So check the
pathconf() limit. Note that this could result in fromUrl creating different
keys for the same url, if run on systems with different limits. I don't
think this is likely to cause any problems. That can already happen when
using addurl --fast, or if the content of an url changes.
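A minimal sketch of the pathconf() check, assuming the unix package's
getPathVar (the function name and the fallback value here are illustrative):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, try)
import System.Posix.Files (PathVar (FileNameLimit), getPathVar)

-- Ask pathconf(2) for the maximum filename length in the directory
-- where the key's file will live, falling back to a conservative 255
-- if the query fails.
fileNameLengthLimit :: FilePath -> IO Int
fileNameLengthLimit dir = do
    r <- try (getPathVar dir FileNameLimit)
    return $ case r of
        Left (_ :: SomeException) -> 255
        Right l -> fromIntegral l
```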
Both Command.AddUrl and Backend.Url assumed that urls don't contain a
lot of multi-byte unicode, and would fail to properly truncate an url
that did.
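For illustration, a byte-aware truncation, assuming the url has been decoded
as UTF-8 text (the real code works with the filesystem encoding; this simple
quadratic version just shows the idea):

```haskell
import qualified Data.ByteString as B
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

-- Drop characters from the end until the UTF-8 encoded form fits in
-- n bytes, so a multi-byte character is never cut in half.
truncateToBytes :: Int -> T.Text -> T.Text
truncateToBytes n t
    | B.length (TE.encodeUtf8 t) <= n = t
    | T.null t = t
    | otherwise = truncateToBytes n (T.init t)
```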
A few places use a filename as the template to make a temp file.
While that's nice in that the temp file name can be easily related back to
the original filename, it could lead to `git annex add` failing to add a
filename that was at or close to the maximum length.
Note that in Command.Add.lockdown, the template is still derived from the
filename, just with enough space left to turn it into a temp file.
This is an important optimisation, because the assistant may lock down
a bunch of files all at once, and using the same template for all of them
would cause openTempFile to iterate through the same set of names,
looking for an unused temp file. I'm not very happy with the relatedTemplate
hack, but it avoids that slowdown.
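The hack boils down to something like this sketch (the constant and the
char-based take are simplifications; a byte-aware truncation would be used
in practice):

```haskell
-- Derive a temp file template from the filename, trimmed so that
-- openTempFile has room to append its unique suffix without hitting
-- the filename length limit.
relatedTemplate :: FilePath -> FilePath
relatedTemplate f
    | len > 20  = take (len - 20) f  -- leave room for the temp suffix
    | otherwise = f
  where
    len = length f
```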
Backend.WORM does not limit the filename stored in the key.
I have not tried to change that, so git annex add will fail on really long
filenames when using the WORM backend. It seems better to preserve the
invariant that a WORM key always contains the complete filename, since
the filename is the only unique material in the key, other than mtime and
size. Since nobody has complained about add failing on WORM (I think I saw
it once?), it's probably ok, or nobody but me uses it.
There may be compatibility problems if using git annex addurl --fast
or the WORM backend on a system with the 255 limit and then trying to use
that repo on a system with a smaller limit. I have not tried to deal with
those.
This commit was sponsored by Alexander Brem. Thanks!
This bug was introduced in 82a6db8fe8,
which improved handling of adding very large numbers of files by ensuring
that a minimum number of max size commits (5000 files each) were done.
I accidentally made it wait for another change to appear after such a max
size commit, even if a lot of queued changes had already accumulated.
That resulted in a stall when it got to the end. Now fixed to not wait
any longer than necessary to ensure the watcher has had time to wake back
up after the max size commit.
This commit was sponsored by Michael Linksvayer. Thanks!
This is a laziness problem. Despite the bang pattern on newfiles, the list
was not being fully evaluated before cleanup was called. Moving cleanup out
to after the list is actually used fixes this.
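The trap, in miniature (illustrative code, not from git-annex; the filename
is made up):

```haskell
{-# LANGUAGE BangPatterns #-}
import System.IO

main :: IO ()
main = do
    h <- openFile "input.txt" ReadMode
    !contents <- hGetContents h  -- the bang forces only WHNF:
                                 -- the first cons cell, not the rest
    hClose h                     -- "cleanup" truncates the lazy list
    putStrLn contents            -- prints only what was already read
```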
More evidence that I should be using ResourceT or pipes, if any was needed.
This affected both the hourly NetWatcherFallback thread and the syncing
when network connection is detected.
It was a reversion of sorts, introduced in
8861e270be, when annex-ignore was changed to
not control git syncing. I forgot to make it check annex-sync at that
point.
This was the last place in git-annex that could remove data referred to by
the git history, without being forced.
Like drop, dropunused checks remotes, and honors the global annex.numcopies
setting. (However, .gitattributes settings cannot apply to unused files.)
I only added this to the presence messages that are really intended for
presence. The ones used for tunneling git etc don't have the tag, because
that would waste bandwidth.
In direct mode, it's best whenever possible not to move direct mode files
out of the way, so I made unannex avoid touching the direct mode file at
all.
That actually turns out to be easy, because in direct mode, unlike indirect
mode, the pre-commit hook won't get confused if the unannexed file later
gets added back by git add. So there's no need to commit the unannex right
away; it can be staged for the user to commit later. This also means that
unannex in direct mode is a lot faster than in indirect mode!
Another subtle bit is the bookkeeping that is done when unannexing a direct
mode file. The inode cache needs to be removed so that when uninit runs
getKeysPresent, it doesn't see the cache and think the key is still
present and crash when it's not.
This commit is sponsored by Douglas Butts. Thanks!
This is ok to do now that the socket filename never needs to be mapped back
to a hostname.
Short hostnames will still appear in the clear, which is less obfuscated.
So this cannot possibly make ssh connection caching fail for a hostname it
used to work for.
gmail.com has some XMPP SRV records, but does not itself respond to XMPP
traffic, although it does accept connections on port 5222. So if a user
entered the wrong password, it would try all the SRVs and fall back to
trying gmail, and hang at that point.
This seems the right thing to do, not just a workaround.
I wanted to try to guard against it in Command.Add too, but it's a case of
garbage in, garbage out. Once Command.Add has been told it's dealing with a
dummy symlink, it goes and deletes it, and even though the object it
thinks it points to is not present in the annex, Command.Add is still
doing the right thing by going ahead and adding the broken symlink. So the
two fixes I was able to put in will have to do.
I thought at first this was a Windows specific problem, but it's not;
this affects checking any non-bare repository exported via http. Which is
a potentially important use case!
The actual bug was that Right False returned by the first url
short-circuited later checks. But the whole method used felt like code
I'd no longer write, and the use of undefined was particularly disgusting.
So I rewrote it.
Also added an action display.
This commit was sponsored by Eric Hanchrow. Thanks!
Cabal does not seem to have a way to check if flag A is set and then, if
flag B is set, add a dep. Instead, it makes flag B get unset if the
dep is not available.
Note that they've told me:
We'll see how it goes, but I think this could be a permanent offer for
your userbase. People using git-annex are clueful and won't be a big
support burden for us, so it's a win-win.
The icon files will be installed when running make install or cabal
install. Did not try to run update-icon-caches, since I think it's
Debian-specific, and dh_icons will take care of that for the Debian package.
Using the favicon as a 16x16 icon. At 24x24 the svg displays pretty well,
although the dotted lines are rather faint. The svg is ok at all higher
resolutions.
The standalone linux build auto-installs the desktop and autostart files
when run. I have not made it auto-install the icon file too, because
a) that would take more work to include them in the tarball and find them
b) it would need to be an install to ~/.icons/, and I don't know if that
really works!
annexLocations uses OS-native directory separators, but for an url,
it needs to use / even on Windows.
This is an ugly workaround. Could parameterize a lot of stuff in
annexLocations to fix it better. I suspect this is probably the only place
it's needed though.
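The workaround amounts to a one-liner like this sketch (function name
hypothetical):

```haskell
import System.FilePath (pathSeparators)

-- Rewrite whatever native separator annexLocations produced into the
-- / that urls require; on Unix this is the identity.
toUrlPath :: FilePath -> String
toUrlPath = map (\c -> if c `elem` pathSeparators then '/' else c)
```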
migrate wants to know the associated filename, in order to look up
the new backend. Can't do that with --all.
migrate --all --backend=newvalue could be useful to support in the future.
I spent a long time worrying about this problem with --all, that it cannot
check .gitattributes files for numcopies settings, and so would not be
entirely safe to use. The solution turns out to be simple, just don't
implement `git annex drop --all`. drop is the only command that needs to
check numcopies (move can also reduce the number of copies, but explicitly
bypasses numcopies settings).
Use cases that might need a drop --all are probably better served by using
unused and dropunused, which already work in a bare repository.
The ssh setup first runs ssh to the real hostname, to probe if a ssh key is
needed. If one is, it generates a mangled hostname that uses a key. This
mangled hostname was being used to ssh into the server to set up the key.
But if the server already had the key set up, and it was locked down, the
setup would fail. This changes it to use the real hostname when sshing in
to set up the key, which avoids the problem.
Note that it will redundantly set up the key on the ssh server. But it's
the same key; the ssh key generation code uses the key if it already
exists.
A common failure mode for direct mode has been for files to end up still
stored in indirect mode. While I hope that doesn't happen anymore, fsck
should deal with it.
This is a compromise. I would like to nice every thread except for the
webapp thread, but it's not practical to do so. That would need every
thread to run as a bound thread, which could add significant overhead.
And any forkIO would escape the nice level.
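To illustrate (a sketch, assuming as on Linux that nice applies per OS
thread; needs the threaded runtime for forkOS):

```haskell
import Control.Concurrent (forkIO, forkOS, threadDelay)
import System.Posix.Process (nice)

main :: IO ()
main = do
    -- A bound thread is pinned to one OS thread, so the nice level
    -- it sets stays with the work it runs.
    _ <- forkOS $ nice 10 >> threadDelay 1000000
    -- A green thread migrates between OS threads; the nice level it
    -- sets lands on whichever OS thread it happened to be running on.
    _ <- forkIO $ nice 10 >> threadDelay 1000000
    threadDelay 2000000
```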
Better to have a working test suite that doesn't test a few things
than no working test suite.
Most of the disabled stuff is because for some reason "git annex sync"
doesn't work when run inside the test suite. Looks like PATH problems.
The directory and rsync special remotes seem broken on Windows, or
maybe the tests are. Pretty sure the hook special remote test is broken.
Yeah, that didn't actually work. Got error messages like it couldn't read
from the control socket, so probably ssh doesn't really support that on
Windows, at least not the cygwin ssh build I'm using.
This write permission frobbing is very appropriate in indirect mode,
since annexed objects are stored as immutably as can be managed. But not
in direct mode, where files should be able to be modified at any time.
There are already sufficient guards that there's no need to prevent a file
being written to while it's being ingested, in direct mode. The inode cache
will detect (most) types of modifications, and the add will fail. Then a
re-add should be done. The assistant should get another inotify change
event, and automatically add the new version of the file.
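A minimal sketch of the distinction (the real freezeContent does more; this
just shows the mode handling):

```haskell
import Data.Bits (complement)
import System.Posix.Files

-- In indirect mode, clear all write bits on an annexed object; in
-- direct mode, leave the file's mode alone so it stays writable.
freezeContent :: Bool -> FilePath -> IO ()
freezeContent directMode f
    | directMode = return ()
    | otherwise = do
        st <- getFileStatus f
        let writeBits = ownerWriteMode `unionFileModes`
                        groupWriteMode `unionFileModes`
                        otherWriteMode
        setFileMode f (fileMode st `intersectFileModes` complement writeBits)
```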
Fuzz tests have shown that git cat-file --batch sometimes stops running.
It's not yet known why (no error message; repo seems ok). But this is
something we can deal with in the CoProcess framework, since all 3 types of
long-running git processes should be restartable if they fail.
Note that, as implemented, only IO errors are caught. So an error thrown
by the receiver, when it sees something that is not valid output from
git cat-file (etc), will not cause a restart. I don't want it to retry
if git commands change their output or are just outputting garbage.
This does mean that if the command did a partial output and crashed in the
middle, it would still not be restarted.
There is currently no guard against restarting a command repeatedly, if,
for example, it crashes repeatedly on startup.
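A minimal sketch of the restart policy (names hypothetical, not the actual
CoProcess code): only an IOException triggers a restart and retry, so an
error thrown by the receiver propagates unchanged.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (IOException, try)

-- Run the query; on an IOException, restart the coprocess once and
-- retry. Any other exception propagates without a restart.
queryWithRestart :: IO h -> (h -> IO a) -> h -> IO a
queryWithRestart restart query h = do
    r <- try (query h)
    case r of
        Right a -> return a
        Left (_ :: IOException) -> restart >>= query
```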
The checkpresent hook can return either True or False, or fail with a
message if it cannot successfully check the remote. Currently for glacier,
when --trust-glacier is not set, it always returns False. Crucially, in the
case when a file is in glacier, this is telling git-annex it's not there,
so copy re-uploads it. This is not desirable; it breaks using glacier-cli
to retrieve that file later, and it wastes money/bandwidth.
What if, instead, when the glacier inventory is missing a file, it
returned False, and when the glacier inventory has a file, unless
--trust-glacier is set, it *failed*?
The result would be:
* `git annex copy --to glacier` would only send things not listed in inventory. If a file is listed in the inventory, `copy`
would complain that `--trust-glacier` is not set, and not re-upload the file.
* `git annex drop` would only trust that glacier has a file when --trust-glacier is set. Behavior unchanged.
* `git annex move --to glacier`, when the file is not listed in inventory, would send the file, and delete it locally. Behavior unchanged.
* `git annex move --to glacier`, when the file is listed in inventory, would only trust that glacier has the file when --trust-glacier is set
* `git annex copy --from glacier` / `git annex get`, when the file is located in glacier, would trust the location log, and attempt to get the file from glacier.
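A minimal sketch of those proposed checkpresent semantics (hypothetical
code, not the remote's actual interface):

```haskell
-- Right is an answer git-annex may rely on; Left means the check
-- could not produce a trustworthy answer.
checkGlacier :: Bool -> Bool -> Either String Bool
checkGlacier trusted inInventory
    | not inInventory = Right False
    | trusted         = Right True
    | otherwise       = Left "in inventory, but --trust-glacier is not set"
```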
Ie, when there's a conflicted merge we may get foo.variant-xxxx
created in a merge. If a second merge conflict occurs on that new file,
it was not falling back to putting in the whole key (which should stop
the merge conflicts happening for good, but is ugly).
The current manual mode preferred content expression is:
"present and (((exclude=*/archive/* and exclude=archive/*) or (not (copies=archive:1 or copies=smallarchive:1))) or (not copies=semitrusted+:1))"
The old matcher misparsed this, to basically:
OR (present and (...)) (not copies=semitrusted+:1)
The paren handling and indeed the whole conversion from tokens to the
matcher was just wrong. The new way may not be the cleverest, but I think
it is correct, and you can see how it pattern matches structurally against
the expressions when parsing them.
That expression is now parsed to:
MAnd (MOp <function>)
(MOr (MOr (MAnd (MOp <function>) (MOp <function>)) (MNot (MOr (MOp <function>) (MOp <function>))))
(MNot (MOp <function>)))
Which appears correct, and behaves correctly in testing.
Also threw in a simplifier, so the final generated Matcher has less
unnecessary clutter in it. Mostly so that I could more easily read &
confirm them.
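The simplifier is essentially a recursive cleanup pass; a sketch, using the
same constructors shown above:

```haskell
data Matcher op
    = MAny
    | MAnd (Matcher op) (Matcher op)
    | MOr  (Matcher op) (Matcher op)
    | MNot (Matcher op)
    | MOp op

-- Strip out the MAny placeholders that parsing leaves behind.
simplify :: Matcher op -> Matcher op
simplify (MAnd MAny m) = simplify m
simplify (MAnd m MAny) = simplify m
simplify (MAnd a b)    = MAnd (simplify a) (simplify b)
simplify (MOr a b)     = MOr  (simplify a) (simplify b)
simplify (MNot m)      = MNot (simplify m)
simplify m             = m
```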
Also, added a simple test of the Matcher to the test suite.
There is a small chance of badly formed preferred content expressions
behaving differently than before due to this rewrite.
I noticed that when my modem hung up and redialed, my xmpp client was left
sending messages into the void. This will also handle any idle
disconnection issues.