When gpg.program is configured, it's used to get the command to run for
gpg. Useful on systems that have only a gpg2 command, or whose users
want to use gpg2 instead of the gpg command.
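For example:

    git config --global gpg.program gpg2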
It was failing at link time, some problem with terminatePID.
Re-implemented that to not use a C wrapper function, which cleared up the
problem. Removed the old EvilLinker hack, which must have been related to the
same problem.
Note that I have not tested this with older ghc's. In
f11f7520b5 I mention having tried this
approach before, and getting segfaults.. So, who knows. It seems to work
fine with ghc 7.10 at least.
This is mostly to be able to see how long a command took to run. Also exit
code may be useful.
Unfortunately, I can't put the command name in there, because it's not
available at this point, and it would be a much larger change to wrap the
ProcessHandle data type to add that. However, it's generally pretty obvious
which process exited from context.
Since _encodeFilePath generates a String that doesn't use the filesystem
encoding, when this exception is caught, we know we already have such a
String, and can just return it as-is.
This was caused by 23e9d3bb77: an Arbitrary String is not necessarily
encoding, and in a non-utf8 locale, encodeBS throws an exception on such a
string. All I could think to do is limit test data to ascii.
This shouldn't be a problem in practice, because all the Strings in
git-annex that are not generated by Arbitrary should be loaded in a way
that does apply the filesystem encoding.
Oh boy, not again. So, another place that the filesystem encoding needs to
be applied. Yay.
In passing, I changed decodeBS so if a NUL is embedded in the input, the
resulting FilePath doesn't get truncated at that NUL. This was needed to
make prop_b64_roundtrips pass, and on reviewing the callers of decodeBS, I
didn't see any where this wouldn't make sense. When a FilePath is used to
operate on the filesystem, it'll get truncated at a NUL anyway, whereas if
a String is being used for something else, it might conceivably have a NUL
in it, and we wouldn't want it to get truncated when going through
decodeBS.
(NB: There may be a speed impact from this change.)
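A minimal sketch of the idea, assuming GHC's filesystem-encoding
machinery (not git-annex's exact code):

import qualified Data.ByteString as S
import Data.ByteString.Unsafe (unsafeUseAsCStringLen)
import qualified GHC.Foreign as GHC
import GHC.IO.Encoding (getFileSystemEncoding)
import System.IO.Unsafe (unsafePerformIO)

-- Decode with the filesystem encoding. peekCStringLen is given an
-- explicit length, so an embedded NUL does not truncate the result.
decodeBS :: S.ByteString -> FilePath
decodeBS b = unsafePerformIO $ do
    enc <- getFileSystemEncoding
    unsafeUseAsCStringLen b (GHC.peekCStringLen enc)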
While cryptohash has SHA3 support, it has not been updated for the final
version of the spec. Note that cryptonite has not yet been ported to all
arches that cryptohash builds on.
This fixes a reversion introduced by relative path changes back last winter.
The root cause is simplifyPath "../foo" was incorrectly yielding "foo".
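An illustration of the fixed behavior:

    simplifyPath "../foo"  -- now yields "../foo", not "foo"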
absPathFrom seems quite horrible. Probably most things that use it should
use </> instead.
This is needed because when preferred content matches on files,
the second pass would otherwise want to drop all keys. Using a bloom filter
avoids this, and in the case of a false positive, a key will be left
undropped that preferred content would allow dropping. Chances of that
happening are a mere 1 in 1 million.
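A minimal sketch of the two-pass idea, assuming the bloomfilter
package's Data.BloomFilter.Easy interface (names here are illustrative,
not git-annex's actual code):

import qualified Data.BloomFilter.Easy as B

-- First pass: remember each key that preferred content still wants,
-- sized for a 1 in 1 million false positive rate.
stillWanted :: [String] -> B.Bloom String
stillWanted = B.easyList 0.000001

-- Second pass: only drop keys not (probably) in the filter. A false
-- positive merely leaves a key undropped.
canDrop :: B.Bloom String -> String -> Bool
canDrop bloom key = not (key `B.elem` bloom)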
I want this as fast as possible, so it can be added to code paths without
slowing them down.
Avoiding the set lookup and relying on laziness drops runtime from
14.37 ns to 11.03 ns, according to this criterion benchmark:
import Criterion.Main
import qualified Types.Difference as New
import qualified Types.DifferenceOld as Old

main :: IO ()
main = defaultMain
    [ bgroup "hasDifference"
        [ bench "new" $ whnf (New.hasDifference New.OneLevelObjectHash) new
        , bench "old" $ whnf (Old.hasDifference Old.OneLevelObjectHash) old
        ]
    ]
  where
    s = "fromList [ObjectHashLower, OneLevelObjectHash, OneLevelBranchHash]"
    new = New.readDifferences s
    old = Old.readDifferences s
A little bit of added boilerplate, but I suppose it's worth it to not
need to worry about set lookup overhead. Note that adding more differences
would slow down the old implementation; the new implementation will run
at the same speed.
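A sketch of the shape of the change, with simplified types (the real
Difference type has more constructors): the old code parsed into a Set
and did a lookup per query; the new code parses once into a record of
lazily evaluated Bools, so each query is just a field selector.

data Difference
    = ObjectHashLower
    | OneLevelObjectHash
    | OneLevelBranchHash

data Differences = Differences
    { objectHashLower :: Bool
    , oneLevelObjectHash :: Bool
    , oneLevelBranchHash :: Bool
    }

-- one equation of boilerplate per Difference, but no Set lookup
hasDifference :: Difference -> Differences -> Bool
hasDifference ObjectHashLower = objectHashLower
hasDifference OneLevelObjectHash = oneLevelObjectHash
hasDifference OneLevelBranchHash = oneLevelBranchHash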
This reverts commit ef0e3ac22e.
Sebastian thinks best to revert this:
It seems to me the reason I needed to look at activatable sockets
might actually be a networkd bug, and I was in error about patch 0001.
On my machines (without DHCP), networkd quits after configuring the
links. I thought this had to do with network activation, but that was
probably mistaken. This was obscured by my testing the change by doing
systemctl stop/start on networkd; now that I actually unplugged the
network cable, I noticed no DBus messages are triggered by this on
this machine. Your test case might have had a similar problem
(networkd quitting on idle). Might be related to [1].
On another machine (with DHCP) networkd remains active all the time,
and patch 0002 works there. You might want to revert 0001, though:
Suppose someone's running no manager at all, so that polling would be
required. Because networkd is still listed as activatable, we would
refrain from polling, by mistake: networkd doesn't seem to actually go
active if we listen on its bus, and it's listed as activatable even
when it's not configured. Connectivity-related messages will come in
when stopping/starting the service, but not when unplugging the cable.
This removes a bit of complexity, should make things faster
(it avoids tokenizing the Params string), and probably causes less
garbage collection.
In a few places, it was useful to use Params to avoid needing a list,
but that is easily avoided.
Problems noticed while doing this conversion:
* Some uses of Params "oneword", which added entirely unnecessary
overhead.
* A few places that built up a list of parameters with ++
and then used Params to split it!
Test suite passes.
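An illustration of the conversion, with CommandParam modeled on
git-annex's Utility.SafeCommand type:

data CommandParam = Params String | Param String

-- before: the string is split on whitespace at run time
before :: [CommandParam]
before = [Params "commit --quiet --allow-empty"]

-- after: each parameter is given literally, with nothing to tokenize
after :: [CommandParam]
after = [Param "commit", Param "--quiet", Param "--allow-empty"]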
The content file may not be owned by the user running git-annex, in which
case, setting the owner write bit was not enough to let lockContent
act on the file. However, with some core.sharedRepository configs, the file
should be writable by the user's group. So, the thing to do is to call
thawContent on it.
Also cleaned up the code, avoiding creating a lock file if we're going to
open it for creation later anyway.
And, if there's an exception while preparing to lock the file, but not at
the point of actually taking the lock, throw an exception, instead of
silently not locking and pretending to succeed.
And, on Windows, always use a lock file, even if the repo somehow got into
indirect mode (maybe with cygwin git..)
The one exception is in Utility.Daemon. As long as a process only
daemonizes once, which seems reasonable, and as long as it avoids calling
checkDaemon once it's already running as a daemon, the fcntl locking
gotchas won't be a problem there.
Annex.LockFile has its own separate lock pool layer, which has been
renamed to LockCache. This is a persistent cache of locks that persist
until closed.
This is not quite done; lockContent still needs to be converted.
This is certainly a cabal bug, for not passing the build options in the
cabal file when building Setup.hs.
And, why oh why did ghc enable this warning by default? So unhappy with
this choice.
That failed on OSX. The temp dir was
/var/folders/fb/pnwjj52n7fg0r9mnvpsfll180000gr/T/downloadurl
and the relative path
../../../../../../Volumes/Visitors/joeyh/git-annex/r/.git/...
Didn't work. I have no clue why; how did OSX manage to break this?
But, the relative path is longer most of the time anyway, so let's
just use the absolute path.
    Ambiguous occurrence `bracket'
    It could refer to either `Control.Exception.bracket',
        imported from `Control.Exception' at Utility/FileMode.hs:14:27-33
        (and originally defined in `Control.Exception.Base')
    or `Utility.Exception.bracket',
        imported from `Utility.Exception' at Utility/FileMode.hs:22:1-24
        (and originally defined in `Control.Monad.Catch')
Since we started using this for git repos, when a remote was on another
drive, it resulted in a bogus relative path to it being used by git-annex,
which didn't work.
I don't quite understand the cause of the deadlock. It only occurred
when git-annex-shell transferinfo was being spawned over ssh to feed
download transfer progress back. And if I removed this line from
feedprogressback, the deadlock didn't occur:
bytes <- readSV v
The problem was not a leaked FD, as far as I could see. So what was it?
I don't know.
Anyway, this is a nice clean implementation, that avoids the deadlock.
Just fork off the async threads to handle filtering the stdout and stderr,
and let them clean up their handles whenever they decide to exit.
I've verified that the handles do get promptly closed, although a little
later than I would expect. Presumably that "little later" is what
was making waiting on the threads deadlock.
Despite the late exit, the last line of stdout and stderr appears where
I'd want it to, so I guess this is ok..
The stderr reader blocks waiting for all of stderr, and so blocks the
process from ever exiting.
I tried several ways to get around this, but no success yet. For now,
disable the stderr reader entirely.
It sounds worse than it is. ;)
Some external special remotes may run commands that display progress on
stderr. If git-annex is run with --quiet, this should filter out such
displays while letting the errors through.
Came up with a generic way to filter out progress messages while keeping
errors, for commands that use stderr for both.
--json mode will disable command outputs too.
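A minimal sketch of the generic idea (not git-annex's actual code;
splitOn is from the split package): a progress display redraws in place
using \r, so within each physical line only the text after the last \r
is still visible, and only that needs to be kept.

import Data.List.Split (splitOn)

filterProgress :: String -> String
filterProgress = unlines . filter (not . null) . map (last . splitOn "\r") . lines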
The new approach is to do it the expensive way for the first 100 paths
on the command line, but then assume the user doesn't care about order too
much and fall back to the cheap way that does not preserve order.
In this situation, curl -o exits successfully without creating the output
file.
There was already a workaround for curl file:/// but I did not realize this
also affected regular url downloads.
To fix it, pre-create the destination file before starting curl.
Since we cannot always know the size of an url before trying to download
it, let's always do this.
Note that since curl is told -C -, we have to consider if this
makes curl try to do a ranged download, which might fail on some servers
where a regular download would have succeeded. My testing indicates
this isn't a problem; since the file is empty, curl seems to not try to
do a ranged download.
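A sketch of the workaround (preCreateDest is a hypothetical name, not
git-annex's code): opening the destination in AppendMode creates the
file when it's missing, without truncating any partially downloaded
content that curl -C - would resume.

import System.IO

preCreateDest :: FilePath -> IO ()
preCreateDest dest = withFile dest AppendMode (\_ -> return ())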
Original report: https://github.com/datalad/datalad/issues/79
Curl bug report: https://github.com/bagder/curl/issues/183
The fix is to stop using w82s, which does not properly reconstitute unicode
strings. Instead, use a utf8 bytestring to get the [Word8] to base64. This
passes unicode through perfectly, including any invalid filesystem encoded
characters.
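A sketch, assuming base64 is done with the dataenc package's
[Word8]-based Codec.Binary.Base64 (as the text above suggests):

import qualified Codec.Binary.Base64 as B64
import qualified Data.ByteString as B
import qualified Data.ByteString.UTF8 as UTF8

-- the utf8 bytestring supplies the [Word8], so multi-byte characters
-- survive the round-trip intact
toB64 :: String -> String
toB64 = B64.encode . B.unpack . UTF8.fromString

fromB64 :: String -> String
fromB64 = maybe bad (UTF8.toString . B.pack) . B64.decode
  where
    bad = error "bad base64 encoded data"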
Note that toB64 / fromB64 are also used for creds and cipher
embedding. It would be unfortunate if this change broke those uses.
For cipher embedding, note that ciphers can contain arbitrary bytes (should
really be using ByteString.Char8 there). Testing indicated it's not safe to
use the new fromB64 there; I think that characters were incorrectly
combined.
For credpair embedding, the username or password could contain unicode.
Before, that unicode would fail to round-trip through the b64.
So, I guess this is not going to break any embedded creds that worked
before.
This bug may have affected some creds before, and if so,
this change will not fix old ones, but should fix new ones at least.
hGetSomeString reads one byte at a time, so unicode bytes are not composed.
The problem came when outputting that to the console with hPut; that
applied the handle's encoding, and so produced mojibake.
Instead, use ByteStrings, and only convert it to a string for parsing, not
for display.
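A sketch of the ByteString approach (a simplified stand-in for the real
code):

import qualified Data.ByteString as B
import System.IO

-- Copy raw bytes from a handle to stdout; no Handle encoding is
-- applied, so multi-byte characters pass through unmangled.
relayOutput :: Handle -> IO ()
relayOutput h = do
    b <- B.hGetSome h 1024
    if B.null b
        then return ()
        else B.hPut stdout b >> relayOutput h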
Note that there are a couple of other things that use hGetSomeString,
which I've left as-is for now.
This reverts commit a7f05c007b.
Consider: relPathDirToFile (absPathFrom "/tmp/repo/xxx" "y/bar") "/tmp/repo/.git/annex/objects/xxx"
This needs to always yield "../../../.git/annex/objects/xxx" but on
Windows, it is "..\\..\\/tmp/repo/.git/annex/objects/xxx"
This is necessary for interop between inode caches created on unix and
windows, which is more important than supporting inode caches for large
keys with the wrong size, since those are broken anyway.
There should be no slowdown from this change, except on Windows.
Avoid using fileSize which maxes out at just 2 gb on Windows.
Instead, use hFileSize, which doesn't have a bounded size.
Fixes support for files > 2 gb on Windows.
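A minimal sketch of the replacement:

import System.IO

-- hFileSize returns an Integer, so it is not subject to the 2 gb cap
-- that fileSize's FileOffset has on Windows
getFileSize :: FilePath -> IO Integer
getFileSize f = withFile f ReadMode hFileSize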
Note that the InodeCache code only needs to compare a file size,
so it doesn't matter if the file size wraps. So it has been
left as-is. This was necessary both to avoid invalidating existing inode
caches, and because the code passed FileStatus around and would have become
more expensive if it called getFileSize.
This commit was sponsored by Christian Dietrich.
Reverts 965e106f24
Unfortunately, this caused breakage on Windows, and possibly elsewhere,
because parentDir and takeDirectory do not behave the same when there is a
trailing directory separator.
parentDir is less safe than takeDirectory, especially when working
with relative FilePaths. It's really only useful in loops that
want to terminate at /.
This commit was sponsored by Audric SCHILTKNECHT.
This allows the git repository to be moved while git-annex is running in
it, with fewer problems.
On Windows, this avoids some of the problems with the absurdly small
MAX_PATH of 260 bytes. In particular, git-annex repositories should
work in deeper/longer directory structures than before. See
http://git-annex.branchable.com/bugs/__34__git-annex:_direct:_1_failed__34___on_Windows/
There are several possible ways this change could break git-annex:
1. If it changes its working directory while it's running, that would
be Bad News. Good news everyone! git-annex never does so. It would also
break thread safety, so all such things were stomped out long ago.
2. parentDir "." -> "", which is not a valid path. I had to fix one
instance of this, and I should probably wipe all calls to parentDir out
of the git-annex code base; it was never a good idea.
3. Things like relPathDirToFile require absolute input paths,
and code assumes that the git repo path is absolute, passing it in
as-is. In the case of relPathDirToFile, I converted it to not make
this assumption.
Currently, the test suite has 16 failures.
Getting rid of this build warning:

    warning: 'statfs64' is deprecated: first deprecated in OS X 10.6
    [-Wdeprecated-declarations]
10.6 is much older than the oldest git-annex OSX port, so won't break
anything.
More aggressive rsync params fixup for windows. A param may contain a
url or a file path, so check whether it looks like a local file path
and, if so, fix it up.
On windows only, rsyncUrlIsPath will treat c:foo as a path, rather than as
a rsyncurl starting with a host "c".
This should be an essentially no-op change for hGetContentsMetered, since it
always gets the entire contents. So the only difference is that each chunk
of the lazy bytestring will always be the full chunk size. So, I'm pretty
sure this is safe. Also, the only current users of hGetContentsMetered are
reading files, so the stream won't block for long in the middle.
The improvement is that hGetUntilMetered will always get some multiple of
the defaultChunkSize. This will allow the S3 multipart code to pick a fixed
size and know that hGetUntilMetered will really get that size.
(cherry picked from commit bd09046291)
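A simplified model of the semantics described above (metering and error
handling omitted; this is not git-annex's code, and the predicate
semantics are assumed):

import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import System.IO

defaultChunkSize :: Int
defaultChunkSize = 32768

-- Keep reading full defaultChunkSize chunks while the predicate
-- accepts the byte count so far, so the result is a predictable
-- multiple of the chunk size (short only at EOF).
hGetUntil :: Handle -> (Integer -> Bool) -> IO L.ByteString
hGetUntil h keepgoing = L.fromChunks <$> go 0
  where
    go sofar
        | keepgoing sofar = do
            c <- S.hGet h defaultChunkSize
            if S.null c
                then return []
                else (c :) <$> go (sofar + fromIntegral (S.length c))
        | otherwise = return []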
Didn't know that this library existed!
This includes making git-annex not re-exec itself on start on windows, and
making the test suite on Windows run tests without forking.