This interacts with it using stdio, which is surprisingly hard.
sendFile does not currently work, due to
https://github.com/warner/magic-wormhole/issues/108
Parsing the output to find the magic code is done as robustly as
possible, and should continue to work unless wormhole radically changes
the format of its codes. Presumably it will never output something that
looks like a wormhole code before the actual wormhole code; that would
also break this. It would be better if there was a way to make
wormhole not mix the code with other output, as requested in
https://github.com/warner/magic-wormhole/issues/104
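To make that concrete, the check is roughly along these lines (a sketch, not
the actual code; it assumes a code looks like 7-crossover-clockwork, a number
followed by at least two dash-separated words):
  import Data.Char (isAlpha, isDigit)

  -- Sketch: does a word of wormhole's output look like a wormhole code?
  looksLikeCode :: String -> Bool
  looksLikeCode s = case splitDash s of
      (n:ws) -> not (null n) && all isDigit n
          && length ws >= 2 && all (all isAlpha) ws
      [] -> False
    where
      splitDash = foldr sep [""]
      sep '-' acc = "" : acc
      sep c (w:ws) = (c:w) : ws
      sep c [] = [[c]]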
Only exchange of files/directories is supported. To exchange messages,
https://github.com/warner/magic-wormhole/issues/99 would need to be resolved.
However, I don't need message exchange.
The attacker could just send a very large amount of data, with no \n, and it
would all be buffered in memory until the kernel killed git-annex, or perhaps
OOM killed some other, more valuable process.
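The obvious guard is to bound how much of a line gets buffered; a minimal
sketch of the idea (not the actual fix):
  import System.IO (Handle, hGetChar, hIsEOF)

  -- Sketch: read a line, but refuse to buffer more than a fixed number of
  -- characters, so a peer cannot tie up memory by never sending a \n.
  -- (Reads a character at a time for simplicity.)
  hGetLineBounded :: Int -> Handle -> IO (Maybe String)
  hGetLineBounded limit h = go limit []
    where
      go 0 _ = return Nothing  -- line too long; treat it as a protocol error
      go n acc = do
          eof <- hIsEOF h
          if eof
              then return Nothing
              else do
                  c <- hGetChar h
                  if c == '\n'
                      then return (Just (reverse acc))
                      else go (n - 1) (c : acc)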
This is a low impact security hole, only affecting communication between
local git-annex and git-annex-shell on the remote system. (With either
able to be the attacker). Only those with the right ssh key can do it. And,
there are probably lots of ways to construct git repositories that make git
use a lot of memory in various ways, which would have similar impact as
this attack.
The fix in P2P/IO.hs would have been higher impact, if it had made it to a
released version, since it would have allowed DOSing the tor hidden
service without needing to authenticate.
(The LockContent and NotifyChanges instances may not be really
exploitable; since the line is read and ignored, it probably gets read
lazily and does not end up staying buffered in memory.)
Display progress meter on send and receive from remote.
Added a new hGetMetered that can read an exact number of bytes (or
less), updating a meter as it goes.
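Roughly the shape of the idea (the types here are illustrative, not the
actual ones):
  import qualified Data.ByteString as B
  import System.IO (Handle)

  -- Illustrative: called with the number of bytes read so far.
  type MeterUpdate = Integer -> IO ()

  -- Read up to n bytes from the handle in chunks, updating the meter as it
  -- goes; returns fewer than n bytes if EOF is reached first.
  hGetMeteredSketch :: Handle -> Integer -> MeterUpdate -> IO B.ByteString
  hGetMeteredSketch h n update = go 0 []
    where
      chunksize = 32768 :: Integer
      go sofar acc
          | sofar >= n = done acc
          | otherwise = do
              b <- B.hGetSome h (fromIntegral (min (n - sofar) chunksize))
              if B.null b
                  then done acc  -- EOF; the "(or less)" case
                  else do
                      let sofar' = sofar + fromIntegral (B.length b)
                      update sofar'
                      go sofar' (b : acc)
      done = return . B.concat . reverse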
This commit was sponsored by Andreas on Patreon.
On Debian, apparmor prevents tor from reading from most locations. And,
it silently fails if it is prevented from reading the hidden service
socket. I filed #846275 about this; violating the FHS is the least bad of a
bad set of choices until that bug is fixed.
Still a couple bugs:
* Closing the connection to the server leaves git upload-pack /
receive-pack running, which could be used to DOS.
* Sometimes the data is transferred, but it fails at the end, sometimes
with:
  git-remote-tor-annex: <socket: 10>: commitBuffer: resource vanished (Broken pipe)
Must be a race condition around shutdown.
Almost working, but there's a bug in the relaying.
Also, made tor hidden service setup pick a random port, to make it harder
to port scan.
This commit was sponsored by Boyd Stephen Smith Jr. on Patreon.
A bit tricky since Proto doesn't support threads. Rather than adding
threading support to it, ended up using a callback that waits for both
data on a Handle, and incoming messages at the same time.
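The shape of that callback is roughly this (illustrative only, not the actual
code):
  import Control.Concurrent (forkIO)
  import Control.Concurrent.STM
  import Control.Monad (unless)
  import qualified Data.ByteString as B
  import System.IO (Handle)

  data Next msg = HandleData B.ByteString | ProtocolMessage msg | HandleClosed

  -- A dedicated thread reads the Handle into a TChan; the callback then
  -- waits on both channels in one STM transaction, so it gets whichever
  -- arrives first, without Proto itself needing to know about threads.
  mkWaiter :: Handle -> TChan msg -> IO (IO (Next msg))
  mkWaiter h msgchan = do
      datachan <- newTChanIO
      _ <- forkIO $
          let loop = do
                  b <- B.hGetSome h 32768
                  atomically $ writeTChan datachan b
                  unless (B.null b) loop
          in loop
      return $ atomically $
          (mkdata <$> readTChan datachan)
              `orElse` (ProtocolMessage <$> readTChan msgchan)
    where
      mkdata b
          | B.null b = HandleClosed
          | otherwise = HandleData b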
This commit was sponsored by Denis Dzyubenko on Patreon.
For use with tor hidden services, and perhaps other transports later.
Based on Utility.SimpleProtocol, it's a line-based protocol,
interspersed with transfers of bytestrings of a specified size.
Implementation of the local and remote sides of the protocol is done
using a free monad. This lets monadic code be included here, without
tying it to any particular way to get bytes peer-to-peer.
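A toy version of the shape of this (the constructors are illustrative, not
the real protocol):
  {-# LANGUAGE DeriveFunctor #-}
  import Control.Monad.Free (Free, iterM, liftF)
  import qualified Data.ByteString.Lazy as L

  -- Abstract protocol steps; Free turns the functor into a monad, so
  -- protocol logic can be written without picking a transport.
  data ProtoF next
      = SendMessage String next
      | ReceiveMessage (String -> next)
      | SendBytes Integer L.ByteString next
      | ReceiveBytes Integer (L.ByteString -> next)
      deriving (Functor)

  type Proto = Free ProtoF

  sendMessage :: String -> Proto ()
  sendMessage m = liftF (SendMessage m ())

  receiveMessage :: Proto String
  receiveMessage = liftF (ReceiveMessage id)

  -- A runner supplies the transport, eg interpreting each step over a pair
  -- of Handles, or a connection to a tor hidden service.
  runProto :: Monad m => (ProtoF (m a) -> m a) -> Proto a -> m a
  runProto = iterM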
This adds a dependency on the haskell package "free", although that
was probably pulled in transitively from other dependencies already.
This commit was sponsored by Jeff Goeke-Smith on Patreon.
ghc 8 added backtraces on uncaught errors. This is great, but git-annex was
using error in many places for an error message targeted at the user, in
some known problem case. A backtrace only confuses such a message, so omit it.
Notably, commands like git annex drop that failed due to eg, numcopies,
used to use error, so had a backtrace.
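One way to do that on ghc 8 (a sketch; the function name here is made up) is
base's errorWithoutStackTrace:
  -- errorWithoutStackTrace (in Prelude since base 4.9 / ghc 8.0) throws the
  -- same ErrorCall as error, but without attaching a callstack, so the user
  -- only sees the message itself.
  userFacingError :: String -> a
  userFacingError = errorWithoutStackTrace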
This commit was sponsored by Ethan Aubin.
This avoids needing to bind to the right port before something else
does.
The socket is in /var/run/user/$uid/ which ought to be writable by only
that uid. At least it is on linux systems using systemd.
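Roughly what that looks like (the socket file name here is made up):
  import Network.Socket
  import System.Posix.User (getEffectiveUserID)

  -- Bind a unix domain socket under the per-user runtime directory, which
  -- on a linux system using systemd is only writable by that user.
  bindUserRuntimeSocket :: IO Socket
  bindUserRuntimeSocket = do
      uid <- getEffectiveUserID
      let path = "/var/run/user/" ++ show uid ++ "/annex-p2p-socket"
      sock <- socket AF_UNIX Stream defaultProtocol
      bind sock (SockAddrUnix path)
      listen sock 2
      return sock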
For Windows, may need to revisit this and use ports or something.
The first version of tor to support sockets for hidden services
was 0.2.6.3. That is not in Debian stable, but is available in
backports.
This commit was sponsored by andrea rota.
Tor unfortunately does not come out of the box configured to let hidden
services register themselves on the fly via the ControlPort.
And, changing the config to enable the ControlPort and a particular type
of auth for it may break something already using the ControlPort, or
lessen the security of the system.
So, this leaves only one option to us: Add a hidden service to the
torrc. git-annex enable-tor does so, and picks an unused high port for
tor to listen on for connections to the hidden service.
It's up to the caller to somehow pick a local port to listen on
that won't be used by something else. That may be difficult to do..
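Roughly the shape of what gets added to the torrc (a sketch; the directory
name is made up, and the real code takes care that the chosen port is
actually unused):
  import System.Random (randomRIO)

  -- Given the local port the caller will listen on, append a hidden service
  -- stanza to the torrc, with a randomly picked high port for tor to listen
  -- on for connections to the hidden service.
  addHiddenService :: Int -> IO ()
  addHiddenService localport = do
      hsport <- randomRIO (1025, 65534 :: Int)
      appendFile "/etc/tor/torrc" $ unlines
          [ ""
          , "HiddenServiceDir /var/lib/tor/annex-example"
          , "HiddenServicePort " ++ show hsport
              ++ " 127.0.0.1:" ++ show localport
          ]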
This commit was sponsored by Jochen Bartl on Patreon.
Yesod didn't use to do auth checks for that, but this may have changed.
I don't have a way to reproduce the reported problem yet, but this change
certainly won't hurt anything.
This commit was sponsored by Thom May on Patreon.
Restarting a crashing git process could result in filename encoding issues
when not in a unicode locale, as the restarted process's handles were not
read in raw mode.
Since rawMode is always used when starting a coprocess, didn't bother
to parameterise it and just always enable it for simplicity.
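A guess at the mechanics, for illustration (not a quote of the code): raw
mode amounts to putting the coprocess handles into the filesystem encoding,
so filenames containing arbitrary bytes round-trip regardless of locale:
  import GHC.IO.Encoding (getFileSystemEncoding)
  import System.IO (Handle, hSetEncoding)

  rawModeHandles :: [Handle] -> IO ()
  rawModeHandles hs = do
      enc <- getFileSystemEncoding
      mapM_ (`hSetEncoding` enc) hs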
This commit was sponsored by Jake Vosloo on Patreon.
gpg-agent started deleting its socket file on shutdown, and this tickled an
ugly behavior in removeDirectoryRecursive,
https://github.com/haskell/directory/issues/60
Running removeDirectoryRecursive again on exception avoids the problem.
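That workaround amounts to something like this (sketch):
  import Control.Exception (IOException, catch)
  import System.Directory (removeDirectoryRecursive)

  -- If files vanish out from under the first removal (as when gpg-agent
  -- deletes its socket on shutdown), a second pass over what remains works.
  removeDirectoryRecursive' :: FilePath -> IO ()
  removeDirectoryRecursive' dir =
      removeDirectoryRecursive dir `catch` retry
    where
      retry :: IOException -> IO ()
      retry _ = removeDirectoryRecursive dir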
gpg 2.1.15 (or so) seems to have added some new fields to the --with-colons
--list-secret-keys output. These include "fpr" and "grp", and come before
the "uid" line. So, the parser was giving up before it saw the name. Fix by
continuing to look for the uid line until the next "sec" line.
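Illustratively, the parse now works like this (a sketch, not the actual
parser):
  import Data.List (isPrefixOf)

  -- After a "sec" line, keep scanning (past "fpr", "grp", etc) until a
  -- "uid" line supplies the name, or another "sec" line starts the next key.
  uidAfterSec :: [String] -> Maybe String
  uidAfterSec ls = case dropWhile (not . istype "sec") ls of
      (_sec:rest) -> scan rest
      [] -> Nothing
    where
      scan (l:rest)
          | istype "uid" l = Just (uidfield l)
          | istype "sec" l = Nothing
          | otherwise = scan rest
      scan [] = Nothing
      istype t l = (t ++ ":") `isPrefixOf` l
      -- field 10 of a colon-separated record is the user id
      uidfield l = case splitc l of
          (_:_:_:_:_:_:_:_:_:v:_) -> v
          _ -> ""
      splitc = foldr sep [""]
      sep ':' acc = "" : acc
      sep c (w:ws) = (c:w) : ws
      sep c [] = [[c]]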
This commit was sponsored by Ole-Morten Duesund on Patreon.
This gets rid of quite a lot of ugly hacks around json generation.
I doubt that any real-world json parsers can parse incomplete objects, so
while it's not as nice to need to wait for the complete object, especially
for commands like `git annex info` that take a while, it doesn't seem worth
the added complexity.
This also causes the order of fields within the json objects to be
reordered. Since any real json parser shouldn't care, the only possible
problem would be with ad-hoc parsers of the old json output.
Keeping Text.JSON use for now, because it seems a better fit for most of
the commands, which don't use very structured JSON objects, but just output
whatever fields suit them. But this lets Aeson be used when a more
structured data type is available to serialize to JSON.
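For example (a made-up type, just to show the kind of case where Aeson fits):
  {-# LANGUAGE DeriveGeneric #-}
  import Data.Aeson (ToJSON, encode)
  import qualified Data.ByteString.Lazy as L
  import GHC.Generics (Generic)

  -- A made-up record, to show the kind of structured type where Aeson is a
  -- better fit than assembling json fields by hand.
  data TransferInfo = TransferInfo
      { transferKey :: String
      , transferSize :: Integer
      , transferDone :: Bool
      } deriving (Show, Generic)

  instance ToJSON TransferInfo

  -- One complete json object per value, no incremental assembly needed.
  transferInfoJson :: TransferInfo -> L.ByteString
  transferInfoJson = encode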
Mostly the username is only used for the git committer or other display
purposes, and we can just fall back to a dummy value in these cases.
The only remaining place where an error is thrown is when starting local
pairing, which needs the username to be known.
Sadly my bug report about this is not going to get fixed it seems, so
I have to drag around a whole added module file just to deal with it.
https://github.com/haskell/directory/issues/52
It started exporting an isSymbolicLink which supports windows. But,
git-annex does not use symlinks on windows yet, and this conflicts with the
function by the same name from unix-compat, so hide it.
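With a new enough directory installed, that comes down to (older versions
would need this conditionalised):
  import System.Directory hiding (isSymbolicLink)
  import System.PosixCompat.Files (isSymbolicLink)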
This fixes behavior in this situation:
  l1 <- lockShared Nothing "lck"
  l2 <- lockShared Nothing "lck"
  dropLock l1
  dropLock l2
Before, the lock was dropped upon the second dropLock call, but the fd
remained open, and would never be closed while the program was running.
Fixed by a rather round-about method, but it should work well enough.
It would have been simpler to open the shared lock once, and not open
it again in the second call to lockShared. But, that's difficult to do
atomically.
This also affects Windows and PID locks, not just posix locks.
In the case of pid locks, multiple calls to waitLock within the same
process are allowed because the side lock is locked using a posix lock,
and so multiple exclusive locks can be taken in the same process. So,
this change fixes a similar problem with pid locks.
  l1 <- waitLock (Seconds 1) "lck"
  l2 <- waitLock (Seconds 1) "lck"
  dropLock l1
  dropLock l2
Here the l2 side lock fd remained open but not locked,
although the pid lock file was removed. After this change, the second
dropLock will close both fds to the side lock, and delete the pidlock.
According to https://github.com/redneb/disk-free-space/issues/3 ,
disk-free-space should be at least as portable as my homegrown code was.
One change I noticed is that getDiskSize was not implemented for windows
in the old code, and should work now.
This lets readonly repos be used. If a repo is readonly, we can ignore the
keys database, because nothing that we can do will change the state of the
repo anyway.
* Removed the webapp-secure build flag, rolling it into the webapp build
flag.
* Removed the quvi and tahoe build flags, which only adds aeson to
the core dependencies.
* Removed the feed build flag, which only adds feed to the core
dependencies.
Build flags have a cost in code complexity, and also make Setup configure
work harder to find a usable set of build flags when some dependencies are
missing.
This reverts commit d14770ca9c.
That changed the type of error from an IOError to something else, so broke
stuff that was catching IOErrors.
So back to a UserError, but be explicit this time that's what it's
throwing.
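Ie, something along these lines (a sketch; the function name is made up):
  import Control.Exception (throwIO)
  import System.IO.Error (userError)

  -- Throw the IOError explicitly, rather than via fail, so it is clear that
  -- a UserError is what callers will be catching.
  explicitUserError :: String -> IO a
  explicitUserError = throwIO . userError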
Using fail here causes a "user error" exception to be thrown, whose wording
implies the user is at fault, which is incorrect.
Also audited for other uses of fail in git-annex; the others are in monadic
contexts where fail may not throw an exception, and involve user input, so
kept them as-is.
The repo path is typically relative, not absolute, so
providing it to absPathFrom doesn't yield an absolute path.
This is not a bug, just unclear documentation.
Indeed, there seems to be no reason to simplifyPath here, which absPathFrom
does, so instead just combine the repo path and the TopFilePath.
Also, removed an export of the TopFilePath constructor; asTopFilePath
is provided to construct one as-is.
Several tricky parts:
* When the conflict is just between the same key being locked and unlocked,
the unlocked version wins, and the file is not renamed in this case.
* Need to update associated file map when conflict resolution renames
an unlocked file.
* git merge runs the smudge filter on the conflicting file, and actually
overwrites the file with the same content it had before, and so
invalidates its inode cache. This makes it difficult to know when it's
safe to remove such files as conflict cruft, without going so far as to
compare their entire contents.
Dealt with this by preventing the smudge filter from populating the file
when a merge is run. However, that also prevents the smudge filter being
run for non-conflicting files, so eg moving a file won't put its new
content into place.
* Ideally, if a merge or a merge conflict resolution renames an unlocked
file, the file in the work tree can just be moved, rather than copying
the content to a new worktree file.
Merge conflict resolution attempts to do this, but due to git merge's
behavior of running smudge filters, what actually seems to happen is that the
old worktree file containing the content is deleted and rewritten as a
pointer file, so it doesn't get reused.
So, this is probably not as efficient as it optimally could be.
If that becomes a problem, could look into running the merge in a separate
worktree and updating the real worktree more efficiently, similarly to the
direct mode merge. However, the direct mode merge had a lot of bugs, and
I'd rather not use that more error-prone method unless really needed.
Writes are optimised by queueing up multiple writes when possible.
The queue is flushed after the Annex monad action finishes. That makes it
happen on program termination, and also whenever a nested Annex monad action
finishes.
Reads are optimised by checking once (per AnnexState) if the database
exists. If the database doesn't exist yet, all reads return mempty.
Reads also cause queued writes to be flushed, so reads will always be
consistent with writes (as long as they're made inside the same Annex monad).
A future optimisation path would be to determine when that's not necessary,
which is probably most of the time, and avoid flushing unnecessarily.
Design notes for this commit:
- separate reads from writes
- reuse a handle which is left open until program
exit or until the MVar goes out of scope (and autoclosed then)
- writes are queued
- queue is flushed periodically
- immediate queue flush before any read
- auto-flush queue when database handle is garbage collected
- flush queue on exit from Annex monad
(Note that this may happen repeatedly for a single database connection;
or a connection may be reused for multiple Annex monad actions,
possibly even concurrent ones.)
- if database does not exist (or is empty) the handle
is not opened by reads; reads instead return empty results
- writes open the handle if it was not open previously
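A toy model of that design (not the real database code):
  import Control.Concurrent.MVar

  -- Illustrative key/value pair standing in for a real database row.
  type Write = (String, String)

  data DbHandle = DbHandle
      { dbQueue :: MVar [Write]       -- queued writes, newest first
      , dbCommit :: [Write] -> IO ()  -- whatever actually writes to sqlite
      }

  queueWrite :: DbHandle -> Write -> IO ()
  queueWrite h w = modifyMVar_ (dbQueue h) (return . (w :))

  flushQueue :: DbHandle -> IO ()
  flushQueue h = do
      queued <- modifyMVar (dbQueue h) (\ws -> return ([], reverse ws))
      dbCommit h queued

  -- Reads flush the queue first, so they stay consistent with writes made
  -- in the same Annex monad; the same flush also runs when the Annex monad
  -- action finishes.
  readDb :: DbHandle -> IO a -> IO a
  readDb h a = flushQueue h >> a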
Caused by AMP.. Since I've finally upgraded my dev laptop to 7.10,
I may start missing imports that are not needed with it but are with older
versions..