Unfortunately, ReceiveMessage didn't handle unknown messages the way it
was documented to; a client sending VERSION would cause the server to
return an ERROR and hang up. Fixed that, but old releases of git-annex
use the P2P protocol for tor and will still have that behavior.
So, version is not negotiated for Remote.P2P connections, only for
Remote.Git connections, which will support VERSION from their first
release. There will need to be a later flag day to change Remote.P2P;
left a commented out line that is the only thing that will need to be
changed then.
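As a rough illustration of the shape of that negotiation (not the actual
git-annex code, which lives in its P2P protocol modules; the handles and
the fallback logic here are only a sketch): offer the highest version we
speak, and treat anything other than a VERSION reply, such as the ERROR an
old peer sends, as version 0.

    import System.IO
    import Text.Read (readMaybe)

    -- Sketch only: send our best version, parse the reply, fall back to 0.
    negotiateVersion :: Handle -> Handle -> IO Int
    negotiateVersion hin hout = do
        hPutStrLn hout "VERSION 1"
        hFlush hout
        reply <- hGetLine hin
        return $ case words reply of
            ["VERSION", v] -> maybe 0 (min 1) (readMaybe v)
            _ -> 0  -- old peer: stay on the original protocol version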
Version 1 of the P2P protocol is not implemented yet, but updated
the docs for the DATA change that will be allowed by that version.
This commit was sponsored by Jeff Goeke-Smith on Patreon.
Much like Remote.P2P, there's a pool of connections to a peer, in order
to support concurrent operations.
Deals with an old git-annex-shell on the remote that does not support p2pstdio,
by only trying once to use it, and remembering if it's not supported.
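A minimal sketch of the connection pool idea mentioned above; the names
and types are illustrative, not the Remote.Git internals, and a real
version would use bracket for exception safety. An idle connection is
reused when available, otherwise a new one is opened, and the connection
goes back into the pool afterwards.

    import Control.Concurrent.STM

    withPoolConnection :: TVar [conn] -> IO conn -> (conn -> IO a) -> IO a
    withPoolConnection pool openNew use = do
        mc <- atomically $ do
            cs <- readTVar pool
            case cs of
                (c:rest) -> writeTVar pool rest >> return (Just c)
                [] -> return Nothing
        c <- maybe openNew return mc
        r <- use c
        atomically $ modifyTVar' pool (c:)
        return r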
Made p2pstdio send an AUTH_SUCCESS with its uuid, which serves the dual
purposes of letting the caller detect that the connection is working,
and verifying that it's connected to the right uuid.
(There's a redundant uuid check since the uuid field is sent
by git_annex_shell, but I anticipate that being removed later when
the legacy git-annex-shell stuff gets removed.)
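Illustrative only, not the Remote.Git code: after starting
"git-annex-shell p2pstdio", read the first protocol line and require it
to be an AUTH_SUCCESS carrying the uuid the remote is expected to have.

    import System.IO

    checkAuthSuccess :: String -> Handle -> IO Bool
    checkAuthSuccess expecteduuid hin = do
        l <- hGetLine hin
        return $ case words l of
            ["AUTH_SUCCESS", u] -> u == expecteduuid
            _ -> False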
Not entirely happy with Remote.Git.runSsh's behavior
when the proto action fails. Running the fallback will work ok, but what
will we do when the fallbacks later get removed? It might be better to
try to reconnect, in case the connection got closed.
This commit was sponsored by Boyd Stephen Smith Jr. on Patreon.
Not yet used by git-annex, but this will allow faster transfers etc than
using individual ssh connections and rsync.
Not called git-annex-shell p2p, because git-annex p2p does something
else and I don't want two subcommands with the same name between the two
for sanity reasons.
This commit was sponsored by Øyvind Andersen Holm.
lockContentShared had a screwy caveat: it did not verify that the content
was present when locking it, although in the most common case, eg indirect
mode, it did in fact fail to lock when the content was not present.
That led to a few callers forgetting to check inAnnex when using it,
but the potential data loss was unlikely to be noticed because, I think,
it only affected direct mode.
Fix data loss bug when the local repository uses direct mode, and a
locally modified file is dropped from a remote repository. The bug
caused the modified file to be counted as a copy of the original file.
(This is not a severe bug because in such a situation, dropping
from the remote and then modifying the file is allowed and has the same
end result.)
And, in content locking over tor, when the remote repository is
in direct mode, it neglected to check that the content was actually
present when locking it. This could cause git annex drop to remove
the only copy of a file when it thought the tor remote had a copy.
So, make lockContentShared do its own inAnnex check. This could perhaps
be optimised to skip the check in indirect mode, since there locking
the content necessarily verifies it exists, but I have not bothered
with that.
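A self-contained sketch of the pattern adopted here; Key, inAnnex, and
withSharedLock are stand-ins for git-annex's own types and locking
primitives, not its real API. The point is only that presence is now
checked inside the lock action itself.

    import Control.Monad (unless)

    newtype Key = Key String

    lockContentShared :: Key -> IO a -> IO a
    lockContentShared key action = do
        present <- inAnnex key
        unless present $
            ioError (userError "content not present, cannot lock")
        withSharedLock key action
      where
        -- Hypothetical stand-ins for the real presence check and lock.
        inAnnex _ = return True
        withSharedLock _ a = a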
This commit was sponsored by Jeff Goeke-Smith on Patreon.
This way we know that after enable-tor, the tor hidden service is fully
published and working, and so there should be no problems with it at
pairing time.
It has to start up its own temporary listener on the hidden service. It
would be nice to have it start the remotedaemon running, so that extra
step is not needed afterwards. But, there may already be a remotedaemon
running, in communication with the assistant and we don't want to start
another one. I thought about trying to HUP any running remotedaemon, but
Windows does not make it easy to do that. In any case, having the user
start the remotedaemon themselves lets them know it needs to be running
to serve the hidden service.
This commit was sponsored by Boyd Stephen Smith Jr. on Patreon.
This reverts commit 3037feb1bf.
On second thought, this was an overcomplication of what should be the
lowest-level primitive. Let's build bi-directional links at the pairing
level with eg magic wormhole.
Both the local and remote git repositories get remotes added
pointing at one another.
Makes pairing twice as easy!
Security: The new LINK command in the protocol can be sent repeatedly,
but only by a peer who has authenticated with us. So, it's entirely safe to
add a link back to that peer, or to some other peer it knows about.
Anything we receive over such a link, the peer could send us over the
current connection.
There is some risk of being flooded with LINKs, and adding too many
remotes. To guard against that, there's a hard cap on the number of remotes
that can be set up this way. This will only be a problem if setting up
large p2p networks that have exceptional interconnectedness.
A new, dedicated authtoken is created when sending LINK.
This also allows, in theory, using a p2p network like tor, to learn about
links on other networks, like telehash.
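Illustrative only; maxLinks and the list representation are hypothetical,
not git-annex's actual cap or data structures. The guard simply refuses
to create more peer-requested remotes once the cap is reached.

    maxLinks :: Int
    maxLinks = 100

    addLinkedRemote :: [String] -> String -> Either String [String]
    addLinkedRemote existing newaddr
        | newaddr `elem` existing = Right existing
        | length existing >= maxLinks = Left "too many LINK remotes; ignoring"
        | otherwise = Right (newaddr : existing)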
This commit was sponsored by Bruno BEAUFILS on Patreon.
Seems that git upload-pack outputs a "ONCDN " that is not read by the
remote git receive-pack. This fixes:
[2016-12-09 17:08:32.77159731] P2P > ERROR protocol parse error: "ONCDN "
This is more efficient. Note that the peer will get CHANGED messages for
all refs changed since the connection opened, even if those changes
happened before it sent NOTIFYCHANGE.
Added change notification to the P2P protocol.
Switched to a TBChan so that a single long-running thread can be
started, and serve perhaps intermittent requests for change
notifications, without buffering all changes in memory.
The P2P runner currently starts up a new thread each time it waits
for a change, but this should allow reusing a thread later. Each
connection from a peer will still need its own watcher thread, though.
The dependency on stm-chans is more or less free; some stuff in yesod
uses it, so it was already indirectly pulled in when building with the
webapp.
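A rough sketch of the structure described above, not the remotedaemon's
actual code: one long-running watcher pushes ref change notifications
into a bounded TBChan, so slow or intermittent readers never cause
unbounded buffering. watchRefs is a hypothetical stand-in for whatever
watches git refs for changes.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM (atomically)
    import Control.Concurrent.STM.TBChan  -- from the stm-chans package

    startChangeWatcher :: ((String -> IO ()) -> IO ()) -> IO (TBChan String)
    startChangeWatcher watchRefs = do
        chan <- newTBChanIO 100  -- bounded: writes block once 100 changes queue up
        _ <- forkIO $ watchRefs (\ref -> atomically (writeTBChan chan ref))
        return chan

    -- Each peer's NOTIFYCHANGE handler just blocks on the channel.
    waitChange :: TBChan String -> IO String
    waitChange = atomically . readTBChan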
This commit was sponsored by Francois Marier on Patreon.
The attacker could just send a very large amount of data, with no \n, and it
would all be buffered in memory until the kernel killed git-annex, or perhaps
the OOM killer killed some other more valuable process.
This is a low-impact security hole, only affecting communication between
a local git-annex and git-annex-shell on the remote system (with either
able to be the attacker). Only those with the right ssh key can do it. And
there are probably lots of ways to construct git repositories that make git
use a lot of memory in various ways, which would have a similar impact to
this attack.
The problem fixed in P2P/IO.hs would have been higher impact, had it made
it into a released version, since it would have allowed DoSing the tor
hidden service without needing to authenticate.
(The LockContent and NotifyChanges instances may not be really
exploitable; since the line is read and ignored, it probably gets read
lazily and does not end up staying buffered in memory.)
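A minimal sketch of the kind of fix involved, not git-annex's actual code:
read a protocol line byte by byte, but give up once a (hypothetical) bound
is exceeded instead of buffering attacker-controlled data without limit.

    import System.IO
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as B8

    getBoundedLine :: Handle -> Int -> IO (Maybe B.ByteString)
    getBoundedLine h maxlen = go [] 0
      where
        go acc n
            | n > maxlen = return Nothing  -- too long; treat as a protocol error
            | otherwise = do
                eof <- hIsEOF h
                if eof
                    then return Nothing
                    else do
                        c <- B.hGet h 1
                        if c == B8.pack "\n"
                            then return (Just (B.concat (reverse acc)))
                            else go (c:acc) (n + 1)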
I'm unsure why this fixed it, but it did. Seems to suggest that the
memory leak is not due to a bug in my code, but that ghc didn't manage
to take full advantage of laziness, or was failing to gc something it
could have.
The switch to hGetMetered subtly changed the laziness of how DATA was
read, and broke git protocol relaying. Fix by sending received data to
the git process's stdin immediately, which ensures that the lazy
bytestring is all read from the peer before going on to process the next
message from the peer.
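A simplified sketch of that fix: when a DATA payload for the relayed git
protocol arrives as a lazy ByteString, writing it to the git process's
stdin straight away forces all of it to be read from the peer before the
next protocol message is parsed. (Names here are illustrative.)

    import System.IO
    import qualified Data.ByteString.Lazy as L

    relayData :: Handle -> L.ByteString -> IO ()
    relayData gitstdin b = do
        L.hPut gitstdin b   -- forces the lazy ByteString from the peer
        hFlush gitstdin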
Display progress meter on send and receive from remote.
Added a new hGetMetered that can read an exact number of bytes (or
less), updating a meter as it goes.
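A rough sketch of the idea behind hGetMetered, not the real
implementation: read up to n bytes in chunks, calling the meter update
with the running byte count, and return whatever was read (possibly
less, on EOF).

    import System.IO
    import qualified Data.ByteString as S
    import qualified Data.ByteString.Lazy as L

    hGetMeteredSketch :: Handle -> Int -> (Int -> IO ()) -> IO L.ByteString
    hGetMeteredSketch h n updateMeter = L.fromChunks <$> go 0
      where
        chunkSize = 32768
        go sofar
            | sofar >= n = return []
            | otherwise = do
                chunk <- S.hGet h (min chunkSize (n - sofar))
                if S.null chunk
                    then return []  -- EOF before n bytes arrived
                    else do
                        let sofar' = sofar + S.length chunk
                        updateMeter sofar'
                        (chunk :) <$> go sofar'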
This commit was sponsored by Andreas on Patreon.
This is needed in addition to StoreContent, because retrieveKeyFile can
be used to retrieve to different destination files, not only the tmp
file for a key.
This commit was sponsored by Ole-Morten Duesund on Patreon.
Similar to GCrypt remotes, P2P remotes have an url, so Remote.Git has to
separate them out and handle them, passing off to Remote.P2P.
This commit was sponsored by Ignacio on Patreon.
ReadContent can't update the log, since it reads lazily. This part of
the P2P monad will need to be rethought.
Associated files are heavily sanitized when received from a peer;
they could be an exploit vector.
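Illustrative only; git-annex's actual sanitization differs, but this
shows the sort of checks needed on an associated file path received from
an untrusted peer before it is used or logged.

    import System.FilePath

    sanitizeAssociatedFile :: FilePath -> Maybe FilePath
    sanitizeAssociatedFile f
        | null f = Nothing
        | isAbsolute f = Nothing
        | ".." `elem` splitDirectories f = Nothing
        | otherwise = Just (joinPath (splitDirectories f))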
This commit was sponsored by Jochen Bartl on Patreon.
Untested, and it does not yet update transfer logs.
Verifying transferred content is modeled on git-annex-shell recvkey.
In a direct mode or annex.thin repository, content can change while it's
being transferred. So, verification is always done, even if annex.verify
would normally prevent it.
Note that a WORM or URL key could change in a way the verification
doesn't catch. That can happen in git-annex-shell recvkey too. We don't
worry about it, because those key backends don't guarantee preservation
of data. (Which is to say, I worried about it, and then convinced myself
again it was ok.)
Each worker thread needs to run in the Annex monad, but the
remote-daemon's liftAnnex can only run 1 action at a time. Used
Annex.Concurrent to deal with that.
P2P.Annex is still incomplete.
It's possible, in direct or thin mode, that an object file gets
truncated or appended to as it's being sent. This would break the
protocol badly, so make sure never to send too many bytes, and to
close the protocol connection if too few bytes are available.
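A rough sketch of that guard; names are illustrative, not git-annex's
code. Copy at most n bytes (the size announced for the payload) from the
object file to the connection; if the file has shrunk, report failure so
the caller can close the protocol connection rather than send a short
payload.

    import System.IO
    import qualified Data.ByteString as S

    sendExactly :: Handle -> Handle -> Integer -> IO Bool
    sendExactly from to n
        | n <= 0 = return True
        | otherwise = do
            chunk <- S.hGet from (fromIntegral (min n 32768))
            if S.null chunk
                then return False  -- file was truncated while sending
                else do
                    S.hPut to chunk
                    sendExactly from to (n - fromIntegral (S.length chunk))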