Commit graph

38 commits

Author SHA1 Message Date
Joey Hess
2a45b5ae9a
avoid failure to lock content of removed file causing drop etc to fail
This was already prevented in other ways, but as seen in commit
c30fd24d91, those were a bit fragile.
And I'm not sure races were avoided in every case before. At least a
race between two separate git-annex processes, dropping the same
content, seemed possible.

This way, if locking fails, and the content is not present, it will
always do the right thing. Also, it avoids the overhead of an unnecessary
inAnnex check for every file.
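
The shape of that check can be sketched like this (illustrative Haskell with
hypothetical helper parameters, not the actual git-annex code):

    {-# LANGUAGE ScopedTypeVariables #-}
    import Control.Exception (SomeException, try)

    -- A failed lock only fails the drop when the content is still present;
    -- if the content is already gone, the drop counts as a success.
    dropWithLock
        :: IO lock          -- take the content lock (may throw)
        -> IO Bool          -- is the content still present?
        -> (lock -> IO ())  -- remove the content while holding the lock
        -> IO Bool
    dropWithLock takeLock isPresent removeIt = do
        r <- try takeLock
        case r of
            Right lck -> removeIt lck >> pure True
            Left (_ :: SomeException) -> do
                present <- isPresent
                pure (not present)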

This commit was sponsored by Denis Dzyubenko on Patreon.
2020-07-25 11:59:33 -04:00
Joey Hess
aa1ad0b7ca
remove redundant imports
Clean build under ghc 8.8.3, which seems to do better at finding cases
where two imports both provide the same symbol, and warns about one of
them.

This commit was sponsored by Ilya Shlyakhter on Patreon.
2020-06-22 11:05:34 -04:00
Joey Hess
c19211774f
use filepath-bytestring for annex object manipulations
git-annex find is now RawFilePath end to end, no string conversions.
So is git-annex get when it does not need to get anything.
So this is a major milestone in optimisation.

Benchmarks indicate around 30% speedup in both commands.

Probably many other performance improvements. All or nearly all places
where a file is statted use RawFilePath now.
2019-12-11 15:25:07 -04:00
Joey Hess
40ecf58d4b
update licenses from GPL to AGPL
This does not change the overall license of the git-annex program, which
was already AGPL due to a number of sources files being AGPL already.

Legally speaking, I'm adding a new license under which these files are
now available; I already released their current contents under the GPL
license. Now they're dual licensed GPL and AGPL. However, I intend
for all my future changes to these files to only be released under the
AGPL license, and I won't be tracking the dual licensing status, so I'm
simply changing the license statement to say it's AGPL.

(In some cases, others wrote parts of the code of a file and released it
under the GPL; but in all cases I have contributed a significant portion
of the code in each file and it's that code that is getting the AGPL
license; the GPL license of other contributors allows combining with
AGPL code.)
2019-03-13 15:48:14 -04:00
Joey Hess
a0e573a3cc
comment typo 2018-11-12 11:53:44 -04:00
Joey Hess
6ecd55a9fa
Fixed some other potential hangs in the P2P protocol
Finishes the start made in 983c9d5a53, by
handling the case where `transfer` fails for some other reason, and so the
ReadContent callback does not get run. I don't know of a case where
`transfer` does fail other than the locking dealt with in that commit, but
it's good to have a guarantee.

StoreContent and StoreContentTo had a similar problem.
Things like `getViaTmp` may decide not to run the transfer action.
And `transfer` could certainly fail, if another transfer of the same
object was in progress. (Or a different object when annex.pidlock is set.)

If the transfer action was not run, the content of the object would
not all get consumed, and so would get interpreted as protocol commands,
which would not go well.

My approach to fixing all of these things is to set a TVar only
once all the data in the transfer is known to have been read/written.
This way the internals of `transfer`, `getViaTmp` etc don't matter.
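
The pattern is roughly this (an illustrative sketch, not the actual
P2P.Annex code):

    import Control.Concurrent.STM

    -- The flag is set only after every byte has been read/written, so the
    -- caller can tell afterwards whether the wrapped action really consumed
    -- the data, whatever transfer/getViaTmp decided to do internally.
    withCompletionFlag :: (IO () -> IO a) -> IO (a, Bool)
    withCompletionFlag action = do
        done <- newTVarIO False
        let markDone = atomically (writeTVar done True)
        r <- action markDone
        completed <- readTVarIO done
        pure (r, completed)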

So in ReadContent, it checks if the transfer completed.
If not, as long as it didn't throw an exception, send empty and Invalid
data to the callback. On an exception the state of the protocol is unknown
so it has to raise ProtoFailureException and close the connection,
same as before.

In StoreContent, if the transfer did not complete, some portion of the DATA
has been read, so the protocol is in an unknown state and it has to close
the connection as well.

(The ProtoFailureMessage used here matches the one in Annex.Transfer, which
is the most likely reason. Not ideal to duplicate it..)

StoreContent did not ever close the protocol connection before. So this is
a protocol change, but only in an exceptional circumstance, and it's not
going to break anything, because clients already need to deal with the
connection breaking at any point.

The way this new behavior looks (here origin has annex.pidlock = true, so it
will only accept one upload at a time):

git annex copy --to origin -J2
copy x (to origin...) ok
copy y (to origin...)
  Lost connection (fd:25: hGetChar: end of file)

This work is supported by the NIH-funded NICEMAN (ReproNim TR&D3) project.
2018-11-06 14:52:32 -04:00
Joey Hess
983c9d5a53
git-annex-shell: fix transfer hang
Fix hang when transferring the same objects to two different clients at the
same time. (Or when annex.pidlock is used, two different objects to the
same or different clients.)

Could also potentially occur if a client was downloading an object and
somehow lost connection but that git-annex-shell was still running and
holding the transfer lock.

This does not guarantee that, if `transfer` fails for some other reason,
a DATA response will be made.

This work is supported by the NIH-funded NICEMAN (ReproNim TR&D3) project.
2018-11-06 13:00:37 -04:00
Joey Hess
0b053b9611
Fix a P2P protocol hang
When readContent got Nothing from prepSendAnnex, it did not run its
callback, and the callback is what sends the DATA reply.

sendContent checks with contentSize that the object file is present, but
that doesn't really guarantee that prepSendAnnex won't return Nothing.

So, it was possible for a P2P protocol GET to not receive a response, and
appear to hang, when what it's really doing is waiting for the next
protocol command.

This seems most likely to happen when the annex is in direct mode, and the
file being requested has been modified. It could also happen in an indirect
mode repository if genInodeCache somehow failed. Perhaps due to a race
with a drop of the content file.

Fixed by making readContent behave the way its spec said it should,
and run the callback with L.empty in this case.

Note that it's fine for readContent to send any amount of data
to the callback, including L.empty. sendBytes deals with that
by making sure it sends exactly the specified number of bytes,
aborting the protocol if it's too short. So, when L.empty is sent,
the protocol will end up aborting.
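
That guarantee can be sketched like this (illustrative only; the real
sendBytes streams the data rather than forcing it all at once):

    import qualified Data.ByteString.Lazy as L
    import System.IO (Handle)

    -- Send exactly n bytes; if the callback produced less, abort the
    -- protocol instead of letting it get out of sync.
    sendExactly :: Handle -> Integer -> L.ByteString -> IO (Either String ())
    sendExactly h n b
        | L.length b' < fromIntegral n =
            pure (Left "content file smaller than expected; aborting")
        | otherwise = L.hPut h b' >> pure (Right ())
      where
        b' = L.take (fromIntegral n) b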

This work is supported by the NIH-funded NICEMAN (ReproNim TR&D3) project.
2018-11-02 13:41:50 -04:00
Joey Hess
6134431254
clean P2P protocol shutdown on EOF try 2
Same goal as b18fb1e343 but without
breaking backwards compatibility. Just return IO exceptions when running
the P2P protocol, so that git-annex-shell can detect eof and avoid the
ugly message.

This commit was sponsored by Ethan Aubin.
2018-09-25 16:49:59 -04:00
Joey Hess
b657242f5d
enforce retrievalSecurityPolicy
Leveraged the existing verification code by making it also check the
retrievalSecurityPolicy.

Also, prevented getViaTmp from running the download action at all when the
retrievalSecurityPolicy is going to prevent verifying and so storing it.

Added annex.security.allow-unverified-downloads. A per-remote version
would be nice to have too, but would need more plumbing, so KISS.
(Bill the Cat reference not too over the top I hope. The point is to
make this something the user reads the documentation for before using.)
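
Roughly, the guard looks like this (an illustrative sketch; the type is
simplified and the function name is made up):

    -- Only allow the download action to run when the policy permits
    -- storing the result.
    data RetrievalSecurityPolicy
        = RetrievalAllKeysSecure        -- any key may be downloaded
        | RetrievalVerifiableKeysSecure -- only keys whose content can be verified

    downloadAllowed :: RetrievalSecurityPolicy -> Bool -> Bool -> Bool
    downloadAllowed policy keyisverifiable allowunverified = case policy of
        RetrievalAllKeysSecure -> True
        RetrievalVerifiableKeysSecure -> keyisverifiable || allowunverified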

A few calls to verifyKeyContent and getViaTmp, that don't
involve downloads from remotes, have RetrievalAllKeysSecure hard-coded.
It was also hard-coded for P2P.Annex and Command.RecvKey,
to match the values of the corresponding remotes.

A few things use retrieveKeyFile/retrieveKeyFileCheap without going
through getViaTmp.
* Command.Fsck when downloading content from a remote to verify it.
  That content does not get into the annex, so this is ok.
* Command.AddUrl when using a remote to download an url; this is new
  content being added, so this is ok.

This commit was sponsored by Fernando Jimenez on Patreon.
2018-06-21 13:37:01 -04:00
Joey Hess
46d4316954
implement annex.retry et al
Added annex.retry, annex.retry-delay, and per-remote versions to configure
transfer retries.
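
In outline (illustrative sketch, not the actual implementation):

    import Control.Concurrent (threadDelay)

    -- Run a transfer, retrying it up to numretries times with a fixed
    -- delay (in seconds) between attempts.
    retryTransfer :: Int -> Int -> IO Bool -> IO Bool
    retryTransfer numretries delayseconds transfer = do
        ok <- transfer
        if ok || numretries <= 0
            then pure ok
            else do
                threadDelay (delayseconds * 1000000) -- microseconds
                retryTransfer (numretries - 1) delayseconds transfer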

This commit was supported by the NSF-funded DataLad project.
2018-03-29 13:04:07 -04:00
Joey Hess
31e1adc005
deal with unlocked files
P2P protocol version 1 adds VALID|INVALID after DATA; INVALID means the
file was detected to change content while it was being sent and so we
may not have received the valid content of the file.

Added new MustVerify constructor for Verification, which forces
verification even when annex.verify=false etc. This is used when INVALID
and in protocol version 0.
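
A sketch of how that plays out (only the MustVerify constructor comes from
this commit; the rest is illustrative):

    data Verification
        = UnVerified  -- content was not verified while being transferred
        | Verified    -- content was verified as part of the transfer
        | MustVerify  -- force verification, overriding annex.verify=false

    shouldVerify :: Verification -> Bool -> Bool
    shouldVerify MustVerify _           = True
    shouldVerify Verified   _           = False
    shouldVerify UnVerified annexverify = annexverify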

As well as changing git-annex-shell p2pstdio, this makes git-annex tor
remotes always force verification, since they don't yet use protocol
version 1. Previously, annex.verify=false could skip verification when
using tor remotes, and let bad data into the repository.

This commit was sponsored by Jack Hill on Patreon.
2018-03-13 14:27:14 -04:00
Joey Hess
e16b069331
use total size from DATA
Noticed that getting a key whose size is not known resulted in a
progress display that didn't include the percent complete.

Fixed for P2P by making the size sent with DATA be used to update the
meter's total size.

In order for rateLimitMeterUpdate to also learn the total size,
had to make it be passed the Meter, and some other reorg in
Utility.Metered was also done so that --json-progress can construct a
Meter to pass to rateLimitMeterUpdate.
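
The shape of the change, much simplified (not the real Utility.Metered API):

    import Data.IORef

    -- A meter whose total size can be filled in later, eg from the size
    -- carried in the P2P DATA message.
    data Meter = Meter
        { meterTotal :: IORef (Maybe Integer)
        , meterDone  :: IORef Integer
        }

    setMeterTotalSize :: Meter -> Integer -> IO ()
    setMeterTotalSize m sz = writeIORef (meterTotal m) (Just sz)

    percentComplete :: Meter -> IO (Maybe Double)
    percentComplete m = do
        total <- readIORef (meterTotal m)
        done <- readIORef (meterDone m)
        pure $ case total of
            Just t | t > 0 -> Just (fromIntegral done / fromIntegral t * 100)
            _ -> Nothing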

When the fallback rsync is done, the progress display still doesn't
include the percent complete. Only way to fix that seems to be to let rsync
display its output again, but that would conflict with git-annex's
own progress meter, which is also being displayed.

This commit was sponsored by Henrik Riomar on Patreon.
2018-03-12 21:46:58 -04:00
Joey Hess
596af7cbc4
move protocol version stuff to the Net free monad
Needs to be in Net not Local, so that Net actions can take the protocol
version into account.

This commit was sponsored by an anonymous bitcoin donor.
2018-03-12 15:20:51 -04:00
Joey Hess
c81768d425
version the P2P protocol
Unfortunately ReceiveMessage didn't handle unknown messages the way it
was documented to; client sending VERSION would cause the server to
return an ERROR and hang up. Fixed that, but old releases of git-annex
use the P2P protocol for tor and will still have that behavior.

So, version is not negotiated for Remote.P2P connections, only for
Remote.Git connections, which will support VERSION from their first
release. There will need to be a later flag day to change Remote.P2P;
left a commented out line that is the only thing that will need to be
changed then.

Version 1 of the P2P protocol is not implemented yet, but updated
the docs for the DATA change that will be allowed by that version.
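
The negotiation itself is simple; sketched here with made-up names:

    newtype ProtocolVersion = ProtocolVersion Int
        deriving (Eq, Ord, Show)

    -- The highest version this side implements.
    maxProtocolVersion :: ProtocolVersion
    maxProtocolVersion = ProtocolVersion 1

    -- Reply to a client's VERSION with the highest version both sides support.
    negotiateVersion :: ProtocolVersion -> ProtocolVersion
    negotiateVersion clientversion = min clientversion maxProtocolVersion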

This commit was sponsored by Jeff Goeke-Smith on Patreon.
2018-03-12 14:36:35 -04:00
Joey Hess
ba53f60801
refactor 2018-03-06 15:14:53 -04:00
Joey Hess
00be07070c
fix build on windows 2016-12-30 12:31:51 -04:00
Joey Hess
b219be5100
refactor 2016-12-30 12:31:17 -04:00
Joey Hess
38f9337e16
Revert "p2p --link now defaults to setting up a bi-directional link"
This reverts commit 3037feb1bf.

On second thought, this was an overcomplication of what should be the
lowest-level primitive. Let's build bi-directional links at the pairing
level with eg magic wormhole.
2016-12-16 18:26:07 -04:00
Joey Hess
3037feb1bf
p2p --link now defaults to setting up a bi-directional link
Both the local and remote git repositories get remotes added
pointing at one-another.

Makes pairing twice as easy!

Security: The new LINK command in the protocol can be sent repeatedly,
but only by a peer who has authenticated with us. So, it's entirely safe to
add a link back to that peer, or to some other peer it knows about.
Anything we receive over such a link, the peer could send us over the
current connection.

There is some risk of being flooded with LINKs, and adding too many
remotes. To guard against that, there's a hard cap on the number of remotes
that can be set up this way. This will only be a problem if setting up
large p2p networks that have exceptional interconnectedness.

A new, dedicated authtoken is created when sending LINK.

This also allows, in theory, using a p2p network like tor to learn about
links on other networks, like telehash.

This commit was sponsored by Bruno BEAUFILS on Patreon.
2016-12-16 16:38:06 -04:00
Joey Hess
16c6333f09
fix build with old ghc 2016-12-10 11:12:18 -04:00
Joey Hess
9dd510bf29
make tor hidden service work when directory watching is not available
Avoid crashing when built w/o inotify..
2016-12-09 16:40:47 -04:00
Joey Hess
f7687e0876
only start ref change watcher thread once per P2P connection
This is more efficient. Note that the peer will get CHANGED messages for
all refs changed since the connection opened, even if those changes
happened before it sent NOTIFYCHANGE.
2016-12-09 15:08:54 -04:00
Joey Hess
e152c322f8
refactor ref change watching
Added ref change notification to the P2P protocol.

Switched to a TBChan so that a single long-running thread can be
started, and serve perhaps intermittent requests for change
notifications, without buffering all changes in memory.
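
In outline (illustrative; error handling and the real watcher omitted):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM (atomically)
    import Control.Concurrent.STM.TBChan -- stm-chans package
    import Control.Monad (forever)

    -- One long-running thread feeds ref changes into a bounded channel,
    -- so a slow or absent reader cannot make it buffer without limit.
    startRefWatcher :: IO String -> IO (TBChan String)
    startRefWatcher waitForChange = do
        chan <- newTBChanIO 100
        _ <- forkIO $ forever $ do
            change <- waitForChange
            atomically $ writeTBChan chan change
        pure chan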

The P2P runner currently starts up a new thread each time it waits
for a change, but that should allow later reusing a thread. Although
each connection from a peer will still need a new watcher thread to run.

The dependency on stm-chans is more or less free; some stuff in yesod
uses it, so it was already indirectly pulled in when building with the
webapp.

This commit was sponsored by Francois Marier on Patreon.
2016-12-09 15:01:09 -04:00
Joey Hess
bdf2a31424
typo 2016-12-09 12:54:12 -04:00
Joey Hess
71e8cd408e
content removal is supposed to succeed if the content was already not present 2016-12-09 12:48:22 -04:00
Joey Hess
38516b2fca
update progress logs in remotedaemon send/receive 2016-12-08 19:56:02 -04:00
Joey Hess
0f4ee4f298
fix memory leak
I'm unsure why this fixed it, but it did. Seems to suggest that the
memory leak is not due to a bug in my code, but that ghc didn't manage
to take full advantage of laziness, or was failing to gc something it
could have.
2016-12-08 18:42:52 -04:00
Joey Hess
af41519126
convert P2P runners from Maybe to Either String
So we get some useful error messages when things fail.
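
The gist of the change (illustrative):

    -- Maybe loses the reason for a failure; Either String keeps it, so the
    -- runner can report why the protocol action did not complete.
    noteFailure :: String -> Maybe a -> Either String a
    noteFailure reason Nothing  = Left reason
    noteFailure _      (Just v) = Right v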

This commit was sponsored by Peter Hogg on Patreon.
2016-12-08 15:47:49 -04:00
Joey Hess
0541f19bea
fix math error that caused resumes to always fail 2016-12-07 15:36:39 -04:00
Joey Hess
db79b69aa0
ReadWriteMode not AppendMode
AppendMode does not allow seeking..
2016-12-07 15:24:28 -04:00
Joey Hess
99c36f318c
open file for append, not write, so resuming works
WriteMode zeros any existing content, so the seek filled with zeros, and
verification failed after download.
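
A minimal sketch of the resume-friendly open (illustrative):

    import System.IO

    -- WriteMode would truncate the partial download and AppendMode cannot
    -- seek, so open ReadWriteMode and seek to the resume offset.
    openForResume :: FilePath -> Integer -> IO Handle
    openForResume f offset = do
        h <- openBinaryFile f ReadWriteMode
        hSeek h AbsoluteSeek offset
        pure h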
2016-12-07 15:06:07 -04:00
Joey Hess
f744bd5391
refactor 2016-12-06 15:43:03 -04:00
Joey Hess
2bd2e0880c
added StoreContentTo
This is needed in addition to StoreContent, because retrieveKeyFile can
be used to retrieve to different destination files, not only the tmp
file for a key.

This commit was sponsored by Ole-Morten Duesund on Patreon.
2016-12-06 15:05:44 -04:00
Joey Hess
a8c868c2e1
plumb associated files through P2P protocol for updating transfer logs
ReadContent can't update the log, since it reads lazily. This part of
the P2P monad will need to be rethought.

Associated files are heavily sanitized when received from a peer;
they could be an exploit vector.

This commit was sponsored by Jochen Bartl on Patreon.
2016-12-02 16:42:54 -04:00
Joey Hess
b16a1cee4b
plumb peer uuid through to runLocal
This will allow updating transfer logs with the uuid.
2016-12-02 15:39:49 -04:00
Joey Hess
71ddb10699
initial implementation of P2P.Annex runner
Untested, and it does not yet update transfer logs.

Verifying transferred content is modeled on git-annex-shell recvkey.
In a direct mode or annex.thin repository, content can change while it's
being transferred. So, verification is always done, even if annex.verify
would normally prevent it.

Note that a WORM or URL key could change in a way the verification
doesn't catch. That can happen in git-annex-shell recvkey too. We don't
worry about it, because those key backends don't guarantee preservation
of data. (Which is to say, I worried about it, and then convinced myself
again it was ok.)
2016-12-02 14:54:33 -04:00
Joey Hess
881274d021
make remote-daemon able to send and receive objects over tor
Each worker thread needs to run in the Annex monad, but the
remote-daemon's liftAnnex can only run 1 action at a time. Used
Annex.Concurrent to deal with that.

P2P.Annex is incomplete as of yet.
2016-12-02 13:52:43 -04:00