Commit graph

3705 commits

Author SHA1 Message Date
asakurareiko@f3d908c71c009580228b264f63f21c7274df7476
ae00d385fe Added a comment 2021-10-07 06:17:30 +00:00
asakurareiko@f3d908c71c009580228b264f63f21c7274df7476
bcf7bd8505 2021-10-07 05:53:23 +00:00
jkniiv
f36272be0c Added a comment: the WSL1 use case 2021-10-07 04:12:48 +00:00
jkniiv
2b06612ed5 rename WSL1 section to highlight date, add wording about being experimental, reword some awkwardness, add further directions 2021-10-07 03:56:37 +00:00
asakurareiko@f3d908c71c009580228b264f63f21c7274df7476
c3afda1699 Add steps for WSL1 2021-10-06 22:35:27 +00:00
spwhitton
3b4e56c760 Added a comment 2021-10-02 17:04:02 +00:00
Joey Hess
9012fa0187
reinject: Fix crash when reinjecting a file from outside the repository
Commit 4bf7940d6b introduced this
problem, but was otherwise doing a good thing. Problem being
that fileRef "/foo" used to return ":./foo", which was actually wrong,
but as long as there was no foo in the local repository, catKey
could operate on it without crashing. After that fix though, the path
would be relativized to eg "../../foo", so fileRef would return
":./../../foo", which makes git cat-file crash since that's not a
valid path in the repo.

Fix is simply to make fileRef detect paths outside the repo and return
Nothing. Then catKey can be skipped. This needed several bugfixes to
dirContains as well, in previous commits.

In Command.Smudge, this led to needing to check for Nothing. That case
should actually never happen, because the fileoutsiderepo check will
detect it earlier.

Sponsored-by: Brock Spratlen on Patreon
2021-10-01 14:06:34 -04:00
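A minimal sketch of the fix's idea, under simplified assumptions (the real fileRef, dirContains, and catKey in git-annex have different signatures and handle more cases): only build a ":./..." git ref for a path that stays inside the repository top, and return Nothing otherwise so callers can skip catKey.

```haskell
import Data.List (isPrefixOf, stripPrefix)
import System.FilePath (normalise, splitDirectories)

-- Hypothetical stand-in for dirContains: is path inside dir, after
-- normalisation? (The real function handles more edge cases.)
dirContains :: FilePath -> FilePath -> Bool
dirContains dir path =
    splitDirectories (normalise dir) `isPrefixOf` splitDirectories (normalise path)

-- Hypothetical fileRef: produce a ":./..." ref only for files inside the
-- repository; Nothing means the path escapes the repo, so catKey is skipped.
fileRef :: FilePath -> FilePath -> Maybe String
fileRef repotop file
    | dirContains repotop file =
        (":./" ++) <$> stripPrefix (normalise repotop ++ "/") (normalise file)
    | otherwise = Nothing

main :: IO ()
main = do
    print (fileRef "/repo" "/repo/foo")      -- Just ":./foo"
    print (fileRef "/repo" "/outside/foo")   -- Nothing: outside the repo
```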
spwhitton
6fbca0bb5b add sign off 2021-09-30 21:19:39 +00:00
spwhitton
c6e1a6a3a1 post bug 2021-09-30 21:15:04 +00:00
Joey Hess
a92427c0d3
comment 2021-09-27 14:14:52 -04:00
Joey Hess
69c1c02339
comment 2021-09-27 13:59:54 -04:00
Joey Hess
64cac1a721
avoid potentially very long bwlimit delay at start
I first saw this when getting with -J2 over ssh, but later saw it also
without the -J2. It was resuming, and the calculated unboundDelay was
many minutes. The first update of the meter jumped to some large value,
because of the resuming, and so it thought the BW was super fast.

Avoid by waiting until the second meter update.

Might be a good idea to also guard for the delay being many seconds
and avoid waiting. But how many? If BW is legitimately super fast, and a
remote happens to read more than a 32kb or so chunk at a time, it could
in theory download megabytes or gigabytes of data before the first meter
update. It would actually be appropriate then to delay for a long time,
if the desired BW was low. Could make up some numbers that are sane now,
but tech may improve.

(BTW, pleased to see bwlimit does work with -J. I had worried that
it might not, if the meter update happened in a different thread than
the downloading, but it's done in the same thread.)

Sponsored-by: Brett Eisenberg on Patreon
2021-09-22 19:23:30 -04:00
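A rough sketch of the bandwidth-limiting approach described here, with assumed names and types (not git-annex's actual bwlimit code): on each meter update, sleep just long enough that bytes over elapsed time stays at or below the limit, and treat the first update only as a baseline, since a resumed transfer reports a huge initial jump.

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (when)
import Data.IORef
import Data.Time.Clock.POSIX (getPOSIXTime)

-- Hypothetical limiter state: bytes-per-second cap, plus the byte count
-- and time recorded at the first meter update (Nothing until then).
data Limiter = Limiter Integer (IORef (Maybe (Integer, Double)))

newLimiter :: Integer -> IO Limiter
newLimiter bps = Limiter bps <$> newIORef Nothing

-- Called with the total bytes transferred so far on each meter update.
onMeterUpdate :: Limiter -> Integer -> IO ()
onMeterUpdate (Limiter bps baseref) totalbytes = do
    now <- realToFrac <$> getPOSIXTime
    base <- readIORef baseref
    case base of
        -- First update: only record a baseline. When resuming, this
        -- update jumps to the already-downloaded size, so delaying on
        -- it would produce an unbounded, minutes-long sleep.
        Nothing -> writeIORef baseref (Just (totalbytes, now))
        Just (bytes0, time0) -> do
            let elapsed = now - time0
                wanted = fromIntegral (totalbytes - bytes0)
                       / fromIntegral bps :: Double
                delay = wanted - elapsed
            when (delay > 0) $
                threadDelay (round (delay * 1000000))
```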
Joey Hess
1033a81d63
mention tiny wart 2021-09-22 15:29:07 -04:00
Joey Hess
fc9abca231
Merge branch 'bwlimit' 2021-09-22 15:27:28 -04:00
Joey Hess
e8496d62e4
improved bwrate limiting implementation
The new method is much better. It avoids unrestrained transfer at the beginning
(except for the first block), and keeps right at or a few kb/s below the
configured limit, with very little variation in the actual reported bandwidth.

Removed the /s part of the config as it's not needed.

Ready to merge.

Sponsored-by: Luke Shumaker on Patreon
2021-09-22 15:27:16 -04:00
Joey Hess
00c452f0db
Merge branch 'master' of ssh://git-annex.branchable.com 2021-09-22 11:15:51 -04:00
Joey Hess
44d3d50785
note 2021-09-22 11:10:55 -04:00
Atemu
f71cffa401 Added a comment 2021-09-22 14:17:59 +00:00
Joey Hess
b76a4453cb
bwlimit branch 2021-09-22 09:54:14 -04:00
Joey Hess
b8130655cc
comment 2021-09-11 17:01:39 -04:00
Joey Hess
4f42292b13
improve url download failure display
* When downloading urls fail, explain which urls failed for which
  reasons.
* web: Avoid displaying a warning when downloading one url failed
  but another url later succeeded.

Some other uses of downloadUrl use urls that are effectively internal,
and should not all be displayed to the user on failure. Eg, Remote.Git
tries different urls where content could be located depending on how the
remote repo is set up. Exposing those urls to the user would lead to wild
goose chases. So had to parameterize it to control whether it displays urls
or not.

A side effect of this change is that when there are some youtube urls
and some regular urls, it will try regular urls first, even if the
youtube urls are listed first. This seems like an improvement if
anything, but in any case there's no defined order of urls that it's
supposed to use.

Sponsored-by: Dartmouth College's Datalad project
2021-09-01 15:33:38 -04:00
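A small sketch of the failure-collection idea with invented names (downloadAny is not a real git-annex function): try each url in turn, remember why each one failed, and only surface the per-url reasons when none succeeded. Whether those urls are then shown to the user or kept internal is left to the caller, which is the parameterization mentioned above.

```haskell
import Control.Exception (SomeException, try)

-- Hypothetical helper: run the download action on each url until one
-- succeeds. On total failure, return every url with its failure reason,
-- so the caller can decide whether to display the urls or suppress them.
downloadAny :: (String -> IO ()) -> [String] -> IO (Either [(String, String)] String)
downloadAny download = go []
  where
    go failed [] = return (Left (reverse failed))
    go failed (u:us) = do
        r <- try (download u) :: IO (Either SomeException ())
        case r of
            Right () -> return (Right u)            -- success: no warning needed
            Left e   -> go ((u, show e) : failed) us
```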
yarikoptic
07aa981379 initial TODO on improving 'get' errors reporting 2021-08-31 14:40:48 +00:00
Joey Hess
0418d54f28
update 2021-08-30 14:34:25 -04:00
Joey Hess
8208daaf17
idea for making more special remotes support importtree
Sponsored-by: Jack Hill on Patreon
2021-08-30 14:27:22 -04:00
Joey Hess
7b1709105a
Merge branch 'master' of ssh://git-annex.branchable.com 2021-08-27 12:30:35 -04:00
Joey Hess
3093e8fcc3
thought 2021-08-27 00:59:24 -04:00
jkniiv@b330fc3a602d36a37a67b2a2d99d4bed3bb653cb
3007e3c177 Added a comment: it turns out I had to file this as a bug 2021-08-27 01:38:29 +00:00
Joey Hess
ab7b5a492c
--batch-keys
New --batch-keys option added to these commands: get, drop, move, copy, whereis

git-annex-matching-options had to be reworded since some of its options
can be used to match on keys, not only files.

Sponsored-by: Luke Shumaker on Patreon
2021-08-25 14:21:12 -04:00
Joey Hess
8944569988
comment 2021-08-24 11:21:12 -04:00
jkniiv@b330fc3a602d36a37a67b2a2d99d4bed3bb653cb
20a252a129 Added a comment: git annex sync --no-commit --content takes double the time of git annex get . 2021-08-20 02:05:53 +00:00
Joey Hess
53744e132d
incremental verification for gitlfs and httpalso
And that should be all the special remotes supporting it on linux now,
except for the odd edge case here and there.

Sponsored-by: Dartmouth College's DANDI project
2021-08-18 15:17:10 -04:00
Joey Hess
f5e09a1dbe
incremental verification for S3
Sponsored-by: Dartmouth College's DANDI project
2021-08-18 15:07:00 -04:00
Joey Hess
d154e7022e
incremental verification for web special remote
Except when configuration causes curl to be used. It did not seem worth
trying to tail the file when curl is downloading.

But when an interrupted download is resumed, it does not read the whole
existing file to hash it. Same reason discussed in
commit 7eb3742e4b76d1d7a487c2c53bf25cda4ee5df43; that could take a long
time with no progress being displayed. And also there's an open http
request, which needs to be consumed; taking a long time to hash the file
might cause it to time out.

Also in passing implemented it for git and external special remotes when
downloading from the web. Several others like S3 are within striking
distance now as well.

Sponsored-by: Dartmouth College's DANDI project
2021-08-18 15:02:22 -04:00
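A sketch of the tail-and-verify idea under stated assumptions (the names and types here are invented; git-annex's tailVerify works differently and integrates with its incremental verifier): while the downloader appends to the file, keep reading whatever new bytes have appeared and feed them to a hash context, so no separate whole-file read is needed afterwards.

```haskell
import Control.Concurrent (threadDelay)
import Crypto.Hash (Digest, SHA256, hashFinalize, hashInit, hashUpdate)
import qualified Data.ByteString as B
import System.IO (IOMode (ReadMode), withFile)

-- Hypothetical tail-verify: poll the growing file, hashing newly appended
-- bytes, until the expected number of bytes has been seen.
tailVerify :: FilePath -> Integer -> IO (Digest SHA256)
tailVerify f expectedsize = withFile f ReadMode $ \h -> go h hashInit 0
  where
    go h ctx sofar
        | sofar >= expectedsize = return (hashFinalize ctx)
        | otherwise = do
            b <- B.hGetNonBlocking h 65536
            if B.null b
                then threadDelay 100000 >> go h ctx sofar  -- wait for more data
                else go h (hashUpdate ctx b)
                          (sofar + fromIntegral (B.length b))
```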
Joey Hess
1dca3ba26a
status update 2021-08-18 12:58:27 -04:00
Joey Hess
d67da1f4db
idea 2021-08-18 12:39:03 -04:00
Joey Hess
4bbc6a25fa
comment 2021-08-17 10:28:18 -04:00
Joey Hess
ffa1f6ed30
Merge branch 'master' of ssh://git-annex.branchable.com 2021-08-16 17:30:04 -04:00
Joey Hess
f8463ad52f
status update 2021-08-16 17:29:39 -04:00
Joey Hess
b1622eb932
incremental verify for directory special remote
Added fileRetriever', which will let the remaining special remotes
eventually also support incremental verify.

Sponsored-by: Dartmouth College's DANDI project
2021-08-16 16:51:33 -04:00
Joey Hess
ec82299730
status update
I was wrong about S3 supporting tailVerify.
2021-08-16 15:15:32 -04:00
https://christian.amsuess.com/chrysn
905fef31b3 Added a comment: Another example 2021-08-15 17:42:54 +00:00
Joey Hess
751242b55e
status update 2021-08-13 16:34:18 -04:00
Joey Hess
51d59fb260
comment 2021-08-12 14:49:48 -04:00
Joey Hess
7eb3742e4b
incremental verify for chunked remotes
Simply feed each chunk in turn to the incremental verifier.

When resuming an interrupted retrieve, it does not do incremental
verification. That would need to read the file, up to the resume point,
and feed it to the incremental verifier. That seems easy to get wrong.
Also it would mean extra work done before the transfer can start, which
would complicate displaying progress, and would perhaps not appear to the
user as if it were resuming from where it left off. Instead, in that
situation, return UnVerified, and let the verification be done in a
separate pass.

Granted, Annex.CopyFile does manage all that, but it's not complicated
by dealing with chunks too.

Sponsored-by: Dartmouth College's DANDI project
2021-08-11 14:42:49 -04:00
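A compact sketch of the chunk-feeding idea, using a plain SHA-256 context from cryptonite as a stand-in for git-annex's incremental verifier (the real interface differs): each chunk is hashed as it arrives, so verification finishes with the transfer instead of requiring a second pass over the file.

```haskell
import Crypto.Hash (Digest, SHA256, hashFinalize, hashInit, hashUpdate)
import qualified Data.ByteString as B

-- Feed each chunk in turn to the incremental hash context, then finalize.
-- A resumed transfer would instead return an unverified result and leave
-- verification to a separate pass, as described above.
verifyChunks :: [B.ByteString] -> Digest SHA256
verifyChunks = hashFinalize . foldl hashUpdate hashInit

main :: IO ()
main = print (verifyChunks [B.pack [1, 2, 3], B.pack [4, 5, 6]])
```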
Joey Hess
8886ff1cff
done! 2021-08-04 12:40:25 -04:00
Joey Hess
c67b1e31a6
branch 2021-08-03 17:05:50 -04:00
Joey Hess
629e95fd8e
update 2021-08-03 14:03:25 -04:00
Joey Hess
bb56186daa
new todo.. I seem to have cracked a longstanding problem
Sponsored-by: Jochen Bartl on Patreon
2021-08-03 13:51:23 -04:00
Joey Hess
461035c6ec
close
I'm now reasonably sure I've identified both cases where this can
happen: v8 upgrades, and certain filesystems, eg NFS. Both are handled as
well as can be, though it may involve some extra checksumming work.
2021-07-30 15:22:22 -04:00
Joey Hess
1fc6e6a65f
close this frustrating todo due to lack of followup and/or being fixed 2021-07-26 11:16:58 -04:00