Commit graph

45888 commits

Joey Hess
4c785c338a
p2phttp: notice when new repositories are added to --directory
When a uuid is not known, rescan for new repositories. Easy.

When a repository is removed, it will also get removed from the server
state on the next scan. But until a new uuid is seen, there will not be
a scan. This leaves the server trying to serve a uuid whose repository
is gone, which seems buggy. Getting just fails, but dropping fails the
first time and seems to leave the server in an unusable state, so the
next drop attempt hangs. The server is still able to serve other uuids;
only the one whose repository was removed has that problem.
2024-11-21 15:09:12 -04:00
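
A minimal sketch of that rescan-on-unknown-uuid flow, using hypothetical
types and names (ServerState, scanDirectory) rather than the real p2phttp
code:

    {-# LANGUAGE LambdaCase #-}

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as M

    -- Hypothetical stand-ins for the real p2phttp server state types.
    type UUID = String
    type PerRepoState = FilePath

    -- The map of known repositories, shared between request handlers.
    type ServerState = TVar (M.Map UUID PerRepoState)

    -- Look up a uuid; when it is not known, rescan --directory for new
    -- repositories and try once more.
    lookupOrRescan :: ServerState -> FilePath -> UUID -> IO (Maybe PerRepoState)
    lookupOrRescan st dir u =
        M.lookup u <$> readTVarIO st >>= \case
            Just s -> return (Just s)
            Nothing -> do
                found <- scanDirectory dir
                atomically $ modifyTVar' st (M.union found)
                M.lookup u <$> readTVarIO st

    -- Placeholder; the real scan finds repositories and their annex.uuid.
    scanDirectory :: FilePath -> IO (M.Map UUID PerRepoState)
    scanDirectory _dir = return M.empty
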
Joey Hess
758ea89c74
skip over repositories in --directory that do not have annex.uuid set 2024-11-21 14:18:18 -04:00
Joey Hess
3c18398d5a
p2phttp support --jobs with --directory
--jobs is usually an Annex option setter, but --directory runs in IO, so
it would not have that available. So instead the option parser was moved
into the command's Options.
2024-11-21 14:15:14 -04:00
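
Roughly, moving the parser into the command's Options looks like this with
optparse-applicative; the record and field names here are made up for
illustration:

    import Options.Applicative

    -- Hypothetical Options for the command; --jobs is parsed here rather
    -- than as an Annex option setter, so plain IO code can read it.
    data P2PHttpOptions = P2PHttpOptions
        { directoryOption :: Maybe FilePath
        , jobsOption :: Maybe Int
        }

    optionsParser :: Parser P2PHttpOptions
    optionsParser = P2PHttpOptions
        <$> optional (strOption
            ( long "directory"
            <> metavar "DIR"
            <> help "serve all repositories in a directory"
            ))
        <*> optional (option auto
            ( long "jobs"
            <> short 'J'
            <> metavar "N"
            <> help "number of concurrent jobs"
            ))

    main :: IO ()
    main = do
        opts <- execParser (info (optionsParser <**> helper) fullDesc)
        print (directoryOption opts, jobsOption opts)
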
Joey Hess
9f84dd82da
p2phttp --directory implementation
Untested, but it compiles, so.

Known problems:

* --jobs is not available to startIO
* Does not notice when new repositories are added to a directory.
* Does not notice when repositories are removed from a directory.
2024-11-21 14:02:58 -04:00
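
A hedged sketch of what scanning --directory for repositories could look
like, shelling out to git config to read annex.uuid and skipping
repositories that do not have it set (the real implementation presumably
uses git-annex's own config handling):

    import Control.Monad (filterM, forM)
    import Data.Maybe (catMaybes)
    import System.Directory (doesDirectoryExist, listDirectory)
    import System.Exit (ExitCode(..))
    import System.FilePath ((</>))
    import System.Process (readProcessWithExitCode)

    -- Find repositories under dir, paired with their annex.uuid.
    -- Repositories with no annex.uuid set are skipped.
    findAnnexRepos :: FilePath -> IO [(String, FilePath)]
    findAnnexRepos dir = do
        entries <- listDirectory dir
        subdirs <- filterM doesDirectoryExist (map (dir </>) entries)
        catMaybes <$> forM subdirs (\d -> do
            (code, out, _err) <- readProcessWithExitCode "git"
                ["-C", d, "config", "annex.uuid"] ""
            return $ case (code, lines out) of
                (ExitSuccess, (u:_)) | not (null u) -> Just (u, d)
                _ -> Nothing)
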
Joey Hess
6bdf4a85fb
move the p2phttp server state map into a data type 2024-11-21 12:24:14 -04:00
Joey Hess
d7ed99a55f
document p2phttp --directory
The option is not implemented yet.
2024-11-20 13:40:38 -04:00
Joey Hess
07026cf58b
add proxied uuids to http server state map
This fixes support for proxying after the last commit broke it.

Note that withP2PConnections is called at server startup, and so only
proxies seen at that point will appear in the map and be used. It was
already the case that a proxy added after p2phttp was running would not
be served.

I think that is possibly a bug, but at least this commit doesn't
introduce the problem, though it might make it harder to fix it.

As bugs go, it's probably not a big deal, because after all,
git configs need to be set in the local repository, followed by
git-annex updateproxy being run, to set up proxying. If someone is doing
that, they can restart their http server I suppose.
2024-11-20 13:22:25 -04:00
Joey Hess
254073569f
p2pHttpApp with a map of UUIDs to server states
This is early groundwork for making p2phttp support serving multiple
repositories from a single daemon.

So far only one repository is still served. And this commit breaks support
for proxying!
2024-11-20 12:51:25 -04:00
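
One plausible shape for that state map, with hypothetical field and type
names (the real P2PHttpServerState is more involved):

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as M

    type UUID = String

    -- Stand-in for whatever per-repository state the server keeps.
    newtype PerRepoServerState = PerRepoServerState { repoPath :: FilePath }

    -- Wrapping the map in its own type gives one place to later add
    -- things like proxied uuids, rather than passing a bare Map around.
    newtype P2PHttpServerState = P2PHttpServerState
        { serverRepos :: TVar (M.Map UUID PerRepoServerState) }

    mkServerState :: [(UUID, FilePath)] -> IO P2PHttpServerState
    mkServerState repos = P2PHttpServerState
        <$> newTVarIO (M.fromList [ (u, PerRepoServerState p) | (u, p) <- repos ])

    lookupRepo :: P2PHttpServerState -> UUID -> IO (Maybe PerRepoServerState)
    lookupRepo st u = M.lookup u <$> readTVarIO (serverRepos st)
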
Joey Hess
b8a717a617
reuse http url password for p2phttp url when on same host
When remote.name.annexUrl is an annex+http(s) url that uses the same
hostname as remote.name.url, which is itself an http(s) url, the two are
assumed to share a username and password.

This avoids unnecessary duplicate password prompts.
2024-11-19 15:27:26 -04:00
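
The host comparison can be sketched like this (illustrative only; the real
credential reuse goes through git-annex's own url credential handling):

    import Network.URI (parseURI, uriAuthority, uriRegName, uriScheme)

    -- Credentials entered for remote.name.url may be reused for
    -- remote.name.annexUrl when both are http(s)-style urls on the same host.
    sameHost :: String -> String -> Bool
    sameHost annexUrl httpUrl = case (parseURI annexUrl, parseURI httpUrl) of
        (Just a, Just b) -> httpish a && httpish b && case (host a, host b) of
            (Just x, Just y) -> x == y
            _ -> False
        _ -> False
      where
        host u = uriRegName <$> uriAuthority u
        httpish u = uriScheme u `elem`
            ["http:", "https:", "annex+http:", "annex+https:"]
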
Joey Hess
3510072883
update 2024-11-19 14:42:50 -04:00
Joey Hess
aaba82f3c8
comments 2024-11-19 14:26:47 -04:00
Joey Hess
6489342b71
tag INM7 2024-11-19 14:12:11 -04:00
Joey Hess
440b908732
comment 2024-11-19 13:12:43 -04:00
Joey Hess
8c11c06a31
skip git-remote-annex tests on windows
The NullSoftInstaller does not install git-remote-annex. For that
matter, it does not install git-annex-shell either. I don't quite know
how it would make sense to do so without hard links.
It could contain 3 copies of the same binary.
2024-11-19 13:01:12 -04:00
Joey Hess
73950a6a0c
split git-remote-annex test 2024-11-19 12:54:23 -04:00
Joey Hess
6b92e143cc
retitle OSX bug 2024-11-19 12:46:01 -04:00
Joey Hess
df29f29e0d
git-remote-annex: Fix cloning from a special remote on a crippled filesystem
Not initializing, and so deleting the bundles, only causes a little more
work on the first git fetch.
2024-11-19 12:43:51 -04:00
Joey Hess
1ff54a3b44
add git-remote-annex as a dep of the test target 2024-11-19 12:13:13 -04:00
yarikoptic
10d7091404 initial report on failing tests 2024-11-18 13:55:55 +00:00
Joey Hess
b7c55bd451
update 2024-11-15 16:36:43 -04:00
Joey Hess
fbf3a60366
close 2024-11-15 15:34:03 -04:00
Joey Hess
4142f7227c
comments 2024-11-15 15:31:49 -04:00
Joey Hess
51b2d6d8c5
avoid storing same filename repeatedly in versioned import from S3
Logically, this should make it need a lot less memory when files have
been changed many times. In my tests, it didn't seem to change memory
use at all. Unsure why; it is working. It's possible the Response is not
getting garbage collected due to pinning. But as far as I can see, all
parts of it that are retained get copied in a way that won't keep the
whole thing pinned in memory.
2024-11-15 15:27:42 -04:00
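
The underlying technique is string interning: keep one copy of each
filename seen so far, so a file that changed many times does not retain
one copy of its name per version. A generic sketch, not the actual import
code:

    import qualified Data.Map.Strict as M
    import Data.Text (Text)

    -- Return the previously stored copy of a filename if there is one,
    -- otherwise remember this copy. Threading the map through the listing
    -- loop means each distinct filename is retained only once.
    intern :: Text -> M.Map Text Text -> (Text, M.Map Text Text)
    intern name seen = case M.lookup name seen of
        Just shared -> (shared, seen)
        Nothing     -> (name, M.insert name name seen)
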
Joey Hess
dc5bf24823
use 80% less memory when importing from a versioned S3 bucket
Same idea as commit eb714c107b, but even
better, because a lot of the response is DeleteMarkers, which can be
garbage collected now.
2024-11-15 14:19:17 -04:00
Joey Hess
eb714c107b
use 20% less memory when listing unversioned S3 bucket 2024-11-15 13:24:13 -04:00
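
These two commits share the same general idea: as each page of listing
results is processed, keep only a small strict summary of each entry and
let the rest of the parsed response be garbage collected. A generic
sketch, not the aws library's actual types:

    import Data.Text (Text)

    -- Stand-in for the large per-object record that an S3 listing returns;
    -- the real response has many more fields.
    data ObjectInfo = ObjectInfo
        { objKey   :: Text
        , objSize  :: Integer
        , objEtag  :: Text
        , objOwner :: Text
        }

    -- Only the fields the importer actually needs. Strict fields mean
    -- that, once an Entry has been evaluated, it no longer references the
    -- original ObjectInfo, which can then be garbage collected.
    data Entry = Entry
        { entryKey  :: !Text
        , entrySize :: !Integer
        }

    shrink :: ObjectInfo -> Entry
    shrink o = Entry (objKey o) (objSize o)
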
lell
43a4adda6e 2024-11-15 09:27:28 +00:00
lell
35105a79ed 2024-11-15 09:26:56 +00:00
lell
505f1a7cd9 2024-11-15 09:25:32 +00:00
matrss
6b3920b168 Added a comment 2024-11-15 08:54:07 +00:00
lell
18d1c565be 2024-11-15 08:44:55 +00:00
Joey Hess
cea3dd500a
fixed 2024-11-14 16:16:56 -04:00
Joey Hess
4b87669ae2
S3 use last Key when there is no Marker element
Fix infinite loop and memory blowup when importing from an unversioned S3
bucket that is large enough to need pagination.

I don't think there actually ever will be a Marker element, since a
delimiter is not set.

Probably this code path was never tested with pagination! Also the aws
library's lack of any docs made it easy to mess up.

Versioned buckets seem to not have the same problem. The API docs for
ListObjectVersions say that NextKeyMarker will always be provided when
paginating.
2024-11-14 16:12:37 -04:00
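
Conceptually the fix is a fallback when choosing where the next list
request should start: use the Marker/NextMarker if the response included
one, otherwise use the Key of the last object on the page. A sketch with
plain values rather than the aws library's response types:

    import Data.Text (Text)

    -- Given whether the listing is truncated, the marker the response
    -- provided (if any), and the object keys on this page, decide where
    -- the next request should start. Nothing means the listing is done.
    nextListMarker :: Bool -> Maybe Text -> [Text] -> Maybe Text
    nextListMarker isTruncated marker keys
        | not isTruncated = Nothing
        | Just m <- marker = Just m
        -- Without a delimiter, S3 may omit the marker entirely; falling
        -- back to the last Key avoids re-requesting the same page forever.
        | otherwise = case reverse keys of
            (lastKey:_) -> Just lastKey
            []          -> Nothing
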
Joey Hess
b9b3e1257d
comments 2024-11-14 15:27:00 -04:00
Joey Hess
f724ff0388
comment 2024-11-14 14:20:05 -04:00
Joey Hess
c8ff4a1152
close 2024-11-14 13:55:55 -04:00
Joey Hess
1e17d0ee34
Merge branch 'checkbucketversioning' 2024-11-14 13:52:19 -04:00
Joey Hess
44da423e2e
S3: Send git-annex or other configured User-Agent.
--user-agent is the only way to configure it currently

(Needs aws-0.24.3)
2024-11-13 16:10:37 -04:00
Joey Hess
3f7953869d
fix 2024-11-13 16:02:55 -04:00
Joey Hess
55cd211215
response 2024-11-13 14:42:03 -04:00
Joey Hess
28428524e5
Merge branch 'master' of ssh://git-annex.branchable.com 2024-11-13 14:12:37 -04:00
Joey Hess
0a6ca3c401
comment 2024-11-13 14:12:19 -04:00
yarikoptic
653c5114b9 Added a comment 2024-11-13 18:11:58 +00:00
Joey Hess
0a94273ff8
Merge branch 'master' of ssh://git-annex.branchable.com 2024-11-13 14:09:51 -04:00
Joey Hess
b94221594b
add: When adding a dotfile as a non-large file, mention that it's a dotfile
This is to reduce user confusion when their annex.largefiles matches it,
or is not set.

Note that, when annex.dotfiles is set, but a dotfile is not matched by
annex.largefiles, the "non-large file" message will be displayed. That
makes sense because whether the file is a dotfile does not matter with that
configuration.

Also, this slightly optimised the annex.dotfiles path in passing,
by avoiding the slight slowdown caused by the check added in commit
876d5b6c6f in that case.
2024-11-13 14:09:24 -04:00
yarikoptic
1c8a5dc586 Added a comment 2024-11-13 18:08:59 +00:00
yarikoptic
593f992e9a Added a comment 2024-11-13 17:57:58 +00:00
Joey Hess
876d5b6c6f
add: Consistently treat files in a dotdir as dotfiles, even when run inside that dotdir
Assistant and smudge also updated.

This does add a small amount of extra work, getting the TopFilePath.
Not enough to be concerned about.

Also improve documentation to make clear that files inside dotdirs are
treated as dotfiles.

Sponsored-by: Eve on Patreon
2024-11-13 13:43:01 -04:00
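
The idea is that the dotfile check looks at the path relative to the top
of the repository (the TopFilePath), so a file inside a dotdir is
recognized no matter where the command is run from. A rough sketch of such
a predicate, not git-annex's actual definition:

    import System.FilePath (splitDirectories)

    -- A file counts as a dotfile when any component of its repository-top
    -- relative path starts with a dot (ignoring "." and "..").
    isDotFile :: FilePath -> Bool
    isDotFile = any dotComponent . splitDirectories
      where
        dotComponent "."  = False
        dotComponent ".." = False
        dotComponent ('.':_) = True
        dotComponent _ = False
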
Joey Hess
1f59dae0bd
fix link 2024-11-13 12:59:22 -04:00
Joey Hess
646d7e02cc
dotfile 2024-11-13 12:57:54 -04:00
datamanager
6710869043 Added a comment 2024-11-13 00:30:25 +00:00