Commit graph

39 commits

Author SHA1 Message Date
Joey Hess
960daf210b
run with noMessages
This avoids extraneous output from p2phttp, including, for example, progress
displays when transferring to proxied special remotes.
2024-07-29 13:35:08 -04:00
Joey Hess
fbbedae497
add --clusterjobs option and default to 1
The default of 1 is not ideal at all, but it avoids an accidental M*N
causing so much concurrency it becomes unusable.
2024-07-28 10:36:22 -04:00
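The --clusterjobs cap is, in effect, a bound on how many cluster nodes are
worked on at once. A minimal sketch of that idea, assuming a semaphore-based
bound; the function name and types below are made up, not git-annex's code:

    import Control.Concurrent.Async (forConcurrently_)
    import Control.Concurrent.QSem
    import Control.Exception (bracket_)

    -- Bound work across cluster nodes so that M client jobs times N nodes
    -- cannot all run at once; clusterjobs corresponds to --clusterjobs.
    forNodesBounded :: Int -> [node] -> (node -> IO ()) -> IO ()
    forNodesBounded clusterjobs nodes action = do
        sem <- newQSem clusterjobs
        forConcurrently_ nodes $ \n ->
            bracket_ (waitQSem sem) (signalQSem sem) (action n)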
Joey Hess
d1faa13d6a
implement proxy connection pool
removeOldestProxyConnectionPool will be inefficient the larger the pool
is. A better data structure could be more efficient. Eg, make each value
in the pool include the timestamp of its oldest element; then the oldest
value can be found and modified, rather than rebuilding the whole Map.

But, for pools of a few hundred items, this should be fine. It's O(n*n log n)
or so.

Also, when more than 1 connection with the same pool key exists,
it's efficient even for larger pools, since removeOldestProxyConnectionPool
is not needed.

The default of 1 idle connection could perhaps be larger... like the
number of jobs? On the other hand, it seems good to ramp the number of
connections up and down, which does happen. With 1, there is at most one
stale connection, which might cause a request to fail.
2024-07-26 17:03:31 -04:00
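The data structure improvement suggested above can be sketched roughly as
follows; the types are stand-ins, not git-annex's actual proxy connection
pool:

    import qualified Data.Map.Strict as M
    import Data.Time.Clock.POSIX (POSIXTime)

    -- Stand-in types; the real pool key and connection types differ.
    newtype PoolKey = PoolKey String deriving (Eq, Ord)
    data ProxyConnection = ProxyConnection

    -- Each value caches the timestamp of its oldest idle connection, so the
    -- oldest value can be found by scanning values, not every connection.
    data PoolEntry = PoolEntry
        { oldestTimestamp :: POSIXTime
        , idleConnections :: [(POSIXTime, ProxyConnection)]
        }

    type Pool = M.Map PoolKey PoolEntry

    -- Key holding the globally oldest idle connection, in one pass.
    oldestKey :: Pool -> Maybe PoolKey
    oldestKey pool
        | M.null pool = Nothing
        | otherwise = Just $ fst $ M.foldrWithKey older (M.findMin pool) pool
      where
        older k e acc@(_, e')
            | oldestTimestamp e < oldestTimestamp e' = (k, e)
            | otherwise = acc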
Joey Hess
3d14e2cf58
http server support for proxies, incomplete
Refactored git-annex-shell code so this can use checkCanProxy'.

At this point, all that remains is opening a proxy connection and then
using it.
2024-07-25 13:19:24 -04:00
Joey Hess
7bd616e169
Remote.Git retrieveKeyFile works with annex+http urls
This includes a bugfix to serveGet, which hung at the end.
2024-07-24 10:28:44 -04:00
Joey Hess
73ffb58456
p2phttp support https 2024-07-23 15:37:36 -04:00
Joey Hess
b7149e897b
add --bind option and listen to both ipv4 and ipv6 by default 2024-07-23 15:19:56 -04:00
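Assuming a warp-based server, listening on both IPv4 and IPv6 by default can
look roughly like this; the wrapper function and option plumbing are made up:

    import Data.Maybe (fromMaybe)
    import Data.String (fromString)
    import Network.Wai (Application)
    import Network.Wai.Handler.Warp (defaultSettings, runSettings, setHost, setPort)

    -- With no --bind option, warp's "*6" host preference accepts both
    -- IPv4 and IPv6 connections.
    runServer :: Maybe String -> Int -> Application -> IO ()
    runServer mbind port app = runSettings settings app
      where
        host = fromString (fromMaybe "*6" mbind)
        settings = setHost host (setPort port defaultSettings)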
Joey Hess
4e15b786ca
Remote.Git checkpresent works with annex+http urls. 2024-07-23 14:31:32 -04:00
Joey Hess
b0eed55d4f
factor out http server and client into own modules
To avoid a cycle when Remote.Git uses the client.
2024-07-23 14:12:38 -04:00
Joey Hess
6bbc4565e6
started wiring p2phttp into Remote.Git
but we have a cycle, ugh
2024-07-23 13:53:10 -04:00
Joey Hess
5c39652235
starting support for remote.name.annexUrl set to annex+http
In this case, Remote.Git should not use that url for all access to
the repository. It will only be used for annex operations, which is not
implemented yet.
2024-07-23 09:12:21 -04:00
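A tiny sketch of the url handling this implies (the function name and exact
parsing are assumptions): annex+http is only a marker scheme, stripped back
to plain http for the annex-specific requests.

    import Data.List (stripPrefix)

    -- Nothing means the url is not an annex+http annexUrl.
    annexUrlToHttpUrl :: String -> Maybe String
    annexUrlToHttpUrl u = ("http://" ++) <$> stripPrefix "annex+http://" u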
Joey Hess
2acde0152a
fix build 2024-07-22 21:19:20 -04:00
Joey Hess
06de2ad972
change default port to 9417
Port 80 would need root, which is not a good idea, so pick something that
might work by default.

9418 is the git protocol's port. 9419 is used by something else, but nothing
known uses 9417, so it's as good a default as any.
2024-07-22 20:52:17 -04:00
Joey Hess
7f4cff7ae9
locking over http basically working 2024-07-22 19:44:26 -04:00
Joey Hess
e979e85bff
make serveKeepLocked check auth just to be safe 2024-07-22 19:15:52 -04:00
Joey Hess
d5eaf0f567
improve clientKeepLocked 2024-07-22 16:56:44 -04:00
Joey Hess
48eb6671e4
improve clientGet types 2024-07-22 16:23:08 -04:00
Joey Hess
3069e28dd8
implemented servePutOffset and clientPutOffset
But, it's buggy: the server hangs without processing the VALIDITY,
and I can't seem to work out why. As far as I can see, storefile
is getting as far as running the validitycheck, which is supposed to
read the VALIDITY, but never does.

This is especially strange because what seems like the same protocol
doesn't hang when servePut runs it. This made me think that it needed
to use inAnnexWorker to be more like servePut, but that didn't help.

Another small problem with this is that it does create an empty file in
.git/annex/tmp/ for the key. Since this will usually be used in
combination with servePut, that doesn't seem worth worrying about much.
2024-07-22 15:04:10 -04:00
Joey Hess
b240a11b79
clientPut seeking to offset 2024-07-22 12:50:21 -04:00
Joey Hess
a01426b713
avoid padding in servePut
This means that when the client sends truncated data to indicate
invalidity, DATA is not passed the full expected data. That leaves the
P2P connection in a state where it cannot be reused. While connections
are not reused so far, they will be later when proxies are supported. So
the P2P connection has to be closed in this situation.
2024-07-22 12:30:30 -04:00
Joey Hess
4826a3745d
servePut and clientPut implementation
Made the data-length header required even for v0. This simplifies the
implementation, and doesn't preclude extra verification being done for
v0.

The connectionWaitVar is an ugly hack. In servePut, nothing waits
on the waitvar, and I could not find a good way to make anything wait on
it.
2024-07-22 10:27:44 -04:00
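A hedged servant sketch of a put endpoint with a required data-length header;
the route, header name, and result type are stand-ins rather than git-annex's
actual API:

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeOperators #-}
    import Data.ByteString (ByteString)
    import Servant
    import Servant.Types.SourceT (SourceIO)

    -- Requiring the data-length header even for v0 lets the server detect
    -- a truncated upload without relying on padding.
    type PutAPI =
        "put" :> Capture "key" String
              :> Header' '[Required, Strict] "X-DataLength" Integer
              :> StreamBody NoFraming OctetStream (SourceIO ByteString)
              :> Post '[JSON] NoContent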
Joey Hess
97a2d0e4fb
use worker pool in withLocalP2PConnections
This allows multiple clients to be handled at the same time.
2024-07-11 14:37:52 -04:00
Joey Hess
2228d56db3
serveGet invalidation 2024-07-11 11:42:32 -04:00
Joey Hess
74c6175795
fix serveGet early handle close
Needed that waitv after all..
2024-07-11 09:55:17 -04:00
Joey Hess
1e0f92a5a1
implemented serveGet and clientGet
Both are only at the bare proof-of-concept stage. Still need to deal with
signaling validity and invalidity, and checking it.

And there's a bad bug: after -JN*2 requests, another request hangs!

So, I think it's failing to free up the Annex worker at the end of the
request lifetime.

Perhaps I need to use this:

https://docs.servant.dev/en/stable/cookbook/managed-resource/ManagedResource.html
2024-07-10 16:06:39 -04:00
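The fix being hinted at is making sure the worker always goes back to the
pool when the request ends. A minimal sketch of the acquire/release shape,
with stand-in types rather than git-annex's actual worker pool:

    import Control.Concurrent.STM
    import Control.Exception (bracket)

    data AnnexWorker = AnnexWorker  -- stand-in for a running Annex state

    -- Take a worker from a TMVar-held pool and always hand it back, even
    -- if the handler throws. With streaming response bodies the release
    -- also has to cover the stream's lifetime, which is where the servant
    -- managed-resource approach linked above comes in.
    withAnnexWorker :: TMVar [AnnexWorker] -> (AnnexWorker -> IO a) -> IO a
    withAnnexWorker pool = bracket acquire release
      where
        acquire = atomically $ do
            ws <- takeTMVar pool
            case ws of
                [] -> retry
                (w:rest) -> putTMVar pool rest >> return w
        release w = atomically $ do
            ws <- takeTMVar pool
            putTMVar pool (w:ws)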
Joey Hess
f9b7ce7224
add Annex worker pool to P2PHttp
This will be needed for get and store, since those need to run Annex
actions.

withLocalP2PConnections will also probably use it.
2024-07-10 12:19:47 -04:00
Joey Hess
d4b9aea87b
implement gettimestamp 2024-07-10 10:23:10 -04:00
Joey Hess
7c588a5791
implement remove-before
The reason to use removeBeforeRemoteEndTime is twofold.

First, removeBefore sends two protocol commands. Currently, the HTTP
protocol runner only supports sending a single command per invocation.

Second, the http server gets a monotonic timestamp from the client, so
translating back to a POSIXTime would be annoying.

The timestamp flow with a proxy will be:

- client gets timestamp, which gets the monotonic timestamp from the
  proxied remote via the proxy. The timestamp is currently not
  proxied when there is a single proxy.
- client calls remove-before
- http server calls removeBeforeRemoteEndTime which sends REMOVE-BEFORE
  to the proxied remote.
2024-07-10 10:03:26 -04:00
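Roughly, the client side composes two single-command HTTP calls, since the
runner sends one command per invocation. A sketch with stand-in types; the
real REMOVE-BEFORE semantics are git-annex's and are not defined here:

    newtype MonotonicTimestamp = MonotonicTimestamp Integer
    newtype Key = Key String

    -- The two single-command HTTP calls are passed in, not invented here.
    clientRemoveBefore
        :: IO MonotonicTimestamp                   -- gettimestamp call
        -> (MonotonicTimestamp -> Key -> IO Bool)  -- remove-before call
        -> Key
        -> Integer                                 -- allowed window, seconds
        -> IO Bool
    clientRemoveBefore gettimestamp removebefore key window = do
        MonotonicTimestamp now <- gettimestamp
        removebefore (MonotonicTimestamp (now + window)) key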
Joey Hess
b8a26712c6
implement clientRemove
Tested removal.
2024-07-10 09:20:13 -04:00
Joey Hess
48f76cb3e8
implement serveRemove and send WWW-Authenticate header on auth failure 2024-07-10 09:13:01 -04:00
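In a servant handler, sending WWW-Authenticate on auth failure can look
roughly like this (the realm string is a made-up example):

    {-# LANGUAGE OverloadedStrings #-}
    import Control.Monad.Error.Class (throwError)
    import Servant (Handler, ServerError(..), err401)

    -- Reject the request with 401 and prompt the client to authenticate.
    authFailed :: Handler a
    authFailed = throwError err401
        { errHeaders = [("WWW-Authenticate", "Basic realm=\"git-annex\"")] }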
Joey Hess
97d0fc9b65
git-annex p2phttp options 2024-07-10 00:01:55 -04:00
Joey Hess
08371c3745
started on auth 2024-07-09 17:30:55 -04:00
Joey Hess
3d13521479
set up handles for p2phttp
Now it fully works.. for the first request. But then it gets stuck
waiting for the P2P protocol runner to shut down.
2024-07-09 13:50:42 -04:00
Joey Hess
edf8a3df2d
p2phttp is almost working for checkpresent
The server is fully running annex actions; only the P2PConnection is
wrong, as it currently uses stdio.
2024-07-09 13:37:55 -04:00
Joey Hess
0bdee626ad
thread in a state 2024-07-08 14:00:23 -04:00
Joey Hess
82d66ede5e
convert lockcontent api to http long polling
Websockets would work, but the problem with using them for this is that
each lockcontent call is a separate websocket connection. And that's an
actual TCP connection. One TCP connection per file dropped would be too
expensive. With http long polling, regular http pipelining can be used,
so it will reuse a TCP connection.

Unfortunately, at least with servant, bi-directional streams with long
polling don't result in true bidirectional full duplex communication.
Servant processes the whole client body stream before generating the server
body stream. I think it's entirely possible to do full bi-directional
communication over http, but it would need changes to servant.

And, there's no way for the client to tell if the server successfully
locked the content, since the server will keep processing the client
stream no matter what.

So, added a new api endpoint, keeplocked. lockcontent will lock the key
for 10 minutes with a retention lock, and then a call to keeplocked will
keep it locked for as long as needed. This does mean that there will
need to be a Map of locks by key, and I will probably want to add
some kind of lock identifier that lockcontent returns.
2024-07-08 12:57:46 -04:00
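A sketch of the lock bookkeeping this implies, with stand-in types (the real
lock handle and identifier are git-annex's to define):

    import qualified Data.Map.Strict as M
    import Control.Concurrent.STM

    newtype LockID = LockID Integer deriving (Eq, Ord)
    data RetentionLock = RetentionLock  -- stand-in for the held lock

    -- Locks taken by lockcontent, extended or dropped later by keeplocked.
    type LockMap = TVar (M.Map LockID RetentionLock)

    registerLock :: LockMap -> LockID -> RetentionLock -> IO ()
    registerLock locks lckid l =
        atomically $ modifyTVar' locks (M.insert lckid l)

    dropLock :: LockMap -> LockID -> IO ()
    dropLock locks lckid =
        atomically $ modifyTVar' locks (M.delete lckid)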
Joey Hess
522700d1c4
implemented servant-client support for websockets 2024-07-08 07:44:59 -04:00
Joey Hess
1dbb5ec70d
servant API type is complete 2024-07-07 12:59:12 -04:00
Joey Hess
86ce3bf1e4
started servant implementation of HTTP P2P protocol 2024-07-07 12:08:10 -04:00