removeOldestProxyConnectionPool will be inefficient the larger the pool
is. A better data structure could be more efficient. E.g., make each value
in the pool include the timestamp of its oldest element; then the oldest
value can be found and modified, rather than rebuilding the whole Map.
But, for pools of a few hundred items, this should be fine. It's O(n*n log n)
or so.
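Here's a rough sketch of that idea, with hypothetical types rather than
the actual pool code: each Map value caches the timestamp of its oldest
connection, so eviction only scans the entries and modifies one of them.

    import qualified Data.Map.Strict as M
    import Data.List (minimumBy, sortOn)
    import Data.Ord (comparing)
    import Data.Time.Clock.POSIX (POSIXTime)

    -- Hypothetical types; the real pool types differ.
    data Conn = Conn { connOpened :: POSIXTime }

    data PoolEntry = PoolEntry
        { oldestTimestamp :: POSIXTime  -- cached oldest connOpened in conns
        , conns :: [Conn]
        }

    type Pool key = M.Map key PoolEntry

    -- Find the entry with the oldest cached timestamp by scanning only
    -- the entries, and drop a single connection from it, rather than
    -- rebuilding the whole Map from all of the connections.
    removeOldest :: Ord key => Pool key -> Pool key
    removeOldest pool
        | M.null pool = pool
        | otherwise = M.update dropOldestConn oldestkey pool
      where
        oldestkey = fst $
            minimumBy (comparing (oldestTimestamp . snd)) (M.toList pool)
        dropOldestConn entry =
            case drop 1 (sortOn connOpened (conns entry)) of
                [] -> Nothing
                cs -> Just entry
                    { conns = cs
                    , oldestTimestamp = minimum (map connOpened cs)
                    }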
Also, when more than 1 connection with the same pool key exists,
it's efficient even for larger pools, since removeOldestProxyConnectionPool
is not needed.
The default of 1 idle connection could perhaps be larger... like the
number of jobs? On the other hand, it seems good to ramp the number of
connections up and down, which does happen. With 1, there is at most one
stale connection, which might cause a request to fail.
Refactored git-annex-shell code so this can use checkCanProxy'.
At this point all that remains is opening a proxy connection,
and using a proxy connection.
Port 80 would need root, which is not a good idea, so pick something that
might work by default. 9418 is the git protocol's port. 9419 is used by
something, but nothing known uses 9417, so it's as good a default as any.
But, it's buggy: the server hangs without processing the VALIDITY,
and I can't seem to work out why. As far as I can see, storefile
gets as far as running the validitycheck, which is supposed to
read the VALIDITY, but it never does.
This is especially strange because what seems like the same protocol
doesn't hang when servePut runs it. This made me think that it needed
to use inAnnexWorker to be more like servePut, but that didn't help.
Another small problem with this is that it does create an empty file
for the key in .git/annex/tmp/. Since this will usually be used in
combination with servePut, that doesn't seem worth worrying about much.
This means that when the client sends truncated data to indicate
invalidity, DATA is not passed the full expected data. That leaves the
P2P connection in a state where it cannot be reused. While connections
are not reused so far, they will be later when proxies are supported.
So, the P2P connection has to be closed in this situation.
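A sketch of the shape that handling could take, using made-up names
rather than the actual git-annex code: when fewer bytes arrive than the
DATA length announced, the connection is closed instead of being handed
back for reuse.

    import qualified Data.ByteString.Lazy as L

    -- Hypothetical connection type and pool operations.
    data P2PConnection = P2PConnection

    closeConnection :: P2PConnection -> IO ()
    closeConnection _ = return ()

    returnToPool :: P2PConnection -> IO ()
    returnToPool _ = return ()

    -- After receiving the body of a DATA message, either hand the
    -- connection back for reuse, or close it when fewer bytes arrived
    -- than the DATA length announced, since the protocol is then out
    -- of sync and the connection cannot be reused.
    finishData :: P2PConnection -> Integer -> L.ByteString -> IO Bool
    finishData conn expectedlen received
        | L.length received == fromIntegral expectedlen = do
            returnToPool conn
            return True
        | otherwise = do
            closeConnection conn
            return False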
Made the data-length header required even for v0. This simplifies the
implementation, and doesn't preclude extra verification being done for
v0.
The connectionWaitVar is an ugly hack. In servePut, nothing waits
on the waitvar, and I could not find a good way to make anything wait on
it.
Both are only at the bare proof of concept stage. Still need to deal with
signaling validity and invalidity, and checking it.
And there's a bad bug: after -JN*2 requests, another request hangs!
So, I think it's failing to free up the Annex worker at the end of the
request lifetime.
Perhaps I need to use this:
https://docs.servant.dev/en/stable/cookbook/managed-resource/ManagedResource.html
The reason to use removeBeforeRemoteEndTime is twofold.
First, removeBefore sends two protocol commands. Currently, the HTTP
protocol runner only supports sending a single command per invocation.
Second, the http server gets a monotonic timestamp from the client, so
translating back to a POSIXTime would be annoying.
The timestamp flow with a proxy will be:
- client gets timestamp, which gets the monotonic timestamp from the
proxied remote via the proxy. The timestamp is currently not
proxied when there is a single proxy.
- client calls remove-before
- http server calls removeBeforeRemoteEndTime which sends REMOVE-BEFORE
to the proxied remote.
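To illustrate why removeBeforeRemoteEndTime helps, here's a hedged sketch
with stand-in types and placeholder bodies, not the real functions:
removeBefore has to run two protocol commands, while
removeBeforeRemoteEndTime takes the remote end's monotonic timestamp and
is a single command.

    newtype MonotonicTimestamp = MonotonicTimestamp Integer

    newtype Key = Key String

    -- Hypothetical single-command runners; the HTTP protocol runner
    -- only supports one protocol command per invocation.
    getTimestamp :: IO MonotonicTimestamp
    getTimestamp = return (MonotonicTimestamp 0)  -- placeholder

    sendRemoveBefore :: MonotonicTimestamp -> Key -> IO ()
    sendRemoveBefore _ _ = return ()  -- placeholder

    -- Two protocol commands back to back, so this does not fit the
    -- single-command-per-invocation HTTP runner.
    removeBefore :: Key -> IO ()
    removeBefore key = do
        remotetime <- getTimestamp
        sendRemoveBefore remotetime key

    -- A single protocol command. The client supplies the remote end's
    -- monotonic timestamp, so the server never has to translate it
    -- back to a POSIXTime.
    removeBeforeRemoteEndTime :: MonotonicTimestamp -> Key -> IO ()
    removeBeforeRemoteEndTime = sendRemoveBefore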
Websockets would work, but the problem with using them for this is that
each lockcontent call is a separate websocket connection. And that's an
actual TCP connection. One TCP connection per file dropped would be too
expensive. With http long polling, regular http pipelining can be used,
so it will reuse a TCP connection.
Unfortunately, at least with servant, bi-directional streams with long
polling don't result in true bidirectional full duplex communication.
Servant processes the whole client body stream before generating the server
body stream. I think it's entirely possible to do full bi-directional
communication over http, but it would need changes to servant.
And, there's no way for the client to tell if the server successfully
locked the content, since the server will keep processing the client
stream no matter what.
So, added a new API endpoint, keeplocked. lockcontent will lock the key
for 10 minutes with a retention lock, and then a call to keeplocked will
keep it locked for as long as needed. This does mean that there will
need to be a Map of locks by key, and I will probably want to add
some kind of lock identifier that lockcontent returns.
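A rough sketch of what that bookkeeping could look like, with all names
hypothetical and not a final design: lockcontent registers the lock under
a fresh identifier in a shared Map, and keeplocked looks the identifier
up and refreshes or drops it.

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as M
    import Data.Time.Clock.POSIX (POSIXTime, getPOSIXTime)

    -- Hypothetical identifiers and lock state.
    newtype LockID = LockID Integer
        deriving (Eq, Ord, Show)

    newtype Key = Key String
        deriving (Eq, Ord, Show)

    data HeldLock = HeldLock
        { lockedKey :: Key
        , expires :: POSIXTime  -- when the retention lock lapses
        }

    type LockMap = TVar (M.Map LockID HeldLock)

    -- lockcontent: take the retention lock for 10 minutes and return
    -- an identifier for the client to use in later keeplocked calls.
    -- The counter is just a source of fresh identifiers.
    lockContent :: LockMap -> TVar Integer -> Key -> IO LockID
    lockContent locks counter key = do
        now <- getPOSIXTime
        atomically $ do
            n <- readTVar counter
            writeTVar counter (n + 1)
            let lckid = LockID n
            modifyTVar' locks (M.insert lckid (HeldLock key (now + 600)))
            return lckid

    -- keeplocked: refresh the retention lock for as long as the client
    -- keeps asking, and drop it once the client no longer needs it.
    keepLocked :: LockMap -> LockID -> Bool -> IO ()
    keepLocked locks lckid stillwanted = do
        now <- getPOSIXTime
        atomically $ modifyTVar' locks $
            if stillwanted
                then M.adjust (\l -> l { expires = now + 600 }) lckid
                else M.delete lckid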