This means that when the client sends truncated data to indicate
invalidity, DATA is not passed the full expected data. That leaves the
P2P connection in a state where it cannot be reused. While connections
are not reused so far, they will be once proxies are supported. So the
P2P connection has to be closed in this situation.
Always truncate instead. The padding risked a consumer not noticing
that the content was bad, and ending up with a file that was corrupted
in a novel way, with the padding "X" at the end. A truncated file is
better.
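Roughly, the idea is sketched below. The names (receiveData,
P2PConnection) are hypothetical stand-ins, not the actual git-annex
types: a short read marks the content invalid, and since the protocol
stream is then out of sync, the connection is closed rather than
reused.

    import qualified Data.ByteString.Lazy as L
    import System.IO (Handle, hClose)

    -- Hypothetical connection type; the real one carries more state.
    data P2PConnection = P2PConnection { connHandle :: Handle }

    -- Read the DATA body. Fewer bytes than announced signals invalid
    -- content, and leaves the protocol stream out of sync, so the
    -- connection cannot be reused and is closed.
    receiveData :: P2PConnection -> Integer -> IO (Maybe L.ByteString)
    receiveData conn expectedlen = do
        b <- L.hGet (connHandle conn) (fromIntegral expectedlen)
        if L.length b == fromIntegral expectedlen
            then return (Just b)
            else do
                hClose (connHandle conn)
                return Nothing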
Made the data-length header required even for v0. This simplifies the
implementation, and doesn't preclude extra verification being done for
v0.
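For example, requiring the header at the type level in a servant route
could look like this (the route and header name here are assumptions
for illustration):

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeOperators #-}
    import Servant.API

    -- With the Required modifier, servant itself rejects requests
    -- that omit the header, so even v0 handlers always receive a
    -- data-length to check against what actually arrives.
    type PutV0 = "v0" :> "put"
        :> Header' '[Required, Strict] "X-git-annex-data-length" Integer
        :> Post '[JSON] Bool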
The connectionWaitVar is an ugly hack. In servePut, nothing waits
on the waitvar, and I could not find a good way to make anything wait on
it.
Base64 can include '/', and with UUIDs and keys both used in routes,
the encoding needs to avoid that character. Use base64url everywhere
in the HTTP protocol, for consistency.
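With the base64-bytestring package, that is just a matter of using the
URL-safe alphabet; the wrapper names below are illustrative:

    import qualified Data.ByteString as B
    import qualified Data.ByteString.Base64.URL as B64U

    -- base64url swaps '+' and '/' for '-' and '_', so an encoded
    -- UUID or key can never contain the '/' that separates route
    -- components.
    encodeForRoute :: B.ByteString -> B.ByteString
    encodeForRoute = B64U.encode

    decodeFromRoute :: B.ByteString -> Either String B.ByteString
    decodeFromRoute = B64U.decode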
This came down to SendBytes waiting on the waitv. Nothing ever filled
it.
Only Annex.Proxy needs the waitv, and it handles filling it. So make it
optional.
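Something along these lines, with hypothetical names (the real
SendBytes also carries the data itself):

    import Control.Concurrent.MVar

    -- The waitv becomes optional, so only callers like Annex.Proxy
    -- that will actually fill it need to supply one.
    data SendBytes = SendBytes
        { sendLen :: Integer
        , sendWaitv :: Maybe (MVar ())
        }

    -- Block only when a waitv was supplied; otherwise proceed
    -- immediately.
    waitIfNeeded :: SendBytes -> IO ()
    waitIfNeeded sb = maybe (return ()) takeMVar (sendWaitv sb)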
Both are only at the bare proof of concept stage. Still need to deal
with signaling validity and invalidity, and checking it.
And there's a bad bug: after -JN*2 requests, the next request hangs!
So I think it's failing to free up the Annex worker at the end of the
request lifetime.
Perhaps I need to use this:
https://docs.servant.dev/en/stable/cookbook/managed-resource/ManagedResource.html
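If that is the problem, the usual fix is to bracket the worker
acquisition so the release runs even when a request is interrupted. A
rough sketch, with a hypothetical pool interface:

    import Control.Exception (bracket)

    -- Hypothetical worker pool interface.
    data WorkerPool = WorkerPool
    data Worker = Worker

    acquireWorker :: WorkerPool -> IO Worker
    acquireWorker _ = return Worker  -- stub

    releaseWorker :: WorkerPool -> Worker -> IO ()
    releaseWorker _ _ = return ()  -- stub

    -- bracket guarantees the worker goes back to the pool at the
    -- end of the request lifetime, even if the handler throws or
    -- the client disconnects mid-request.
    withAnnexWorker :: WorkerPool -> (Worker -> IO a) -> IO a
    withAnnexWorker pool = bracket (acquireWorker pool) (releaseWorker pool)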
The reason to use removeBeforeRemoteEndTime is twofold.
First, removeBefore sends two protocol commands, and currently the
HTTP protocol runner only supports sending a single command per
invocation.
Second, the HTTP server gets a monotonic timestamp from the client, so
translating back to a POSIXTime would be annoying.
The timestamp flow with a proxy will be:
- client gets timestamp, which gets the monotonic timestamp from the
proxied remote via the proxy. The timestamp is currently not
proxied when there is a single proxy.
- client calls remove-before
- http server calls removeBeforeRemoteEndTime which sends REMOVE-BEFORE
to the proxied remote.
This is less efficient, but it guarantees that the timestamps the
client sends are local timestamps, which turns out to be necessary
for the HTTP P2P protocol server.
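A rough sketch of that two-invocation flow from the client's point of
view; the types and signatures are hypothetical stand-ins, not the
actual git-annex API:

    -- Hypothetical types for illustration.
    newtype MonotonicTimestamp = MonotonicTimestamp Integer
    data Remote = Remote
    data Key = Key String

    -- First invocation: a single timestamp command, returning the
    -- (possibly proxied) remote's own monotonic timestamp.
    getTimestamp :: Remote -> IO MonotonicTimestamp
    getTimestamp _ = return (MonotonicTimestamp 0)  -- stub

    -- Second invocation: a single REMOVE-BEFORE command, relayed
    -- with the timestamp unchanged, so it stays in terms of the
    -- remote's clock and never needs converting to a POSIXTime.
    removeBeforeRemoteEndTime :: MonotonicTimestamp -> Remote -> Key -> IO Bool
    removeBeforeRemoteEndTime _ _ _ = return True  -- stub

    dropKey :: Remote -> Key -> IO Bool
    dropKey remote key = do
        ts <- getTimestamp remote
        removeBeforeRemoteEndTime ts remote key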
Websockets would work, but the problem with using them for this is
that each lockcontent call would be a separate websocket connection,
and that's an actual TCP connection. One TCP connection per file
dropped would be too expensive. With HTTP long polling, regular HTTP
pipelining can be used, so a TCP connection gets reused.
Unfortunately, at least with servant, bidirectional streams with long
polling don't result in true bidirectional full duplex communication.
Servant processes the whole client body stream before generating the
server body stream. I think it's entirely possible to do full
bidirectional communication over HTTP, but it would need changes to
servant.
And there's no way for the client to tell if the server successfully
locked the content, since the server will keep processing the client
stream no matter what.
So, added a new API endpoint, keeplocked. lockcontent will lock the
key for 10 minutes with a retention lock, and then a call to
keeplocked will keep it locked for as long as needed. This does mean
that there will need to be a Map of locks by key, and I will probably
want to add some kind of lock identifier that lockcontent returns.
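A minimal sketch of that bookkeeping, assuming hypothetical names
(LockId allocation and expiry enforcement are left out):

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as M
    import Data.Time.Clock (UTCTime, addUTCTime, getCurrentTime)

    newtype Key = Key String deriving (Eq, Ord)

    -- Identifier returned by lockcontent, so a later keeplocked call
    -- can name the specific lock it is refreshing.
    newtype LockId = LockId Integer deriving (Eq, Ord)

    -- Retention locks currently held, with each one's expiry time.
    type LockMap = TVar (M.Map (Key, LockId) UTCTime)

    -- lockcontent: hold the retention lock for 10 minutes.
    lockContent :: LockMap -> Key -> LockId -> IO ()
    lockContent locks key lockid = do
        now <- getCurrentTime
        atomically $ modifyTVar' locks
            (M.insert (key, lockid) (addUTCTime 600 now))

    -- keeplocked: push the expiry forward for as long as the client
    -- keeps calling.
    keepLocked :: LockMap -> Key -> LockId -> IO ()
    keepLocked locks key lockid = do
        now <- getCurrentTime
        atomically $ modifyTVar' locks
            (M.adjust (const (addUTCTime 600 now)) (key, lockid))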