Websockets would work, but the problem with using them for this is that
each lockcontent call would be a separate websocket connection, and so a
separate TCP connection. One TCP connection per file dropped would be
too expensive. With http long polling, regular http pipelining can be
used, so a TCP connection gets reused.
Unfortunately, at least with servant, bi-directional streams with long
polling don't result in true full-duplex communication. Servant
processes the whole client body stream before generating the server
body stream. I think it's entirely possible to do full bi-directional
communication over http, but it would need changes to servant.
And there's no way for the client to tell if the server successfully
locked the content, since the server will keep processing the client
stream no matter what.
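For illustration, here is roughly the shape such a streaming endpoint
would take in servant. This is only a sketch of the idea, with made-up
route and type names, not git-annex's actual API:

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeOperators #-}

    import Servant.API

    -- A hypothetical bidirectional lockcontent endpoint: the client
    -- streams lines in, and the server streams lines back. With
    -- servant as it is, the handler only produces its output SourceIO
    -- after the client body stream has been consumed, so this does
    -- not give full duplex communication.
    type LockContent =
        "lockcontent"
            :> StreamBody NewlineFraming PlainText (SourceIO String)
            :> StreamPost NewlineFraming PlainText (SourceIO String)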
So, added a new API endpoint, keeplocked. lockcontent will lock the key
for 10 minutes with a retention lock, and then a call to keeplocked will
keep it locked for as long as needed. This does mean that there will
need to be a Map of locks by key, and I will probably want to add
some kind of lock identifier that lockcontent returns.
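Something along these lines is what I have in mind for that Map; the
names and types below are hypothetical, just to sketch the idea:

    import qualified Data.Map.Strict as M
    import Control.Concurrent.STM

    newtype LockID = LockID Integer
        deriving (Eq, Ord, Show)

    newtype Key = Key String
        deriving (Eq, Ord, Show)

    -- Locks currently held by the server, indexed by the identifier
    -- that lockcontent returned to the client.
    data LockTable = LockTable
        { nextLockID :: TVar Integer
        , activeLocks :: TVar (M.Map LockID Key)
        }

    -- lockcontent would call this after taking the retention lock,
    -- and return the resulting LockID to the client.
    registerLock :: LockTable -> Key -> STM LockID
    registerLock tbl k = do
        n <- readTVar (nextLockID tbl)
        writeTVar (nextLockID tbl) (n + 1)
        let lid = LockID n
        modifyTVar' (activeLocks tbl) (M.insert lid k)
        return lid

    -- keeplocked would call this to check that the lock is still held
    -- before extending its retention.
    stillLocked :: LockTable -> LockID -> STM (Maybe Key)
    stillLocked tbl lid = M.lookup lid <$> readTVar (activeLocks tbl)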
Added v2-v0 endpoints. These are tedious, but will be needed in order to
use the HTTP protocol to proxy to repositories with older git-annex,
where git-annex-shell will be speaking an older version of the protocol.
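To give an idea of how tedious: each protocol version ends up with its
own copy of each route in the servant API type, along the lines of this
sketch (route and type names here are illustrative only):

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeOperators #-}

    import Servant.API

    -- One endpoint shown per protocol version; the real API repeats
    -- this pattern for every endpoint.
    type CheckPresent =
             "git-annex" :> "v3" :> "checkpresent" :> Capture "key" String :> Get '[JSON] Bool
        :<|> "git-annex" :> "v2" :> "checkpresent" :> Capture "key" String :> Get '[JSON] Bool
        :<|> "git-annex" :> "v1" :> "checkpresent" :> Capture "key" String :> Get '[JSON] Bool
        :<|> "git-annex" :> "v0" :> "checkpresent" :> Capture "key" String :> Get '[JSON] Bool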
Changed GET to use 422 when the content is not present, since 404 has to
be reserved for detecting when a protocol version is not supported.
Managed to avoid netstrings. Actually, using netstrings while streaming
a lazy ByteString turns out to be very difficult. So instead, there is a
header that specifies the expected amount of data, and the sender can
then just send a different amount of data when it needs to indicate
INVALID.
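A minimal sketch of the receiving end of that scheme; the header name
mentioned in the comment is made up for illustration:

    import qualified Data.ByteString.Lazy as L

    data Validity = Valid | Invalid
        deriving (Show, Eq)

    -- expectedLen is the length announced up front (in a header such
    -- as a hypothetical X-git-annex-data-length). A sender that finds
    -- the content is bad part way through can simply stop early, and
    -- the resulting length mismatch marks the content INVALID.
    checkValidity :: Integer -> L.ByteString -> Validity
    checkValidity expectedLen body
        | L.length body == fromIntegral expectedLen = Valid
        | otherwise = Invalid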
Also improved the interface for GET of a key.
This will be easy to implement with servant. It's also very efficient,
and fairly future-proof. Eg, could add another frame with other data.
This does make the protocol a bit harder to use, but netstrings
probably take about 5 minutes to implement? Let's see...
    import Text.Read

    toNetString :: String -> String
    toNetString s = show (length s) ++ ":" ++ s ++ ","

    -- Parse one netstring from the front of the input, returning its
    -- content along with the rest of the input.
    nextNetString :: String -> Maybe (String, String)
    nextNetString s = case break (== ':') s of
        ([], _) -> Nothing
        (sn, rest) -> do
            n <- readMaybe sn
            let (v, rest') = splitAt n (drop 1 rest)
            return (v, drop 1 rest')
Ok, well, that took about 10 minutes ;-)
Only implemented server side, not used client side yet.
And not yet implemented for proxies/clusters, for which there's a build
warning about unhandled cases.
This is P2P protocol version 3. It will probably be the only change in
that version.
Added a dependency on clock to access a monotonic clock.
On i386-ancient, that is at version 0.2.0.0.
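For reference, reading the monotonic clock with that package looks like
this (a minimal usage sketch, not the actual git-annex code):

    import System.Clock (Clock(Monotonic), TimeSpec(sec), getTime)

    -- Measure how many whole seconds an action takes, using the
    -- monotonic clock so wall clock adjustments cannot skew it.
    timedSeconds :: IO a -> IO (a, Integer)
    timedSeconds action = do
        start <- getTime Monotonic
        r <- action
        end <- getTime Monotonic
        return (r, fromIntegral (sec end - sec start))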