[[!toc ]]
Draft 1 of a complete [[P2P_protocol]] over HTTP.
## base64 encoding of keys, uuids, and filenames
A git-annex key can contain text in any encoding. So can a filename,
and it's even possible, though unlikely, that the UUID of a git-annex
repository might.
But this protocol requires that UTF-8 be used throughout, except
where bodies use `Content-Type: application/octet-stream`.
So, all git-annex keys, uuids, and filenames in this protocol are
base64 encoded.
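As a minimal illustration (in Python, with a hypothetical key and a
hypothetical non-UTF-8 filename), the raw bytes are simply base64
encoded before being placed in the protocol, and decoded on the other
side:

    import base64

    # Hypothetical values; a git-annex key, uuid, or filename is treated
    # as raw bytes and base64 encoded before being sent in this protocol.
    key = b"SHA1--foo"
    filename = "caf\xe9.jpg".encode("latin-1")  # not valid UTF-8

    encoded_key = base64.b64encode(key).decode("ascii")
    encoded_filename = base64.b64encode(filename).decode("ascii")

    # Decoding recovers the original bytes unchanged.
    assert base64.b64decode(encoded_key) == key
    assert base64.b64decode(encoded_filename) == filename
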
## authentication
A git-annex protocol endpoint can optionally operate in readonly mode without
authentication.
Authentication is required to make any changes.
Authentication is done using HTTP basic auth.
Users are advised to authenticate only over HTTPS, since otherwise
HTTP basic auth (as well as git-annex data) can be snooped. But some users
may want git-annex to use HTTP, eg on a LAN.
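As a sketch of the client side, HTTP basic auth can be attached to a
session like this (the server URL, credentials, and use of the Python
`requests` library are assumptions of the example; a readonly server
will answer without the auth, but a modifying request such as `remove`
needs it):

    import requests

    session = requests.Session()
    session.auth = ("alice", "secret")  # HTTP basic auth; only use over HTTPS

    # A modifying request; without valid credentials the server refuses it.
    resp = session.post(
        "https://example.com/git-annex/v3/remove",
        params={"key": "SHA1--foo",
                "clientuuid": "79a5a1f4-07e8-11ef-873d-97f93ca91925",
                "serveruuid": "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6"},
    )
    print(resp.json())  # eg {"removed": true}
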
## protocol version
Each request in the protocol is versioned. The versions correspond
to P2P protocol versions.
The protocol version comes before the request. Eg: `/git-annex/v3/put`
If the server does not support a particular protocol version, the
request will fail with a 404, and the client should fall back to an earlier
protocol version.
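A sketch of that fallback, with a hypothetical base URL: try the newest
version the client knows and step down on each 404.

    import requests

    BASE = "https://example.com/git-annex"  # hypothetical endpoint

    def versioned_post(endpoint, **params):
        # Try protocol versions from newest to oldest. A 404 means the
        # server does not support that version, so fall back to an older one.
        for version in ("v3", "v2", "v1", "v0"):
            resp = requests.post(f"{BASE}/{version}/{endpoint}", params=params)
            if resp.status_code != 404:
                return resp
        raise RuntimeError("server supports no known protocol version")
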
## common request parameters
Every request supports these common parameters, and unless documented
otherwise, a request requires both of them to be included.
* `clientuuid`
The value is the UUID of the git-annex repository of the client.
* `serveruuid`
The value is the UUID of the git-annex repository that the server
should serve.
Any request may also optionally include these parameters:
* `bypass`
The value is the UUID of a cluster gateway, which the server should avoid
connecting to when serving a cluster. This is the equivalent of the
`BYPASS` message in the [[P2P_Protocol]].
This parameter can be given multiple times to list several cluster
gateway UUIDs, as shown in the sketch below.
This parameter is only available for v2 and above.
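As an illustration of how the common and optional parameters end up in
the query string (the gateway UUIDs here are hypothetical; note that
`bypass` is simply repeated):

    from urllib.parse import urlencode

    params = [
        ("clientuuid", "79a5a1f4-07e8-11ef-873d-97f93ca91925"),
        ("serveruuid", "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6"),
        # bypass may be given multiple times, once per cluster gateway
        # UUID that the server should avoid connecting to:
        ("bypass", "5c87f2b8-07e9-11ef-9b8c-0f696bf6ecf6"),
        ("bypass", "63a1449c-07e9-11ef-873d-97f93ca91925"),
    ]
    query = urlencode(params)
    # clientuuid=...&serveruuid=...&bypass=...&bypass=...
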
[Internally, git-annex can use these common parameters, plus the protocol
version, to create a P2P session. The P2P session is driven through
the AUTH, VERSION, and BYPASS messages, leaving the session ready to
service requests.]
## requests
### GET /git-annex/key/$key
This is a simple, unversioned interface to get a key from the server.
It is not part of the P2P protocol per se, but is provided to let
clients other than git-annex easily download the content of keys from the
http server.
When the key is not present on the server, this returns a 404 Not Found.
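A sketch of what a client other than git-annex might do with this
endpoint (the server URL and destination filename are hypothetical):

    import requests

    # Plain download of a key's content; no version and no parameters needed.
    resp = requests.get("https://example.com/git-annex/key/SHA1--foo",
                        stream=True)
    if resp.status_code == 404:
        print("key not present on the server")
    else:
        with open("foo", "wb") as f:
            for chunk in resp.iter_content(chunk_size=65536):
                f.write(chunk)
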
### GET /git-annex/v3/key/$key
Get the content of a key from the server.
This is designed so it can be used both by a peer in the P2P protocol,
and by a regular HTTP client that just wants to download a file.
Example:
> GET /git-annex/v3/key/SHA1--foo?associatedfile=bar&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< X-git-annex-data-length: 3
< Content-Type: application/octet-stream
<
< foo
The key to get is the part of the url after "/git-annex/vN/key/"
and before any url parameters.
All parameters are optional, including the common parameters, and these:
* `associatedfile`
The name of a file in the git repository, for informational purposes
only.
* `offset`
Number of bytes to skip sending from the beginning of the file.
Request headers are currently ignored, so eg Range requests are
not supported. (This would be possible to implement, up to a point.)
The body of the request is empty.
The server's response will have a `Content-Type` header of
`application/octet-stream`.
The server's response will have an `X-git-annex-data-length`
header that indicates the number of bytes of content that are expected to
be sent. Note that there is no Content-Length header.
The body of the response is the content of the key.
If the length of the body differs from what the
X-git-annex-data-length header indicated, then the data is invalid and
should not be used. This can happen when eg, the data was being sent from
an unlocked annexed file, which got modified while it was being sent.
When the content is not present, the server will respond with
422 Unprocessable Content.
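Putting the above together, a client-side sketch (hypothetical server
URL and destination file) that downloads a key and uses the
`X-git-annex-data-length` header to detect invalid data:

    import requests

    params = {"clientuuid": "79a5a1f4-07e8-11ef-873d-97f93ca91925",
              "serveruuid": "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6",
              "associatedfile": "bar"}
    resp = requests.get("https://example.com/git-annex/v3/key/SHA1--foo",
                        params=params, stream=True)

    if resp.status_code == 422:
        print("content not present on the server")
    else:
        expected = int(resp.headers["X-git-annex-data-length"])
        received = 0
        with open("bar.tmp", "wb") as f:
            for chunk in resp.iter_content(chunk_size=65536):
                f.write(chunk)
                received += len(chunk)
        if received != expected:
            # The body length disagrees with the header, so the data is
            # invalid (eg an unlocked file changed while being sent) and
            # must not be used.
            print("invalid data, discarding")
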
### GET /git-annex/v2/key/$key
Identical to v3.
### GET /git-annex/v1/key/$key
Identical to v3.
### GET /git-annex/v0/key/$key
Same as v3, except the X-git-annex-data-length header is not used.
Additional checking client-side will be required to validate the data.
### POST /git-annex/v3/checkpresent
Checks if a key is currently present on the server.
Example:
> POST /git-annex/v3/checkpresent?key=SHA1--foo&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< {"present": true}
There is one required additional parameter, `key`.
The body of the request is empty.
The server responds with a JSON object with a "present" field that is true
if the key is present, or false if it is not present.
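A sketch of a checkpresent call and how a client might read the
response (hypothetical server URL):

    import requests

    resp = requests.post(
        "https://example.com/git-annex/v3/checkpresent",
        params={"key": "SHA1--foo",
                "clientuuid": "79a5a1f4-07e8-11ef-873d-97f93ca91925",
                "serveruuid": "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6"},
    )
    if resp.json()["present"]:
        print("the server has the content of this key")
    else:
        print("the server does not have it")
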
### POST /git-annex/v2/checkpresent
Identical to v3.
### POST /git-annex/v1/checkpresent
Identical to v3.
### POST /git-annex/v0/checkpresent
Identical to v3.
### POST /git-annex/v3/lockcontent
Locks the content of a key on the server, preventing it from being removed.
Example:
> POST /git-annex/v3/lockcontent?key=SHA1--foo&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< {"locked": true}
There is one required additional parameter, `key`.
The server will return `{"locked": true}` if it was able to lock the key,
or `{"locked": false}` if it was not.
The key will remain locked for 10 minutes. But, usually `keeplocked`
is used to control the lifetime of the lock. (See below.)
### POST /git-annex/v2/lockcontent
Identical to v3.
### POST /git-annex/v1/lockcontent
Identical to v3.
### POST /git-annex/v0/lockcontent
Identical to v3.
### POST /git-annex/v3/keeplocked
Controls the lifetime of a lock on a key that was earlier obtained
with `lockcontent`.
Example:
> POST /git-annex/v3/keeplocked?key=SHA1--foo&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
> Connection: Keep-Alive
> Keep-Alive: timeout=1200
[some time later]
> {"unlock": true}
< {"locked": false}
There is one required additional parameter, `key`.
This uses long polling. So it's important to use
Connection and Keep-Alive headers.
This keeps an active lock from expiring until the client sends
`{"unlock": true}`, and then it immediately unlocks it.
The client can send `{"unlock": false}` any number of times first.
This has no effect, but may be useful to keep the connection alive.
This must be called within ten minutes of `lockcontent`, otherwise
the lock will have already expired when this runs. Note that the response
does not indicate whether the lock expired; it always returns
`{"locked": false}`.
If the connection is closed before the client sends `{"unlock": true}`,
or even if the web server gets shut down, the content will remain
locked for 10 minutes from the time it was first locked.
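A sketch of the whole lockcontent/keeplocked flow (the server URL is
hypothetical, and the sketch assumes the server accepts a chunked
request body, which is how the `requests` library streams a generator):

    import json
    import time
    import requests

    BASE = "https://example.com/git-annex/v3"  # hypothetical
    params = {"key": "SHA1--foo",
              "clientuuid": "79a5a1f4-07e8-11ef-873d-97f93ca91925",
              "serveruuid": "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6"}

    # Take the lock; it lasts 10 minutes unless keeplocked extends it.
    locked = requests.post(f"{BASE}/lockcontent", params=params).json()["locked"]

    if locked:
        def body():
            # Keep the connection, and so the lock, alive while other work
            # happens; this placeholder just waits and sends no-op lines.
            for _ in range(3):
                time.sleep(30)
                yield json.dumps({"unlock": False}).encode() + b"\n"
            # Finally release the lock.
            yield json.dumps({"unlock": True}).encode() + b"\n"

        resp = requests.post(f"{BASE}/keeplocked", params=params,
                             headers={"Connection": "Keep-Alive",
                                      "Keep-Alive": "timeout=1200"},
                             data=body())
        print(resp.json())  # always {"locked": false} once unlocked
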
### POST /git-annex/v2/keeplocked
Identical to v3.
### POST /git-annex/v1/keeplocked
Identical to v3.
### POST /git-annex/v0/keeplocked
Identical to v3.
### POST /git-annex/v3/remove
Remove a key's content from the server.
Example:
> POST /git-annex/v3/remove?key=SHA1--foo&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< {"removed": true}
There is one required additional parameter, `key`.
The body of the request is empty.
The server responds with a JSON object with a "removed" field that is true
if the key was removed (or was not present on the server),
or false if the key was not able to be removed.
The JSON object can have an additional field "plusuuids" that is a list of
UUIDs of other repositories that the content was removed from.
If the server does not allow removing the key due to a policy
(eg due to being read-only or append-only), it will respond with a JSON
object with an "error" field that has an error message as its value.
### POST /git-annex/v2/remove
Identical to v3.
### POST /git-annex/v1/remove
Same as v3, except the JSON will not include "plusuuids".
### POST /git-annex/v0/remove
Identical to v1.
### POST /git-annex/v3/remove-before
Remove a key's content from the server, but only before a specified time.
Example:
> POST /git-annex/v3/remove-before?timestamp=4949292929&key=SHA1--foo&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< {"removed": true}
This is the same as the `remove` request, but with an additional parameter,
`timestamp`.
If the server's monotonic clock is past the specified timestamp, the
removal will fail and the server will respond with: `{"removed": false}`
This is used to avoid removing content after a point in
time where it is no longer locked in other repositories.
### POST /git-annex/v3/gettimestamp
Gets the current timestamp from the server.
Example:
> POST /git-annex/v3/gettimestamp?clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< {"timestamp": 59459392}
The body of the request is empty.
The server responds with a JSON object with a "timestamp" field that has the
current value of its monotonic clock, as a number of seconds.
Important: If multiple servers are serving this protocol for the same
repository, they MUST all use the same monotonic clock.
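A sketch of combining `gettimestamp` with `remove-before` (hypothetical
server URL, and the 600 second window is an arbitrary stand-in for
however long the content is known to remain locked elsewhere):

    import requests

    BASE = "https://example.com/git-annex/v3"  # hypothetical
    common = {"clientuuid": "79a5a1f4-07e8-11ef-873d-97f93ca91925",
              "serveruuid": "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6"}

    # Ask the server for its monotonic clock value, in seconds.
    now = requests.post(f"{BASE}/gettimestamp", params=common).json()["timestamp"]

    # Only allow the removal while the content is still locked elsewhere.
    resp = requests.post(f"{BASE}/remove-before",
                         params={"key": "SHA1--foo",
                                 "timestamp": now + 600, **common})
    print(resp.json())  # {"removed": true} or {"removed": false}
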
### POST /git-annex/v3/put
Store content on the server.
Example:
> POST /git-annex/v3/put?key=SHA1--foo&associatedfile=bar&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
> Content-Type: application/octet-stream
> X-git-annex-data-length: 3
>
> foo
< {"stored": true}
There is one required additional parameter, `key`.
There are also these optional parameters:
* `associatedfile`
The name of a file in the git repository, for informational purposes
only.
* `offset`
Number of bytes that have been omitted from the beginning of the file.
Usually this will be determined by making a `putoffset` request.
The `Content-Type` header should be `application/octet-stream`.
The `X-git-annex-data-length` header must be included. It indicates the number
of bytes of content that are expected to be sent.
Note that there is no need to send a Content-Length header.
If the length of the body differs from what the
X-git-annex-data-length header indicated, then the data is invalid and
should not be used. This can happen when eg, the data was being sent from
an unlocked annexed file, which got modified while it was being sent.
The server responds with a JSON object with a field "stored"
that is true if it received the data and stored the
content.
The JSON object can have an additional field "plusuuids" that is a list of
UUIDs of other repositories that the content was stored to.
If the server does not allow storing the key due to a policy
(eg being read-only or append-only), because the data was invalid,
or because it ran out of disk space, it will respond with a
JSON object with an "error" field that has an error message as its value.
### POST /git-annex/v2/put
Identical to v3.
### POST /git-annex/v1/put
Same as v3, except the JSON will not include "plusuuids".
### POST /git-annex/v0/put
Same as v1, except there is no X-git-annex-data-length header.
Additional checking client-side will be required to validate the data.
### POST /git-annex/v3/putoffset
Asks the server what `offset` can be used in a `put` of a key.
This should usually be used right before sending a `put` request.
The offset may not be valid after some point in time, which could result in
the `put` request failing.
Example:
> POST /git-annex/v3/putoffset?key=SHA1--foo&clientuuid=79a5a1f4-07e8-11ef-873d-97f93ca91925&serveruuid=ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6 HTTP/1.1
< {"offset": 10}
There is one required additional parameter, `key`.
The body of the request is empty.
The server responds with a JSON object with an "offset" field that
is the largest allowable offset.
If the server already has the content of the key, it will respond with a
JSON object with an "alreadyhave" field that is set to true. This JSON
object may also have a field "plusuuids" that lists
the UUIDs of other repositories where the content is stored, in addition to
the serveruuid.
If the server does not allow storing the key due to a policy
(eg due to being read-only or append-only), it will respond with a JSON
object with an "error" field that has an error message as its value.
[Implementation note: This will be implemented by sending `PUT` and
returning the `PUT-FROM` offset. To avoid leaving the P2P protocol stuck
part way through a `PUT`, a synthetic empty `DATA` followed by `INVALID`
will be used to get the P2P protocol back into a state where it will accept
any request.]
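A sketch of resuming an interrupted transfer with `putoffset` followed
by a `put` at that offset (hypothetical server URL; "bar" stands in for
a local file holding the key's content):

    import os
    import requests

    BASE = "https://example.com/git-annex/v3"  # hypothetical
    params = {"key": "SHA1--foo",
              "clientuuid": "79a5a1f4-07e8-11ef-873d-97f93ca91925",
              "serveruuid": "ecf6d4ca-07e8-11ef-8990-9b8c1f696bf6"}

    answer = requests.post(f"{BASE}/putoffset", params=params).json()
    if answer.get("alreadyhave"):
        print("server already has the content")
    elif "error" in answer:
        print("server refused:", answer["error"])
    else:
        offset = answer["offset"]
        size = os.path.getsize("bar")
        with open("bar", "rb") as f:
            f.seek(offset)
            resp = requests.post(
                f"{BASE}/put",
                params={"offset": offset, **params},
                headers={"Content-Type": "application/octet-stream",
                         "X-git-annex-data-length": str(size - offset)},
                data=f,  # streams the remaining bytes from the file
            )
        print(resp.json())
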
### POST /git-annex/v2/putoffset
Identical to v3.
### POST /git-annex/v1/putoffset
Same as v3, except the JSON will not include "plusuuids".
## parts of P2P protocol that are not supported over HTTP
`NOTIFYCHANGE` is not supported, but it would be possible to extend
this HTTP protocol to support it.
`CONNECT` is not supported, and due to its bi-directional message passing
nature, it cannot easily be done over HTTP (it would need websockets).
It should not be necessary anyway, because the git repository itself can be
accessed over HTTP.