{- notification broadcaster
 -
 - This is used to allow clients to block until there is a new notification
 - that some thing occurred. It does not communicate what the change is,
 - it only provides blocking reads to wait on notifications.
 -
 - Multiple clients are supported. Each has a unique id.
 -
 - Copyright 2012 Joey Hess <joey@kitenet.net>
 -
 - Licensed under the GNU GPL version 3 or higher.
 -}
module Utility.NotificationBroadcaster (
	NotificationBroadcaster,
	NotificationHandle,
	NotificationId,
	newNotificationBroadcaster,
	newNotificationHandle,
	notificationHandleToId,
	notificationHandleFromId,
	sendNotification,
	waitNotification,
	checkNotification,
) where
import Common
import Control.Concurrent.STM
{- One TMVar per client; a client's TMVar is empty when no notification
 - is pending, and full when a notification has been sent but not yet
 - seen by that client. The list TMVar is never empty, so never blocks. -}
type NotificationBroadcaster = TMVar [TMVar ()]

newtype NotificationId = NotificationId Int
	deriving (Read, Show, Eq, Ord)

{- Handle given out to an individual client. -}
data NotificationHandle = NotificationHandle NotificationBroadcaster NotificationId

newNotificationBroadcaster :: IO NotificationBroadcaster
newNotificationBroadcaster = atomically $ newTMVar []
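The comment above can be seen in miniature with plain STM primitives: the outer list TMVar is kept permanently full, so `readTMVar` on it never blocks, while each client's own TMVar starts empty and signals "no notification pending" via the non-blocking `tryTakeTMVar`. A self-contained, illustrative sketch (using only the `stm` package that ships with GHC; it does not reference this module's own names):

```haskell
import Control.Concurrent.STM

main :: IO ()
main = do
  -- The broadcaster keeps the outer list TMVar always full, so readTMVar
  -- returns immediately even when no clients are registered yet.
  b <- atomically $ newTMVar ([] :: [TMVar ()])
  clients <- atomically $ readTMVar b
  print (length clients)  -- prints 0, without blocking
  -- A client's own TMVar starts empty; tryTakeTMVar reports that no
  -- notification is pending, without blocking the way takeTMVar would.
  s <- atomically (newEmptyTMVar :: STM (TMVar ()))
  pending <- atomically $ tryTakeTMVar s
  print pending  -- prints Nothing
```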
{- Allocates a notification handle for a client to use.
 -
 - An immediate notification can be forced the first time waitNotification
 - is called on the handle. This is useful in cases where a notification
 - may be sent while the new handle is being constructed. Normally,
 - such a notification would be missed. Forcing causes extra work,
 - but ensures such notifications get seen.
 -}
newNotificationHandle :: Bool -> NotificationBroadcaster -> IO NotificationHandle
newNotificationHandle force b = NotificationHandle
	<$> pure b
	<*> addclient
  where
	addclient = atomically $ do
		s <- if force
			then newTMVar ()
			else newEmptyTMVar
		l <- takeTMVar b
		putTMVar b $ l ++ [s]
		return $ NotificationId $ length l
{- Extracts the identifier from a notification handle.
 - This can be used to, e.g., pass the identifier through to a WebApp. -}
notificationHandleToId :: NotificationHandle -> NotificationId
notificationHandleToId (NotificationHandle _ i) = i

notificationHandleFromId :: NotificationBroadcaster -> NotificationId -> NotificationHandle
notificationHandleFromId = NotificationHandle
{- Sends a notification to all clients. -}
sendNotification :: NotificationBroadcaster -> IO ()
sendNotification b = do
	l <- atomically $ readTMVar b
	mapM_ notify l
  where
	notify s = atomically $
		whenM (isEmptyTMVar s) $
			putTMVar s ()
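A consequence of the `isEmptyTMVar` guard in `notify` is that sends coalesce: any number of notifications delivered before a client reads collapse into a single pending one. A self-contained sketch (the `notify` here is a hypothetical stand-in for the guard above, rewritten with plain `when` so it runs without this module's `whenM`):

```haskell
import Control.Concurrent.STM
import Control.Monad (when)

-- Fill the client's TMVar only if no notification is already pending,
-- mirroring the whenM (isEmptyTMVar s) guard in sendNotification.
notify :: TMVar () -> IO ()
notify s = atomically $ do
  e <- isEmptyTMVar s
  when e $ putTMVar s ()

main :: IO ()
main = do
  s <- atomically newEmptyTMVar
  notify s
  notify s  -- second send coalesces: the TMVar is already full
  atomically $ takeTMVar s
  more <- atomically $ tryTakeTMVar s
  print more  -- prints Nothing: two sends yielded one notification
```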
{- Used by a client to block until a new notification is available since
 - the last time it tried. -}
waitNotification :: NotificationHandle -> IO ()
waitNotification (NotificationHandle b (NotificationId i)) = do
	l <- atomically $ readTMVar b
	atomically $ takeTMVar (l !! i)
{- Used by a client to check if there has been a new notification since the
 - last time it checked, without blocking. -}
checkNotification :: NotificationHandle -> IO Bool
checkNotification (NotificationHandle b (NotificationId i)) = do
	l <- atomically $ readTMVar b
	maybe False (const True) <$> atomically (tryTakeTMVar (l !! i))
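Putting the pieces together, here is a runnable end-to-end sketch of the broadcaster pattern. It redefines a miniature of the API inline (the names `newBroadcaster`, `addClient`, `send`, and `check` are hypothetical stand-ins for this module's functions) so the example depends only on the `stm` package bundled with GHC. It also demonstrates the `force` semantics of newNotificationHandle: a forced client starts with a notification already pending.

```haskell
import Control.Concurrent.STM
import Control.Monad (when)

type Broadcaster = TMVar [TMVar ()]

newBroadcaster :: IO Broadcaster
newBroadcaster = atomically $ newTMVar []

-- Register a client; with force=True its TMVar starts full, so it
-- observes an immediate notification on its first check or wait.
addClient :: Bool -> Broadcaster -> IO Int
addClient force b = atomically $ do
  s <- if force then newTMVar () else newEmptyTMVar
  l <- takeTMVar b
  putTMVar b (l ++ [s])
  return (length l)

-- Notify every client whose TMVar is not already full.
send :: Broadcaster -> IO ()
send b = do
  l <- atomically $ readTMVar b
  mapM_ (\s -> atomically $ do
    e <- isEmptyTMVar s
    when e $ putTMVar s ()) l

-- Non-blocking check, like checkNotification.
check :: Broadcaster -> Int -> IO Bool
check b i = do
  l <- atomically $ readTMVar b
  maybe False (const True) <$> atomically (tryTakeTMVar (l !! i))

main :: IO ()
main = do
  b <- newBroadcaster
  forced <- addClient True b   -- sees an immediate notification
  plain  <- addClient False b  -- starts with nothing pending
  r1 <- check b forced
  r2 <- check b plain
  send b
  r3 <- check b plain
  print (r1, r2, r3)  -- prints (True,False,True)
```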