implement proxy connection pool
removeOldestProxyConnectionPool will be inefficient the larger the pool is. A better data structure could be more efficient: e.g., make each value in the pool include the timestamp of its oldest element, so the oldest value can be found and modified rather than rebuilding the whole Map. But for pools of a few hundred items this should be fine; it's O(n * n log n) or so. A sketch of the idea follows below.

Also, when more than one connection with the same pool key exists, it's efficient even for larger pools, since removeOldestProxyConnectionPool is not needed.

The default of 1 idle connection could perhaps be larger, like the number of jobs? On the other hand, it seems good to ramp the number of connections up and down, which does happen. With 1, there is at most one stale connection, which might cause a request to fail.
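For context, here is a minimal Haskell sketch of the structure the message describes, assuming a pool keyed by connection parameters whose values are idle connections tagged with the time they went idle. The names (`PoolKey`, `Pool`, `removeOldest`) are illustrative, not the actual git-annex identifiers; eviction here simply scans the whole pool for the oldest connection, the approach the message calls adequate for a few hundred items.

```haskell
module ProxyPoolSketch where

import qualified Data.Map.Strict as M
import Data.List (minimumBy)
import Data.Ord (comparing)
import Data.Time.Clock (UTCTime)

-- Hypothetical stand-ins; the real git-annex types and names differ.
type PoolKey = String
data Connection = Connection deriving Show

-- Pool of idle connections, each tagged with the time it went idle.
type Pool = M.Map PoolKey [(UTCTime, Connection)]

-- Drop the single oldest idle connection from the whole pool.
-- Scanning every entry to find the global minimum is the cost the
-- commit message accepts as fine for pools of a few hundred items.
removeOldest :: Pool -> Pool
removeOldest pool =
    case [ (k, minimum (map fst conns))
         | (k, conns) <- M.toList pool
         , not (null conns)
         ] of
        [] -> pool
        candidates ->
            let (oldestKey, oldestTime) = minimumBy (comparing snd) candidates
            in M.update (dropAt oldestTime) oldestKey pool
  where
    -- Remove the first connection carrying the oldest timestamp,
    -- deleting the key entirely when its list becomes empty.
    dropAt t conns = case break ((== t) . fst) conns of
        (before, _matched : after)
            | null before && null after -> Nothing
            | otherwise -> Just (before ++ after)
        _ -> Just conns
```

Caching each value's oldest timestamp, as the message suggests, would let eviction pick the victim key directly instead of scanning every list.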
This commit is contained in:
parent fb43b7ea3f
commit d1faa13d6a
5 changed files with 114 additions and 38 deletions
@@ -32,9 +32,6 @@ Planned schedule of work:
 
 * test http server proxying with special remotes
 
-* http server proxying needs to reuse connections to special remotes,
-  keeping a pool of open ones. Question: How many to keep in the pool?
-
 * Make http server support clusters.
 
 * Support proxying to git remotes using annex+http urls.