doc/: s/amoung/among/gi

Quoth ye olde [Wiktionary](http://en.wiktionary.org/wiki/amoung)

Archaic spelling of among.
Richard Hartmann 2013-12-18 22:09:18 +01:00
parent 6be4a204bb
commit b11d88dd17
10 changed files with 11 additions and 11 deletions


```diff
@@ -22,7 +22,7 @@ A related problem though is the size of the tree objects git needs to
 commit. Having the logs in a separate branch doesn't help with that.
 As more keys are added, the tree object size will increase, and git will
 take longer and longer to commit, and use more space. One way to deal with
-this is simply by splitting the logs amoung subdirectories. Git then can
+this is simply by splitting the logs among subdirectories. Git then can
 reuse trees for most directories. (Check: Does it still have to build
 dup trees in memory?)
```
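
The hunk above comes from a design note about keeping per-key location logs in a git branch. As a minimal sketch of the idea of splitting the logs among subdirectories, the snippet below spreads log files over two directory levels derived from a hash of the key, so that editing one log only rewrites a couple of small trees and git can reuse the tree objects for everything else. The 3+3 character MD5 prefix split is an assumption made for this example, not necessarily the exact layout git-annex settled on.

```python
# Illustrative sketch: place each key's .log file under two levels of
# hash-derived subdirectories, so a single log edit only touches its two
# small parent trees and all other tree objects can be reused by git.
# The 3+3 character MD5 split is an assumption for this example.
import hashlib


def log_path(key: str) -> str:
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return f"{digest[:3]}/{digest[3:6]}/{key}.log"


if __name__ == "__main__":
    print(log_path("SHA256E-s1048576--deadbeef.iso"))
```
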
```diff
@@ -54,7 +54,7 @@ Let's use one branch per uuid, named git-annex/$UUID.
 - (BTW, UUIDs probably don't compress well, and this reduces the bloat of having
 them repeated lots of times in the tree.)
 - Per UUID branches mean that if it wants to find a file's location
-amoung configured remotes, it can examine only their branches, if
+among configured remotes, it can examine only their branches, if
 desired.
 - It's important that the per-repo branches propigate beyond immediate
 remotes. If there is a central bare repo, that means push --all. Without
```

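As an aside on the lookup idea in that hunk: with one branch per repository uuid, finding which configured remotes have a file could reduce to probing each remote's git-annex/$UUID branch for the key's log. Below is a hedged sketch of that probe; the `git-annex/$UUID` branch name comes from the design text, while the `<key>.log` path inside each branch is an assumption for illustration (the proposed design does not pin down that layout).

```python
# Sketch of "examine only their branches": probe each configured remote's
# per-uuid branch for a location log of the key. The <key>.log path inside
# the branch is an assumption for this example.
import subprocess


def remotes_with_key(key: str, remote_uuids):
    found = []
    for uuid in remote_uuids:
        probe = subprocess.run(
            ["git", "cat-file", "-e", f"git-annex/{uuid}:{key}.log"],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if probe.returncode == 0:   # the branch records a log for this key
            found.append(uuid)
    return found
```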

```diff
@@ -4,7 +4,7 @@
 subject="comment 3"
 date="2013-07-17T19:59:50Z"
 content="""
-Note that git-annex now uses locks to communicate amoung multiple processes, so it's now possible to eg run two `git annex get` processes, and one will skip over the file the other is downloading and go on to the next file, and so on.
+Note that git-annex now uses locks to communicate among multiple processes, so it's now possible to eg run two `git annex get` processes, and one will skip over the file the other is downloading and go on to the next file, and so on.
 This is an especially nice speedup when downloading encrypted data, since the decryption of one file will tend to happen while the other process is downloading the next file (assuming files of approximately the same size, and that decryption takes approxiately as long as downloading).
```

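The comment above describes lock-based coordination between concurrent downloads: each process locks the file it is working on and skips anything already locked. Here is a minimal sketch of that skip-if-locked pattern, assuming a hypothetical per-file lock directory, plain basenames, and a placeholder `download()` function; it is not git-annex's actual code.

```python
# Sketch of the skip-if-locked pattern: each worker takes a non-blocking
# exclusive lock per file and moves on when another process already holds it.
# The lock directory layout and download() stub are assumptions for this
# example; file names are assumed to be plain basenames.
import fcntl
import os


def download(name: str) -> None:
    print(f"downloading {name} ...")   # stand-in for the real transfer


def get_all(files, lockdir="/tmp/annex-example-locks"):
    os.makedirs(lockdir, exist_ok=True)
    for name in files:
        lock = open(os.path.join(lockdir, f"{name}.lock"), "w")
        try:
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            lock.close()        # another process is on this file; skip it
            continue
        try:
            download(name)
        finally:
            lock.close()        # closing the descriptor releases the flock
```

Running two such processes over the same file list interleaves the work, much like two concurrent `git annex get` runs described in the comment.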

```diff
@@ -24,7 +24,7 @@ Simple, when performing various git annex command over ssh, in particular a mult
 Slightly more elaborate design for using ssh connection caching:
 * Per-uuid ssh socket in `.git/annex/ssh/user@host.socket`
-* Can be shared amoung concurrent git-annex processes as well as ssh
+* Can be shared among concurrent git-annex processes as well as ssh
 invocations inside the current git-annex.
 * Also a lock file, `.git/annex/ssh/user@host.lock`.
 Open and take shared lock before running ssh; store lock in lock pool.
```
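
The design in that hunk boils down to: every caller of ssh to a given host reuses one control socket, and holds a shared lock on the companion lock file for the duration of the call so the cached connection is not torn down underneath it. A rough Python sketch under those assumptions follows; the paths match the design text, but the exact ssh options and error handling are illustrative, not git-annex's actual invocation.

```python
# Rough sketch of the per-host control socket plus shared-lock scheme.
# Paths follow the design text above; ssh options and error handling are
# assumptions made for illustration only.
import fcntl
import os
import subprocess


def cached_ssh(userhost: str, command: str, sshdir: str = ".git/annex/ssh"):
    os.makedirs(sshdir, exist_ok=True)
    socket_path = os.path.join(sshdir, f"{userhost}.socket")
    lock = open(os.path.join(sshdir, f"{userhost}.lock"), "w")
    # Shared lock: any number of concurrent users may reuse the cached
    # connection; stopping the master would take an exclusive lock instead.
    fcntl.flock(lock, fcntl.LOCK_SH)
    try:
        return subprocess.run(
            ["ssh",
             "-o", "ControlMaster=auto",
             "-o", "ControlPersist=yes",
             "-S", socket_path,
             userhost, command],
            check=True)
    finally:
        fcntl.flock(lock, fcntl.LOCK_UN)
        lock.close()
```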