avoid unnecessary keys db writes; doubled speed!
When running e.g. git-annex get, for each file it has to read from and write to the keys database. But it reads exclusively from one table and writes to a different table, so it is not necessary to flush the write to the database before reading.

Instead of writing the database once per file, it now buffers up to 1000 changes before writing.

Benchmarking getting 1000 small files from a local origin, git-annex get now takes 13.62s, down from 22.41s! git-annex drop now takes 9.07s, down from 18.63s! Wowowowowowowow!

(It would perhaps have been better if there were separate databases for the two tables; at least that would have avoided this complexity. Ah well, this is better than splitting the table in an annex.version upgrade.)

Sponsored-by: Dartmouth College's Datalad project
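The buffering idea described above can be sketched as follows. This is a minimal illustration, not git-annex's actual database queue: the `flushSize` constant, the `IORef`-based buffer, and the `queueWrite` helper are all hypothetical names invented for this sketch; the real code lives in git-annex's database layer.

```haskell
import Control.Monad (when)
import Data.IORef

-- Illustrative threshold: buffer this many changes before writing them out.
flushSize :: Int
flushSize = 1000

-- Queue one write; only when the buffer reaches flushSize is the flush
-- action (the actual database write) run, on the whole batch at once.
-- (Single-threaded sketch; a real implementation would need locking.)
queueWrite :: IORef [String] -> ([String] -> IO ()) -> String -> IO ()
queueWrite buf flush w = do
	ws <- atomicModifyIORef' buf (\old -> (w : old, w : old))
	when (length ws >= flushSize) $ do
		writeIORef buf []
		flush (reverse ws)

main :: IO ()
main = do
	buf <- newIORef []
	flushes <- newIORef (0 :: Int)
	let flush _ = modifyIORef' flushes (+ 1)
	-- 2500 queued writes with a threshold of 1000 cause only 2 flushes;
	-- the remaining 500 changes stay buffered until a final explicit flush.
	mapM_ (queueWrite buf flush . show) [1 .. 2500 :: Int]
	n <- readIORef flushes
	putStrLn ("flushes: " ++ show n)
```

Because reads come from a different table than the one being written, reads need not force a flush of this buffer first, which is what eliminates the per-file write.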
This commit is contained in:
parent ba7ecbc6a9
commit 6fbd337e34
7 changed files with 117 additions and 59 deletions
@@ -53,15 +53,6 @@ whenAnnexed a file = ifAnnexed file (a file) (return Nothing)
 ifAnnexed :: RawFilePath -> (Key -> Annex a) -> Annex a -> Annex a
 ifAnnexed file yes no = maybe no yes =<< lookupKey file
 
-{- Find all annexed files and update the keys database for them.
- -
- - Normally the keys database is updated incrementally when it's being
- - opened, and changes are noticed. Calling this explicitly allows
- - running the update at an earlier point.
- -
- - All that needs to be done is to open the database,
- - that will result in Database.Keys.reconcileStaged
- - running, and doing the work.
- -}
+{- Find all annexed files and update the keys database for them. -}
 scanAnnexedFiles :: Annex ()
-scanAnnexedFiles = Database.Keys.runWriter (const noop)
+scanAnnexedFiles = Database.Keys.updateDatabase