Commit graph

285 commits

Author SHA1 Message Date
Joey Hess
c35a9047d3
improve data types for sqlite
This is a non-backwards compatible change, so not suitable for merging
w/o an annex.version bump and transition code. Not yet tested.

This improves performance of git-annex benchmark --databases
across the board by 10-25%, since eg Key roundtrips as a ByteString.

(serializeKey' produces a lazy ByteString, so there is still a
copy involved in converting it to a strict ByteString. It may be faster
to switch to using bytestring-strict-builder.)

FilePath and Key are both stored as blobs. This avoids mojibake in some
situations. It would be possible to use varchar instead, if persistent
could avoid converting that to Text, but it seems there is no good
way to do so. See doc/todo/sqlite_database_improvements.mdwn

Eliminated some ugly artifacts of using Read/Show serialization;
constructors and quoted strings are no longer stored in sqlite.

Renamed SRef to SSha to reflect that it is only ever a git sha,
not a ref name. Since it is limited to the characters in a sha,
it is not affected by mojibake, so still uses String.
2019-10-29 17:05:36 -04:00
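A minimal sketch of the blob approach, assuming a serialized key is a
strict ByteString; SKey and the instance bodies here are illustrative,
not git-annex's actual code:

import qualified Data.ByteString as S
import qualified Data.Text as T
import Database.Persist
import Database.Persist.Sql

-- Stored in sqlite as a blob rather than varchar/Text, so no encoding
-- conversion (and no mojibake) happens on the way in or out.
newtype SKey = SKey S.ByteString

instance PersistField SKey where
    toPersistValue (SKey b) = PersistByteString b
    fromPersistValue (PersistByteString b) = Right (SKey b)
    fromPersistValue v = Left (T.pack ("unexpected value: " ++ show v))

instance PersistFieldSql SKey where
    sqlType _ = SqlBlob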
Joey Hess
e1b21a0491
benchmark: Add --databases to benchmark sqlite databases
Rescued from commit 11d6e2e260, which removed
db benchmarks in favor of benchmarking arbitrary git-annex commands. That
is nice and general, but microbenchmarks are useful too.
2019-10-29 17:05:10 -04:00
Joey Hess
25f912de5b
benchmark: Add --databases to benchmark sqlite databases
Rescued from commit 11d6e2e260, which removed
db benchmarks in favor of benchmarking arbitrary git-annex commands. That
is nice and general, but microbenchmarks are useful too.
2019-10-29 16:59:27 -04:00
Joey Hess
0f7fd008d4
fix sql syntax 2019-10-24 11:57:17 -04:00
Joey Hess
098afe144e
display sqlite error message when it crashes 2019-10-24 11:50:55 -04:00
Joey Hess
94efc400e9
horrible implementation of isInodeKnown
The only good thing about it is it does not require a major version bump
to improve the database. That will need to happen at some point though.

Potentially very very slow in a large repository.

Ugly use of raw sql.
2019-10-23 14:37:29 -04:00
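A hedged sketch of what an isInodeKnown built on raw sql can look like;
the table and column names are made up here, and the whole-table lookup
is what makes it potentially so slow:

{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text as T
import Database.Persist.Sql

isInodeKnown :: T.Text -> SqlPersistM Bool
isInodeKnown cache = do
    r <- rawSql
        "SELECT COUNT(*) FROM inode_caches WHERE cache = ?"
        [PersistText cache]
    return $ case r of
        (Single n:_) -> (n :: Int) > 0
        _ -> False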
Joey Hess
904b175707
Fix build with persistent-2.10.
Added an additional constraint that persistent needs.
This also builds with persistent-2.9.2 without needing any cpp.
2019-10-17 11:58:31 -04:00
Joey Hess
9828f45d85
add RemoteStateHandle
This solves the problem of sameas remotes trampling over per-remote
state. Used for:

* per-remote state, of course
* per-remote metadata, also of course
* per-remote content identifiers, because two remote implementations
  could in theory generate the same content identifier for two different
  pieces of content

While chunk logs are per-remote data, they don't use this, because the
number and size of chunks stored is a common property across sameas
remotes.

External special remote had a complication, where it was theoretically
possible for a remote to send SETSTATE or GETSTATE during INITREMOTE or
EXPORTSUPPORTED. Since the uuid of the remote is typically generated in
Remote.setup, it would only be possible to pass a Maybe
RemoteStateHandle into it, and it would otherwise have to construct its
own. Rather than go that route, I decided to send an ERROR in this case.
It seems unlikely that any existing external special remote will be
affected. They would have to make up a git-annex key, and set state for
some reason during INITREMOTE. I can imagine such a hack, but it doesn't
seem worth complicating the code in such an ugly way to support it.

Unfortunately, both TestRemote and Annex.Import needed the Remote
to have a new field added that holds its RemoteStateHandle.
2019-10-14 13:51:42 -04:00
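The core of the change is small; a sketch, with UUID as a stand-in for
git-annex's real uuid type:

type UUID = String -- stand-in

-- Per-remote state is keyed by this handle rather than by the remote's
-- own uuid directly; a sameas remote gets a handle containing the uuid
-- of the remote it is an alias of, so the two share state.
newtype RemoteStateHandle = RemoteStateHandle UUID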
Joey Hess
2e6fd5de71
fix flipped diffUTCTime
fsck --incremental/--more: Fix bug that prevented the incremental fsck
information from being updated every 5 minutes as it was supposed to be; it
was only updated after 1000 files were checked, which may be more files
than it is possible to fsck in a given fsck time window.

Thanks to Peter Simons for help with analysis of this bug.

Auditing for other cases of the same mistake, the keys db also had it
backwards. This seems unlikely to really have been a problem;
it would need associated files updates etc to be coming in slowly for some
reason and then be interrupted to cause any problem.

IIRC the design of the keys db assumes that any interrupted
operation will be restarted, and so it can lose any buffered database
updates safely.
2019-10-03 09:54:19 -04:00
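For reference, the direction that matters; a self-contained sketch,
where 300 stands in for the 5 minute commit interval:

import Data.Time.Clock

-- diffUTCTime a b is a - b, so the newer time has to come first.
-- With the arguments flipped, the difference is negative and a
-- "has enough time passed" check never fires.
intervalElapsed :: UTCTime -> UTCTime -> Bool
intervalElapsed now lastcommit =
    diffUTCTime now lastcommit >= 300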
Joey Hess
9628ae2e67
Close sqlite databases more robustly.
Had a report of close throwing ErrorBusy on CIFS.

Retrying up to 16 seconds is a balance between hopefully waiting long
enough for the problem to clear up and waiting so long that git-annex seems
to hang.

The new dependency is free; persistent depends on unliftio-core.
2019-09-26 12:25:21 -04:00
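The retry can be sketched like this; a real version would match only
ErrorBusy rather than any exception, and the delays here are chosen to
total roughly 16 seconds:

import Control.Concurrent (threadDelay)
import Control.Exception

robustly :: IO a -> IO a
robustly a = go (take 8 (iterate (* 2) 62500))
  where
    go [] = a -- out of retries; let the exception propagate
    go (d:ds) = a `catch` \(SomeException _) -> do
        threadDelay d -- delay is in microseconds
        go ds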
Joey Hess
3f0eef4baa
v7 for all repositories
* Default to v7 for new repositories.
* Automatically upgrade v5 repositories to v7.
2019-08-30 14:09:14 -04:00
Joey Hess
018b5b8173
Support building with socks-0.6 and persistent-template-2.7
persistent-template now needs UndecidableInstances.

socks changed defaultSocksConf to take a SockAddr.
2019-07-30 12:50:48 -04:00
Joey Hess
9a5ddda511
remove many old version ifdefs
Drop support for building with ghc older than 8.4.4, and with versions of
several haskell libraries older than those that will be included in Debian 10.

The only remaining version ifdefs in the entire code base are now a couple
for aws!

This commit should only be merged after the Debian 10 release.
And perhaps it will need to wait longer than that; it would complicate
backporting new versions of git-annex to Debian 9 (stretch), which has
been actively happening as recently as this year.

This commit was sponsored by Ilya Shlyakhter.
2019-07-05 15:09:37 -04:00
Joey Hess
6babb2c73f
remove wrong uniqueness constraint from ContentIdentifier db
Fix bug that caused importing from a special remote to repeatedly download
unchanged files when multiple files in the remote have the same content.

Unfortunately, there's really no good way to remove a uniqueness constraint
from a sqlite database. The best that can be done is to make a new table
and copy the data over. But that would require using persistent's
migrations or raw sql, and I don't want to do either.

Instead, a sledgehammer approach: Renamed .git/annex/cid to
.git/annex/cids. When the new database doesn't exist, it will be populated
from the git-annex branch.

Nothing deletes the old database. Don't want to delete it out from under
some long-running git-annex process that might be using it. It could
eventually be deleted. But this is such a new feature, probably few repos
have the database in any case.
2019-04-09 19:58:24 -04:00
Joey Hess
40ecf58d4b
update licenses from GPL to AGPL
This does not change the overall license of the git-annex program, which
was already AGPL due to a number of sources files being AGPL already.

Legally speaking, I'm adding a new license under which these files are
now available; I already released their current contents under the GPL
license. Now they're dual licensed GPL and AGPL. However, I intend
for all my future changes to these files to only be released under the
AGPL license, and I won't be tracking the dual licensing status, so I'm
simply changing the license statement to say it's AGPL.

(In some cases, others wrote parts of the code of a file and released it
under the GPL; but in all cases I have contributed a significant portion
of the code in each file and it's that code that is getting the AGPL
license; the GPL license of other contributors allows combining with
AGPL code.)
2019-03-13 15:48:14 -04:00
Joey Hess
e3a704224f
fix export db locking deadlock 2019-03-07 16:06:02 -04:00
Joey Hess
9a72785307
fixes to export db lookup when accessing importtree=yes
Now in a fresh clone with a importtree=yes remote enabled,
git annex fsck --from the remote works.
2019-03-07 14:10:56 -04:00
Joey Hess
50797ee2c5
remove obsolete comment 2019-03-07 13:02:46 -04:00
Joey Hess
71fec9060c
move 2019-03-07 12:56:40 -04:00
Joey Hess
ee251b2e2e
implement updating the ContentIdentifier db with info from the git-annex branch
untested

This won't be super slow, but it does need to diff two likely large
trees, and since the git-annex branch rarely sits still, it will most
likely be run at the beginning of every import.

A possible speed improvement would be to only run this when the database
did not contain a ContentIdentifier. But that would only speed up
imports when there is no new version of any file on the special remote,
and at most renames of existing files are being imported.

A better speed improvement would be to record something in the git-annex
branch that indicates when an import has been run, and only do the diff
if the git-annex branch has record of a newer import than we've seen
before. Then, it would only run when there is in fact new
ContentIdentifier information available from a remote. Certainly doable,
but didn't want to complicate things yet.
2019-03-06 18:04:30 -04:00
Joey Hess
f85f06aae3
change to more efficient IKey 2019-03-06 11:14:33 -04:00
Joey Hess
3c652e1499
limit to requested remote 2019-03-05 15:56:28 -04:00
Joey Hess
cd3a2b023a
initial try at using storeExportWithContentIdentifier
Untested, and I'm not sure about the locking of the ContentIdentifier db.
2019-03-04 17:50:41 -04:00
Joey Hess
b67fa2180e
add getExportedKey
Not optimised because that would need transition code to be written for
existing export databases.
2019-03-04 17:27:10 -04:00
Joey Hess
138d07eb97
add getContentIdentifiers
Changed the database schema for this, with a new index.
2019-03-04 16:48:07 -04:00
Joey Hess
1c8793691a
import: update location log for removed files 2019-03-01 13:26:59 -04:00
Joey Hess
7acee61adf
rename 2019-03-01 12:50:33 -04:00
Joey Hess
d0066d9a87
fully update export db during import
This makes exporting immediately after import and merge be a no-op.
2019-02-27 15:29:41 -04:00
Joey Hess
775e6ed86f
fix table name 2019-02-27 13:52:56 -04:00
Joey Hess
fd304dce60
split out Types.Import and some changes to the types in it 2019-02-21 13:39:09 -04:00
Joey Hess
a818bc5e73
add Database.ContentIdentifier
Does not yet have a way to update with new information from the
git-annex branch, which will be needed when multiple repos are importing
from the same remote.
2019-02-20 16:59:10 -04:00
Joey Hess
9216718fa0
fix one more export conflict false positive
Somehow forgot about the case where the current export db tree is the
same one in the export log, and it warned about an export conflict when
getting a file in that situation. Of course it's no conflict at all!

This commit was sponsored by Jochen Bartl on Patreon.
2019-01-30 13:18:42 -04:00
Joey Hess
ad1d422dd7
fix false positive in export conflict detection
Like the earlier fixed one in Command.Export, it occurred when the same
tree was exported by multiple clones. Previous fix was incomplete since
several other places looked at the list of exported trees to detect when
there was an export conflict. Added a single unified function to avoid
missing any places it needed to be fixed.

This commit was sponsored by mo on Patreon.
2019-01-30 12:36:30 -04:00
Joey Hess
d3ab5e626b
rename key2file and file2key
What these generate is not really suitable to be used as a filename,
which is why keyFile and fileKey further escape it. These are just
serializing Keys.

Also removed a quickcheck test that was very unlikely to test anything
useful, since it relied on random chance creating something that looks
like a serialized key. The other test is sufficient for testing what
that was intended to test anyway.
2019-01-14 13:03:35 -04:00
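The remaining test is the roundtrip property; a toy, self-contained
version of it, with Key and the (de)serializers as stand-ins:

import Test.QuickCheck

newtype Key = Key String deriving (Eq, Show)

instance Arbitrary Key where
    arbitrary = Key <$> arbitrary

serializeKey :: Key -> String
serializeKey (Key s) = s

deserializeKey :: String -> Maybe Key
deserializeKey = Just . Key

-- what a serializer must satisfy: deserialize inverts serialize
prop_roundtrip :: Key -> Bool
prop_roundtrip k = deserializeKey (serializeKey k) == Just k

main :: IO ()
main = quickCheck prop_roundtrip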
Joey Hess
f62114e5ad
Merge branch 'remove-esqueleto' 2018-11-20 11:50:04 -04:00
Joey Hess
d65df7ab21
improve messages around export conflicts
When an export conflict prevents accessing a special remote, be clearer
about what the problem is and how to resolve it.

This commit was sponsored by Trenton Cronholm on Patreon.
2018-11-13 15:50:06 -04:00
Joey Hess
635b0b1df6
Merge remote-tracking branch 'seanparsons/remove-esqueleto' into remove-esqueleto 2018-11-10 12:03:52 -04:00
Joey Hess
0db653f12c
remove unused BangPatterns 2018-11-09 13:09:26 -04:00
Joey Hess
f78f97780c
Fix build with persistent-sqlite older than 2.6.3.
This commit was sponsored by Jack Hill on Patreon.
2018-11-09 13:09:02 -04:00
Sean Parsons
ef5a735d47 Fixed a not equal condition in addAssociatedFile. 2018-11-06 22:50:00 +00:00
Sean Parsons
42bdc9fa2f Removed Esqueleto as a dependency. 2018-11-06 22:18:55 +00:00
Joey Hess
6d8b8a3275
remove unused import 2018-10-31 12:58:53 -04:00
Joey Hess
fcc9eea554
avoid closeDb opening the db if it's not already open 2018-10-30 22:19:05 -04:00
Joey Hess
1428568554
retry when sqlite throws ErrorIO
I suspect this may be due to SQLITE_IOERR_SHORT_READ, but have not
verified.

I was able to reproduce it on Linux after running the test suite in a loop
for 1-3 hours until it failed.

The WAL mode entry change in 3963c5fcf5
may have hidden the problem I was seeing; I have not seen an ErrorIO
since then.
2018-10-30 18:06:38 -04:00
Joey Hess
3963c5fcf5
better approach to enabling WAL mode
The old approach opened the database an extra time to enable WAL mode,
but more recent persistent-sqlite has a better API to enable it.
2018-10-30 13:47:38 -04:00
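With a recent enough persistent-sqlite (2.6.2 or so, going from memory),
WAL can be requested in the connection info itself; a hedged sketch:

{-# LANGUAGE OverloadedStrings #-}
import Data.Functor.Identity
import Database.Persist.Sqlite

-- Turn on WAL as part of opening the connection, instead of opening
-- the database an extra time just to issue a PRAGMA. walEnabled is a
-- plain van Laarhoven lens, so it can be applied with Identity.
walInfo :: SqliteConnectionInfo
walInfo = runIdentity
    (walEnabled (\_ -> Identity True) (mkSqliteConnectionInfo "mydb"))

main :: IO ()
main = runSqliteInfo walInfo (return ())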
Joey Hess
a89db2c604
link to bug report blob
delete from tree
2018-10-30 12:07:38 -04:00
Joey Hess
57107cf213
sqlite bug report that the developers never responded to
Adding to git-annex's history so it doesn't get lost; the gmane archive
of it is long gone.
2018-10-30 12:04:28 -04:00
Joey Hess
718915e9fc
improve comments 2018-10-30 11:52:05 -04:00
Joey Hess
def5d8b02c
Fix potential crash in exporttree database due to failure to honor uniqueness constraint
I don't know the circumstances, but have a report of this:

git-annex: failed to commit changes to sqlite database: Just SQLite3 returned
ErrorConstraint while attempting to perform step.

All 3 tables in the export db have uniqueness constraints on them,
insertUnique is used for all the rest, but this use of insertMany
means it doesn't check the constraint. I guess that's what caused the
crash, but I have not been able to test it yet.

Use putMany when available, as it should be faster than mapM of insertMany.

This commit was sponsored by Brock Spratlen on Patreon.
2018-10-09 16:56:33 -04:00
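The shape of the fix, on a toy table with the same kind of uniqueness
constraint; the extension list and version details are era-appropriate
guesses, not git-annex's actual schema:

{-# LANGUAGE EmptyDataDecls, FlexibleContexts, GADTs #-}
{-# LANGUAGE MultiParamTypeClasses, OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes, TemplateHaskell, TypeFamilies #-}
import Database.Persist
import Database.Persist.Sqlite
import Database.Persist.TH

share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Exported
  key String
  file String
  ExportedIndex key file
|]

-- insertMany_ rows would bypass the ExportedIndex constraint and can
-- crash with ErrorConstraint; putMany upserts the batch instead
storeAll :: [Exported] -> SqlPersistM ()
storeAll = putMany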
Joey Hess
b8ed97f5d8
add missing space 2018-10-09 16:32:59 -04:00
Joey Hess
a7f0b99a33
fix v6 deadlock with git 2.1.4
I don't know why git diff --raw would run the clean filter, but it did
with this version of git. Perhaps it is cleaning the file to generate the
diff to search with -G?  But then why would newer gits not run the clean
filter?

It caused git annex to deadlock because the keys database was locked
and ran a git command that ran git-annex, which tried to read from the
keys database.

This commit was sponsored by Brett Eisenberg on Patreon.
2018-09-13 13:55:25 -04:00
Joey Hess
50fa17aee6
v6: recover from race between git mv and git-annex get/drop
Update pointer file next time reconcileStaged is run to recover from the
race.

Note that restagePointerFile causes git to run the clean filter,
and that will run reconcileStaged. So, normally by the time the git
annex get/drop command finishes, the race has already been dealt with.
It may be that, in some case, that won't happen and the race will be
dealt with at a later point. git-annex could run reconcileStaged at
shutdown if that becomes a problem.

This does not handle the situation where the git mv is committed before
git-annex gets a chance to run again. git commit does run the clean
filter, and that happens to re-inject the content if it was supposed to
be dropped but is still populated. But, the case where the file was
supposed to be gotten but is not populated is not handled yet.

This commit was supported by the NSF-funded DataLad project.
2018-08-22 15:56:43 -04:00
Joey Hess
18ecf41917
avoid running reconcileStaged when the index has not changed
This commit was supported by the NSF-funded DataLad project.
2018-08-22 13:04:12 -04:00
Joey Hess
5e56d9b620
v6: Update associated files database when git has staged changes to pointer files
This commit was supported by the NSF-funded DataLad project.
2018-08-21 17:02:20 -04:00
Joey Hess
710d6a35ed
fix build with old version of persistent 2017-09-25 09:57:41 -04:00
Joey Hess
129418615b
refactor 2017-09-20 16:22:32 -04:00
Joey Hess
5f9eff3f32
fix bug that prevented db being written to disk in SingleWriter mode
The bug occurred when closeDb was not called, and garbage collection of
the DbHandle didn't give the workerThread time to shut down. Fixed by
exiting the runSqlite action when a commit is made.

(MultiWriter mode already forked off a runSqlite action, so avoided the
problem.)

This commit was sponsored by Brock Spratlen on Patreon.
2017-09-18 19:42:20 -04:00
Joey Hess
f4be3c3f89
merge changes made on other repos into ExportTree
Now when one repository has exported a tree, another repository can get
files from the export, after syncing.

There's a bug: While the database update works, somehow the database on
disk does not get updated, and so the database update is run the next
time, etc. Wasn't able to figure out why yet.

This commit was sponsored by Ole-Morten Duesund on Patreon.
2017-09-18 19:21:41 -04:00
Joey Hess
0ad7e36dc1
update ExportTree table efficiently
Use same diff and key lookup except when the whole tree has to be
scanned.

This commit was sponsored by Peter Hogg on Patreon.
2017-09-18 14:27:50 -04:00
Joey Hess
b03d77c211
add ExportTree table to export db
New table needed to look up what filenames are used in the currently
exported tree, for reasons explained in export.mdwn.

Also, added smart constructors for ExportLocation and ExportDirectory to
make sure they contain filepaths with the right direction slashes.

And some code refactoring.

This commit was sponsored by Francois Marier on Patreon.
2017-09-18 13:59:59 -04:00
Joey Hess
e1f5c90c92
split out Types.Export 2017-09-15 16:46:03 -04:00
Joey Hess
c633144d28
remove empty directories when removing from export
The subtle part of this is what happens when the remote fails to remove
an empty directory. The removal from the export needs to fail in that
case, so the removal will be tried again later. However, removeExportLocation
has already been run and changed the export db, so if the next run
checks getExportLocation, it might decide nothing remains to be done,
leaving the empty directory.

Dealt with that by making removeEmptyDirectories handle a failure
by calling addExportLocation, reverting the database change so the next
run is guaranteed to try deleting the empty directory again.

This commit was sponsored by Thomas Hochstein on Patreon.
2017-09-15 15:22:53 -04:00
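The compensating-update idea, sketched with stand-in types (the
function names follow the ones mentioned above):

import Control.Monad (unless)

data Db = Db
type ExportLocation = FilePath

removeExportLocation, addExportLocation :: Db -> ExportLocation -> IO ()
removeExportLocation _ _ = return ()
addExportLocation _ _ = return ()

removeEmptyDirectories :: Db -> ExportLocation -> IO Bool
removeEmptyDirectories _ _ = return True

removeFromExport :: Db -> ExportLocation -> IO Bool
removeFromExport db loc = do
    removeExportLocation db loc
    ok <- removeEmptyDirectories db loc
    -- on failure, revert the db change so a later run sees work
    -- remaining and retries deleting the empty directory
    unless ok $
        addExportLocation db loc
    return ok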
Joey Hess
e223cf568f
add table to keep track of what subdirectories are populated in the export
So empty subdirectories can be identified and removed.

This commit was sponsored by Jochen Bartl on Patreon.
2017-09-15 14:35:22 -04:00
Joey Hess
6ab14710fc
fix consistency bug reading from export database
The export database has writes made to it and then expects to read back
the same data immediately. But, the way that Database.Handle does
writes, in order to support multiple writers, makes that not work, due
to caching issues. This resulted in export re-uploading files it had
already successfully renamed into place.

Fixed by allowing databases to be opened in MultiWriter or SingleWriter
mode. The export database only needs to support a single writer; it does
not make sense for multiple exports to run at the same time to the same
special remote.

All other databases still use MultiWriter mode. And by inspection,
nothing else in git-annex seems to be relying on being able to
immediately query for changes that were just written to the database.

This commit was supported by the NSF-funded DataLad project.
2017-09-06 17:19:07 -04:00
Joey Hess
4da763439b
use export db to correctly handle duplicate files
Removed incorrect UniqueKey constraint from the db schema; a key can appear
multiple times with different files.

The database has to be flushed after each removal. But when adding files
to the export, lots of changes are able to be queued up w/o flushing.
So it's still fairly efficient.

If large removals of files from exports are too slow, an alternative
would be to make two passes over the diff, one pass queueing deletions
from the database, then a flush and then a second pass updating the
location log. But that would use more memory, and need to look up
exportKey twice per removed file, so I've avoided that optimisation for now.

This commit was supported by the NSF-funded DataLad project.
2017-09-04 14:39:32 -04:00
Joey Hess
2c90ed1fea
flush queued changes to export db on exit 2017-09-04 14:00:54 -04:00
Joey Hess
7eb9889bfd
track exported files in a sqlite database
Went with a separate db per export remote, rather than a single export
database. Mostly because there will probably not be a lot of separate
export remotes, and it might be convenient to be able to delete a given
remote's export database.

This commit was supported by the NSF-funded DataLad project.
2017-09-04 13:53:08 -04:00
Joey Hess
ca0daa8bb8
factor non-type stuff out of Key 2017-02-24 13:42:30 -04:00
Joey Hess
3b22ad9f47
Work around sqlite's incorrect handling of umask when creating databases.
Refactored some common code into initDb.

This only deals with the problem when creating new databases. If a repo
got bad permissions in it, it's up to the user to deal with it.

This commit was sponsored by Ole-Morten Duesund on Patreon.
2017-02-13 17:39:16 -04:00
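A sketch of one way to sidestep it when creating a new database (not
the full initDb): pre-create the file so the normal open/umask rules
set its mode, since sqlite leaves an existing file's permissions alone
and treats a zero-length file as an empty database:

-- hypothetical helper; the real initDb also refactors common setup
preCreateDb :: FilePath -> IO ()
preCreateDb f = writeFile f "" -- created honoring the process umask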
Joey Hess
23d71423e1
work around ghc segfault
hSetEncoding of a closed handle segfaults.
https://ghc.haskell.org/trac/ghc/ticket/7161

8484c0c197 introduced the crash.
In particular, stdin may get closed (by eg, getContents) and then trying
to set its encoding will crash. We didn't need to adjust stdin's
encoding anyway, but only stderr, to work around
https://github.com/yesodweb/persistent/issues/474

Thanks to Mesar Hameed for assistance related to reproducing this bug.
2016-12-30 18:14:19 -04:00
Joey Hess
8484c0c197
Always use filesystem encoding for all file and handle reads and writes.
This is a big scary change. I have convinced myself it should be safe. I
hope!
2016-12-24 14:46:31 -04:00
Joey Hess
0a4479b8ec
Avoid backtraces on expected failures when built with ghc 8; only use backtraces for unexpected errors.
ghc 8 added backtraces on uncaught errors. This is great, but git-annex was
using error in many places for an error message targeted at the user, in
some known problem case. A backtrace only confuses such a message, so omit it.

Notably, commands like git annex drop that failed due to eg, numcopies,
used to use error, so had a backtrace.

This commit was sponsored by Ethan Aubin.
2016-11-15 21:29:54 -04:00
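The base-4.9 escape hatch for this, as a sketch; user-facing failures
can funnel through a single helper like it:

-- error carries a CallStack under ghc 8; for messages aimed at the
-- user, skip the backtrace
giveup :: String -> a
giveup = errorWithoutStackTrace

main :: IO ()
main = giveup "drop: could not verify enough copies exist"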
Joey Hess
206dab8b44
remove unused imports 2016-10-18 16:16:47 -04:00
Joey Hess
148bd0dbfd
refactor 2016-10-17 14:58:33 -04:00
Joey Hess
e34046de38
slightly more efficient checking of versionUsesKeysDatabase
It's a mvar lookup either way, but I think this way will be slightly more
efficient. And it reduces the number of places where it's checked to 1.
2016-07-19 14:02:49 -04:00
Joey Hess
2619019630
Avoid any access to keys database in v5 mode repositories, which are not supposed to use that database. 2016-07-19 12:12:19 -04:00
Joey Hess
5f0b551c0c
assistant: Fix race in v6 mode that caused downloaded file content to sometimes not replace pointer files.
The keys database handle needs to be closed after merging, because the
smudge filter, in another process, updates the database. Old cached info
can be read for a while from the open database handle; closing it ensures
that the info written by the smudge filter is available.

This is pretty horribly ad-hoc, and it's especially nasty that the
transferrer closes the database every time.
2016-05-16 14:49:12 -04:00
Joey Hess
d05a75e45a
fix bug in unlocked file scanner that skipped over executable unlocked files 2016-04-14 13:07:46 -04:00
Joey Hess
6702f5a9c6
fix typo 2016-02-14 19:31:40 -04:00
Joey Hess
49215d68ae
devblog 2016-02-14 18:01:35 -04:00
Joey Hess
cf260d9a15
Fix storing of filenames of v6 unlocked files when the filename is not representable in the current locale.
This is a mostly backwards compatible change. I broke backwards
compatibility in the case where a filename starts with double-quote.
That seems likely to be very rare, and v6 unlocked files are a new feature
anyway, and fsck needs to fix missing associated file mappings anyway. So,
I decided that is good enough.

The encoding used is to just show the String when it contains a problem
character. While that adds some overhead to addAssociatedFile and
removeAssociatedFile, those are not called very often. This approach has
minimal decode overhead, because most filenames won't be encoded that way,
and it only has to look for the leading double-quote to skip the expensive
read. So, getAssociatedFiles remains fast.

I did consider using ByteString instead, but getting a FilePath converted
with all chars intact, even surrogates, is difficult, and it looks like
instance PersistField ByteString uses Text, which I don't trust for problem
encoded data. It would probably be slower too, and it would make the
database less easy to inspect manually.
2016-02-14 16:37:25 -04:00
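The described encoding, as a self-contained sketch; isProblem is a
stand-in for the real predicate:

import Data.Char (isAscii)

encodeFileName :: String -> String
encodeFileName s
    | any isProblem s = show s -- quoted and escaped; begins with "
    | otherwise = s
  where
    isProblem c = c == '"' || not (isAscii c)

decodeFileName :: String -> String
decodeFileName s@('"':_) = case reads s of
    [(f, "")] -> f
    _ -> s
decodeFileName s = s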
Joey Hess
9df13e73ae
if keys database cannot be opened due to permissions, ignore
This lets readonly repos be used. If a repo is readonly, we can ignore the
keys database, because nothing that we can do will change the state of the
repo anyway.
2016-02-12 14:16:35 -04:00
Joey Hess
737e45156e
remove 163 lines of code without changing anything except imports 2016-01-20 16:36:33 -04:00
Joey Hess
927e1a067e
fix import warnings 2016-01-14 10:30:54 -04:00
Joey Hess
fd3d866dec
another fix for old ghc 2016-01-13 12:32:57 -04:00
Joey Hess
423fffcd41
change keys database to use IKey type with more efficient serialization
This breaks any existing keys database!

IKey serializes more efficiently than SKey, although this limits the
use of its Read/Show instances.

This makes the keys database use less disk space, and so should be a win.

Updated benchmark:

benchmarking keys database/getAssociatedFiles from 1000 (hit)
time                 64.04 μs   (63.95 μs .. 64.13 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 64.02 μs   (63.96 μs .. 64.08 μs)
std dev              218.2 ns   (172.5 ns .. 299.3 ns)

benchmarking keys database/getAssociatedFiles from 1000 (miss)
time                 52.53 μs   (52.18 μs .. 53.21 μs)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 52.31 μs   (52.18 μs .. 52.91 μs)
std dev              734.6 ns   (206.2 ns .. 1.623 μs)

benchmarking keys database/getAssociatedKey from 1000 (hit)
time                 64.60 μs   (64.46 μs .. 64.77 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 64.74 μs   (64.57 μs .. 65.20 μs)
std dev              900.2 ns   (389.7 ns .. 1.733 μs)

benchmarking keys database/getAssociatedKey from 1000 (miss)
time                 52.46 μs   (52.29 μs .. 52.68 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 52.63 μs   (52.35 μs .. 53.37 μs)
std dev              1.362 μs   (562.7 ns .. 2.608 μs)
variance introduced by outliers: 24% (moderately inflated)

benchmarking keys database/addAssociatedFile to 1000 (old)
time                 487.3 μs   (484.7 μs .. 490.1 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 490.9 μs   (487.8 μs .. 496.5 μs)
std dev              13.95 μs   (6.841 μs .. 22.03 μs)
variance introduced by outliers: 20% (moderately inflated)

benchmarking keys database/addAssociatedFile to 1000 (new)
time                 6.633 ms   (5.741 ms .. 7.751 ms)
                     0.905 R²   (0.850 R² .. 0.965 R²)
mean                 8.252 ms   (7.803 ms .. 8.602 ms)
std dev              1.126 ms   (900.3 μs .. 1.430 ms)
variance introduced by outliers: 72% (severely inflated)

benchmarking keys database/getAssociatedFiles from 10000 (hit)
time                 65.36 μs   (64.71 μs .. 66.37 μs)
                     0.998 R²   (0.995 R² .. 1.000 R²)
mean                 65.28 μs   (64.72 μs .. 66.45 μs)
std dev              2.576 μs   (920.8 ns .. 4.122 μs)
variance introduced by outliers: 42% (moderately inflated)

benchmarking keys database/getAssociatedFiles from 10000 (miss)
time                 52.34 μs   (52.25 μs .. 52.45 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 52.49 μs   (52.42 μs .. 52.59 μs)
std dev              255.4 ns   (205.8 ns .. 312.9 ns)

benchmarking keys database/getAssociatedKey from 10000 (hit)
time                 64.76 μs   (64.67 μs .. 64.84 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 64.67 μs   (64.62 μs .. 64.72 μs)
std dev              177.3 ns   (148.1 ns .. 217.1 ns)

benchmarking keys database/getAssociatedKey from 10000 (miss)
time                 52.75 μs   (52.66 μs .. 52.82 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 52.69 μs   (52.63 μs .. 52.75 μs)
std dev              210.6 ns   (173.7 ns .. 265.9 ns)

benchmarking keys database/addAssociatedFile to 10000 (old)
time                 489.7 μs   (488.7 μs .. 490.7 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 490.4 μs   (489.6 μs .. 492.2 μs)
std dev              3.990 μs   (2.435 μs .. 7.604 μs)

benchmarking keys database/addAssociatedFile to 10000 (new)
time                 9.994 ms   (9.186 ms .. 10.74 ms)
                     0.959 R²   (0.928 R² .. 0.979 R²)
mean                 9.906 ms   (9.343 ms .. 10.40 ms)
std dev              1.384 ms   (1.051 ms .. 2.100 ms)
variance introduced by outliers: 69% (severely inflated)
2016-01-12 14:01:50 -04:00
Joey Hess
75f61df323
cleanup 2016-01-12 13:31:13 -04:00
Joey Hess
ca2a527e93
add FileKeyIndex to Keys db to optimize getAssociatedKey
This is a schema change so will break any existing keys databases. But,
it's not been released yet, so I'm still able to make such changes.

This speeds up the benchmark quite nicely:

benchmarking keys database/getAssociatedKey from 1000 (hit)
time                 91.65 μs   (91.48 μs .. 91.81 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 91.78 μs   (91.66 μs .. 91.94 μs)
std dev              468.3 ns   (353.1 ns .. 624.3 ns)

benchmarking keys database/getAssociatedKey from 1000 (miss)
time                 53.33 μs   (53.23 μs .. 53.40 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 53.43 μs   (53.36 μs .. 53.53 μs)
std dev              274.2 ns   (211.7 ns .. 361.5 ns)

benchmarking keys database/getAssociatedKey from 10000 (hit)
time                 92.99 μs   (92.74 μs .. 93.27 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 92.90 μs   (92.76 μs .. 93.16 μs)
std dev              608.7 ns   (404.1 ns .. 963.5 ns)

benchmarking keys database/getAssociatedKey from 10000 (miss)
time                 53.12 μs   (52.91 μs .. 53.39 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 52.84 μs   (52.68 μs .. 53.16 μs)
std dev              715.4 ns   (400.4 ns .. 1.370 μs)
2016-01-12 13:07:14 -04:00
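The schema's shape after this change, sketched in persistent's model
syntax with IKey reduced to a String stand-in. Each capitalized line
declares a uniqueness constraint, and each constraint gives sqlite an
index: KeyFileIndex serves getAssociatedFiles (lookup by key),
FileKeyIndex serves getAssociatedKey (lookup by file):

{-# LANGUAGE EmptyDataDecls, FlexibleContexts, GADTs #-}
{-# LANGUAGE MultiParamTypeClasses, OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes, TemplateHaskell, TypeFamilies #-}
import Database.Persist.TH

type IKey = String -- stand-in for the efficiently-serialized key type

share [mkPersist sqlSettings, mkMigrate "migrateKeysDb"] [persistLowerCase|
Associated
  key IKey
  file FilePath
  KeyFileIndex key file
  FileKeyIndex file key
|]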
Joey Hess
f9c5aa84e0
add database benchmark
The benchmark shows that the database access is quite fast indeed!
And, it scales linearly to the number of keys, with one exception,
getAssociatedKey.

Based on this benchmark, I don't think I need worry about optimising
for cases where all files are locked and the database is mostly empty.
In those cases, database access will be misses, and according to this
benchmark, should add only 50 milliseconds to runtime.

(NB: There may be some overhead to getting the database opened and locking
the handle that this benchmark doesn't see.)

joey@darkstar:~/src/git-annex>./git-annex benchmark
setting up database with 1000
setting up database with 10000
benchmarking keys database/getAssociatedFiles from 1000 (hit)
time                 62.77 μs   (62.70 μs .. 62.85 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 62.81 μs   (62.76 μs .. 62.88 μs)
std dev              201.6 ns   (157.5 ns .. 259.5 ns)

benchmarking keys database/getAssociatedFiles from 1000 (miss)
time                 50.02 μs   (49.97 μs .. 50.07 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 50.09 μs   (50.04 μs .. 50.17 μs)
std dev              206.7 ns   (133.8 ns .. 295.3 ns)

benchmarking keys database/getAssociatedKey from 1000 (hit)
time                 211.2 μs   (210.5 μs .. 212.3 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 211.0 μs   (210.7 μs .. 212.0 μs)
std dev              1.685 μs   (334.4 ns .. 3.517 μs)

benchmarking keys database/getAssociatedKey from 1000 (miss)
time                 173.5 μs   (172.7 μs .. 174.2 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 173.7 μs   (173.0 μs .. 175.5 μs)
std dev              3.833 μs   (1.858 μs .. 6.617 μs)
variance introduced by outliers: 16% (moderately inflated)

benchmarking keys database/getAssociatedFiles from 10000 (hit)
time                 64.01 μs   (63.84 μs .. 64.18 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 64.85 μs   (64.34 μs .. 66.02 μs)
std dev              2.433 μs   (547.6 ns .. 4.652 μs)
variance introduced by outliers: 40% (moderately inflated)

benchmarking keys database/getAssociatedFiles from 10000 (miss)
time                 50.33 μs   (50.28 μs .. 50.39 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 50.32 μs   (50.26 μs .. 50.38 μs)
std dev              202.7 ns   (167.6 ns .. 252.0 ns)

benchmarking keys database/getAssociatedKey from 10000 (hit)
time                 1.142 ms   (1.139 ms .. 1.146 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.142 ms   (1.140 ms .. 1.144 ms)
std dev              7.142 μs   (4.994 μs .. 10.98 μs)

benchmarking keys database/getAssociatedKey from 10000 (miss)
time                 1.094 ms   (1.092 ms .. 1.096 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.095 ms   (1.095 ms .. 1.097 ms)
std dev              4.277 μs   (2.591 μs .. 7.228 μs)
2016-01-12 13:07:03 -04:00
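A toy criterion harness in the shape of the output above; the
list-backed lookup is a stand-in so the example runs without a
database:

import Criterion.Main

getAssociatedFiles :: [(String, FilePath)] -> String -> IO [FilePath]
getAssociatedFiles db k = return (map snd (filter ((== k) . fst) db))

main :: IO ()
main = do
    let db1000 = [(show n, "file" ++ show n) | n <- [1 .. 1000 :: Int]]
    defaultMain
        [ bgroup "keys database"
            [ bench "getAssociatedFiles from 1000 (hit)" $
                nfIO (getAssociatedFiles db1000 "500")
            , bench "getAssociatedFiles from 1000 (miss)" $
                nfIO (getAssociatedFiles db1000 "nosuch")
            ]
        ]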
Joey Hess
8111eb21e6
split out raw sql interface 2016-01-11 15:52:11 -04:00
Joey Hess
b1a1b40a15
fix inverted logic in old associated files cleanup 2016-01-07 15:54:10 -04:00
Joey Hess
aa4f353e5d
clarify absPathFrom
The repo path is typically relative, not absolute, so
providing it to absPathFrom doesn't yield an absolute path.
This is not a bug, just unclear documentation.

Indeed, there seems to be no reason to simplifyPath here, which absPathFrom
does, so instead just combine the repo path and the TopFilePath.

Also, removed an export of the TopFilePath constructor; asTopFilePath
is provided to construct one as-is.
2016-01-05 17:33:48 -04:00
Joey Hess
b3d60ca285
use TopFilePath for associated files
Fixes several bugs with updates of pointer files. When eg, running
git annex drop --from localremote
it was updating the pointer file in the local repository, not the remote.
Also, fixes drop ../foo when run in a subdir, and probably lots of other
problems. Test suite drops from ~30 to 11 failures now.

TopFilePath is used to force thinking about what the filepath is relative
to.

The data stored in the sqlite db is still just a plain string, and
TopFilePath is a newtype, so there's no overhead involved in using it in
Database.Keys.
2016-01-05 17:22:19 -04:00
Joey Hess
ec28151722
improve data type 2016-01-01 15:56:24 -04:00
Joey Hess
f7256842cc
wait for git ls-tree to exit 2016-01-01 15:51:29 -04:00
Joey Hess
9b99595473
only do scan when there's a branch, not in freshly created new repo 2016-01-01 15:16:16 -04:00
Joey Hess
f36f24197a
scan for unlocked files on init/upgrade of v6 repo 2016-01-01 15:09:42 -04:00
Joey Hess
bcdc6db2c3
fix build with pre-AMP ghc 2015-12-28 17:21:26 -04:00
Joey Hess
b61575516b
fix build with pre-AMP GHC 2015-12-28 12:41:47 -04:00
Joey Hess
9d3474ef1b
unused import 2015-12-24 13:07:42 -04:00
Joey Hess
c21567dfd3
typo 2015-12-24 13:06:03 -04:00
Joey Hess
4224fae71f
optimise read and write for Keys database (untested)
Writes are optimised by queueing up multiple writes when possible.
The queue is flushed after the Annex monad action finishes. That makes it
happen on program termination, and also whenever a nested Annex monad action
finishes.

Reads are optimised by checking once (per AnnexState) if the database
exists. If the database doesn't exist yet, all reads return mempty.

Reads also cause queued writes to be flushed, so reads will always be
consistent with writes (as long as they're made inside the same Annex monad).
A future optimisation path would be to determine when that's not necessary,
which is probably most of the time, and avoid flushing unnecessarily.

Design notes for this commit:

- separate reads from writes
- reuse a handle which is left open until program
  exit or until the MVar goes out of scope (and autoclosed then)
- writes are queued
  - queue is flushed periodically
  - immediate queue flush before any read
  - auto-flush queue when database handle is garbage collected
  - flush queue on exit from Annex monad
    (Note that this may happen repeatedly for a single database connection;
    or a connection may be reused for multiple Annex monad actions,
    possibly even concurrent ones.)
- if database does not exist (or is empty) the handle
  is not opened by reads; reads instead return empty results
- writes open the handle if it was not open previously
2015-12-23 19:18:52 -04:00
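The queue half of the design, as a minimal sketch; the writer function
stands in for the actual sqlite commit:

import Control.Concurrent.MVar

-- a queue of pending writes plus the action that persists one of them
data DbQueue a = DbQueue (MVar [a]) (a -> IO ())

queueDb :: DbQueue a -> a -> IO ()
queueDb (DbQueue qv _) w = modifyMVar_ qv (return . (w :))

-- flushing an already-empty queue is a harmless no-op, so this can
-- safely be run repeatedly
flushDbQueue :: DbQueue a -> IO ()
flushDbQueue (DbQueue qv writer) = modifyMVar_ qv $ \q -> do
    mapM_ writer (reverse q) -- oldest first
    return []

-- reads flush first, so they stay consistent with queued writes
readDb :: DbQueue a -> IO r -> IO r
readDb h reader = flushDbQueue h >> reader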
Joey Hess
959b060e26
allow flushDbQueue to be run repeatedly 2015-12-23 16:36:08 -04:00
Joey Hess
d43ac8056b
auto-close database connections when MVar is GCed 2015-12-23 16:11:36 -04:00
Joey Hess
6d38f54db4
split out Database.Queue from Database.Handle
Fsck can use the queue for efficiency since it is write-heavy, and only
reads a value before writing it. But, the queue is not suited to the Keys
database.
2015-12-23 14:59:58 -04:00
Joey Hess
38a23928e9
temporarily remove cached keys database connection
The problem is that shutdown is not always called, particularly in the test
suite. So, a database connection would be opened, possibly some changes
queued, and then not shut down.

One way this can happen is when using Annex.eval or Annex.run with a new
state. A better fix might be to make both of them call Keys.shutdown
(and be sure to do it even if the annex action threw an error).

Complication: Sometimes they're run reusing an existing state, so shutting
down a database connection could cause problems for other users of that
same state. I think this would need a MVar holding the database handle,
so it could be emptied once shut down, and another user of the database
connection could then start up a new one if it got shut down. But, what if
2 threads were concurrently using the same database handle and one shut it
down while the other was writing to it? Urgh.

Might have to go that route eventually to get the database access to run
fast enough. For now, a quick fix to get the test suite happier, at the
expense of speed.
2015-12-16 14:05:26 -04:00
Joey Hess
622da992f8
reorder database shutdown to be concurrency safe
If a DbHandle is in use by another thread, it could be queueing changes
while shutdown is running. So, wait for the worker to finish before
flushing the queue, so that any last-minute writes are included. Before
this fix, they would be silently dropped.

Of course, if the other thread continues to try to use a DbHandle once it's
closed, it will block forever as the worker is no longer reading from the
jobs MVar. So, that would crash with
"thread blocked indefinitely in an MVar operation".
2015-12-16 13:52:43 -04:00
Joey Hess
1a051f4300
comment 2015-12-16 13:24:45 -04:00
Joey Hess
0a7a2dae4e
add getAssociatedKey
I guess this is just as efficient as the getAssociatedFiles query, but I
have not tried to optimise the database yet.
2015-12-15 13:05:23 -04:00
Joey Hess
ce73a96e4e
use InodeCache when dropping a key to see if a pointer file can be safely reset
The Keys database can hold multiple inode caches for a given key. One for
the annex object, and one for each pointer file, which may not be hard
linked to it.

Inode caches for a key are recorded when its content is added to the annex,
but only if it has known pointer files. This is to avoid the overhead of
maintaining the database when not needed.

When the smudge filter outputs a file's content, the inode cache is not
updated, because git's smudge interface doesn't let us write the file. So,
dropping will fall back to doing an expensive verification then. Ideally,
git's interface would be improved, and then the inode cache could be
updated then too.
2015-12-09 17:54:54 -04:00
Joey Hess
5e8c628d2e
add inode cache to the db
Renamed the db to keys, since it holds various info about Keys.

Dropping a key will update its pointer files, as long as their content can
be verified to be unmodified. This falls back to checksum verification, but
I want it to use an InodeCache of the key, for speed. But, I have not made
anything populate that cache yet.
2015-12-09 17:00:37 -04:00
Joey Hess
05b598a057
stash DbHandle in Annex state 2015-12-09 14:55:47 -04:00
Joey Hess
a6e5ee0d0e
associated files database 2015-12-07 14:35:37 -04:00
Joey Hess
5072c62932
avoid ugly error about MVar if the sqlite worker thread crashes 2015-10-12 13:00:22 -04:00
Joey Hess
4ed82e5328 fsck: Work around bug in persistent that broke display of problematically encoded filenames on stderr when using --incremental. 2015-09-09 17:02:00 -04:00
Joey Hess
bc4129cc77 fsck: Commit incremental fsck database after every 1000 files fscked, or every 5 minutes, whichever comes first.
Previously, commits were made every 1000 files fscked.

Also, improve docs
2015-07-31 16:42:15 -04:00
Joey Hess
ecb0d5c087 use lock pools throughout git-annex
The one exception is in Utility.Daemon. As long as a process only
daemonizes once, which seems reasonable, and as long as it avoids calling
checkDaemon once it's already running as a daemon, the fcntl locking
gotchas won't be a problem there.

Annex.LockFile has its own separate lock pool layer, which has been
renamed to LockCache. This is a persistent cache of locks that persist
until closed.

This is not quite done; lockContent still needs to be converted.
2015-05-19 14:09:52 -04:00
Joey Hess
ec267aa1ea rejigger imports for clean build with ghc 7.10's AMP changes
The explicit import Prelude after import Control.Applicative is a trick
to avoid a warning.
2015-05-10 16:20:30 -04:00
Joey Hess
addc82dab7 removed all uses of undefined from code base
It's a code smell and can lead to hard-to-diagnose error messages.
2015-04-19 00:38:29 -04:00
Joey Hess
5d974b26fc generated TH uses forall 2015-02-22 16:57:19 -04:00
Joey Hess
e143d5e7d1 avoid closing db handle when reconnecting to do a write 2015-02-22 14:21:39 -04:00
Joey Hess
bf80a16c2e complete work around for sqlite SELECT ErrorBusy on new connection bug 2015-02-22 14:08:26 -04:00
Joey Hess
b541a5e38b WIP 2015-02-18 17:46:58 -04:00
Joey Hess
a01285ff33 more extensions needed by newer version of persistent 2015-02-18 17:30:07 -04:00
Joey Hess
80683871ee deal with rare SELECT ErrorBusy failures
I think they might be a sqlite bug. In discussions with sqlite devs.
2015-02-18 16:56:52 -04:00
Joey Hess
af254615b2 use WAL mode to ensure read from db always works, even when it's being written to
Also, moved the database to a subdir, as there are multiple files.

This seems to work well with concurrent fscks, although they still do
redundant work due to the commit granularity. Occasionally two writes will
conflict, and one is then deferred and happens later.

Except, with 3 concurrent fscks, I got failures:

git-annex: user error (SQLite3 returned ErrorBusy while attempting to perform prepare "SELECT \"fscked\".\"key\"\nFROM \"fscked\"\nWHERE \"fscked\".\"key\" = ?\n": database is locked)

Argh!!!
2015-02-18 15:54:24 -04:00
Joey Hess
17cb219231 more robust handling of deferred commits
Still not robust enough. I have 3 fscks running concurrently, and am
seeing:

("commit deferred",user error (SQLite3 returned ErrorBusy while attempting
to perform step.))

and

git-annex: user error (SQLite3 returned ErrorBusy while attempting to perform prepare "SELECT \"fscked\".\"key\"\nFROM \"fscked\"\nWHERE \"fscked\".\"key\" = ?\n": database is locked)
2015-02-18 14:11:27 -04:00
Joey Hess
3414229354 fsck: Multiple incremental fscks of different repos (some remote) can now be in progress at the same time in the same repo without it getting confused about which files have been checked for which remotes. 2015-02-17 17:08:11 -04:00
Joey Hess
a3370ac459 allow for concurrent incremental fsck processes again (sorta)
Sqlite doesn't support multiple concurrent writers
at all. One of them will fail to write. It's not even possible to have two
processes building up separate transactions at the same time. Before using
sqlite, incremental fsck could work perfectly well with multiple fsck
processes running concurrently. I'd like to keep that working.

My partial solution, so far, is to make git-annex buffer writes, and every
so often send them all to sqlite at once, in a transaction. So most of the
time, nothing is writing to the database. (And if it gets unlucky and
a write fails due to a collision with another writer, it can just wait and
retry the write later.) This lets multiple processes write to the database
successfully.

But, for the purposes of concurrent, incremental fsck, it's not ideal.
Each process doesn't immediately learn of files that another process has
checked. So they'll tend to do redundant work.

Only way I can see to improve this is to use some other mechanism for
short-term IPC between the fsck processes. Not yet done.

----

Also, make addDb check if an item is in the database already, and not try
to re-add it. That fixes an intermittent crash with
"SQLite3 returned ErrorConstraint while attempting to perform step."

I am not 100% sure why; it only started happening when I moved write
buffering into the queue. It seemed to generally happen on the same file
each time, so could just be due to multiple files having the same key.
However, I doubt my sound repo has many duplicate keys, and I suspect
something else is going on.

----

Updated benchmark, with the 1000 item queue: 6m33.808s
2015-02-17 16:56:12 -04:00
Joey Hess
afb3e3e472 avoid crash when starting fsck --incremental when one is already running
Turns out sqlite does not like having its database deleted out from
underneath it. It might suffice to empty the table, but I would rather
start each fsck over with a new database, so I added a lock file, and
running incremental fscks use a shared lock.

This leaves one concurrency bug left; running two concurrent fsck --more
will lead to: "SQLite3 returned ErrorBusy while attempting to perform step."
and one or both will fail. This is a concurrent writers problem.
2015-02-17 13:30:24 -04:00
Joey Hess
ea76d04e15 show error when sqlite crashes worker thread
Better than "blocked indefinitely in MVar"..
2015-02-17 13:03:57 -04:00
Joey Hess
99a1287f4f avoid fromIntegral overhead 2015-02-16 17:22:00 -04:00
Joey Hess
7d36e7d18d commit new transaction after 60 seconds
Database.Handle can now be given a CommitPolicy, making it easy to specify
transaction granularity.

Benchmarking the old git-annex incremental fsck that flips sticky bits
to the new that uses sqlite, running in a repo with 37000 annexed files,
both from cold cache:

old: 6m6.906s
new: 6m26.913s

This commit was sponsored by TasLUG.
2015-02-16 17:05:42 -04:00
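A hypothetical shape for that policy type (the names are guesses, not
the actual constructors):

import Data.Time.Clock (NominalDiffTime)

data CommitPolicy
    = CommitImmediately -- every change is its own transaction
    | CommitInterval NominalDiffTime -- batch and commit when elapsed

everyMinute :: CommitPolicy
everyMinute = CommitInterval 60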
Joey Hess
d2766df914 commit more transactions when fscking
This makes interrupt and resume work, robustly.

But, incremental fsck is slowed down by all those transactions..
2015-02-16 16:07:36 -04:00
Joey Hess
91e9146d1b convert incremental fsck to using sqlite database
Did not keep backwards compat for sticky bit records. An incremental fsck
that is already in progress will start over on upgrade to this version.

This is not yet ready for merging. The autobuilders need to have sqlite
installed.

Also, interrupting a fsck --incremental does not commit the database.
So, resuming with fsck --more restarts from beginning.

Memory: Constant during a fsck of tens of thousands of files.
(But, it does seem to buffer the whole transaction in memory, so
may really scale with number of files.)

CPU: ?
2015-02-16 15:35:26 -04:00