Commit graph

226 commits

Author SHA1 Message Date
Joey Hess
9ed63d1545 Promote file not found warning message to an error. 2014-09-11 13:36:28 -04:00
Joey Hess
b874f84086 New annex.hardlink setting. Closes: #758593
* New annex.hardlink setting. Closes: #758593
* init: Automatically detect when a repository was cloned with --shared,
  and set annex.hardlink=true, as well as marking the repository as
  untrusted.

Had to reorganize Logs.Trust a bit to avoid a cycle between it and
Annex.Init.
2014-09-05 13:44:09 -04:00
Joey Hess
d279180266 reorganize and refactor lock code
Added a convenience Utility.LockFile that is not a windows/posix
portability shim, but still manages to cut down on the boilerplate around
locking.

This commit was sponsored by Johan Herland.
2014-08-20 16:45:58 -04:00
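
A minimal sketch of the kind of lock-file convenience wrapper described above (withLockFile is illustrative, not the actual Utility.LockFile interface; it uses base's GHC.IO.Handle.Lock, so it is not the exact mechanism the commit uses):

    import Control.Exception (bracket)
    import GHC.IO.Handle.Lock (LockMode (ExclusiveLock), hLock)
    import System.IO (IOMode (ReadWriteMode), hClose, openFile)

    -- Hypothetical helper: open the lock file (creating it if needed),
    -- take an exclusive lock, run the action, and release the lock when
    -- the handle is closed.
    withLockFile :: FilePath -> IO a -> IO a
    withLockFile path action =
        bracket (openFile path ReadWriteMode) hClose $ \h -> do
            hLock h ExclusiveLock
            action
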
Joey Hess
6adbd50cd9 testremote: Add testing of behavior when remote is not available
Added a mkUnavailable method, which a Remote can use to generate a version
of itself that is not available. Implemented for several, but not yet all
remotes.

This allows testing that checkPresent properly throws an exception when
it cannot check if a key is present or not. It also allows testing that the
other methods don't throw exceptions in these circumstances.

This immediately found several bugs, which this commit also fixes!

* git remotes using ssh accidentally had checkPresent return
  an exception, rather than throwing it
* The chunking code accidentally returned False rather than
  propagating an exception when there were no chunks and
  checkPresent threw an exception for the non-chunked key.

This commit was sponsored by Carlo Matteo Capocasa.
2014-08-10 15:02:59 -04:00
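
A hedged sketch of the behaviour the test above checks for (the Key alias and the stand-in checkPresent below are illustrative, not the real Remote interface): an unavailable remote's checkPresent must throw, because answering False would wrongly look like "the key is not present".

    import Control.Exception (SomeException, try)

    type Key = String  -- stand-in for git-annex's real Key type

    -- Stand-in for a remote made unavailable by mkUnavailable.
    checkPresentUnavailable :: Key -> IO Bool
    checkPresentUnavailable _ = ioError (userError "remote is not available")

    -- The property being tested: checkPresent throws instead of
    -- answering False when it cannot tell whether the key is present.
    behavesCorrectly :: IO Bool
    behavesCorrectly = do
        r <- try (checkPresentUnavailable "some-key") :: IO (Either SomeException Bool)
        return $ case r of
            Left _  -> True   -- threw an exception, as required
            Right _ -> False  -- answered; indistinguishable from "not present"
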
Joey Hess
b4cf22a388 pushed checkPresent exception handling out of Remote implementations
I tend to prefer moving toward explicit exception handling, not away from
it, but in this case, I think there are good reasons to let checkPresent
throw exceptions:

1. They can all be caught in one place (Remote.hasKey), and we know
   every possible exception is caught there now, which we didn't before.
2. It simplified the code of the Remotes. I think it makes sense for
   Remotes to be able to be implemented without needing to worry about
   catching exceptions inside them. (Mostly.)
3. Types.StoreRetrieve.Preparer can only work on things that return a
   Bool, which all the other relevant remote methods already did.
   I do not see a good way to generalize that type; my previous attempts
   failed miserably.
2014-08-06 13:45:19 -04:00
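
A rough sketch of the shape point 1 above gives (types simplified; the real code lives in Remote.hasKey): each remote's checkPresent may throw, and a single wrapper catches everything, keeping "could not check" distinct from "checked and not present".

    import Control.Exception (SomeException, try)

    type Key = String  -- simplified stand-in

    -- Sketch only: the one catch-all point. A caught exception means the
    -- check failed; a Right False means the key is genuinely absent.
    hasKey :: (Key -> IO Bool) -> Key -> IO (Either String Bool)
    hasKey checkPresent key = do
        r <- try (checkPresent key) :: IO (Either SomeException Bool)
        return $ either (Left . show) Right r
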
Joey Hess
d05b7b9182 better byteRetriever
Make the byteRetriever be passed the callback that consumes the bytestring.

This way, there's no worries about the lazy bytestring not all being read
when the resource that's creating it is closed.

Which in turn lets bup, ddar, and S3 each switch from using an unnecessary
fileRetriever to a byteRetriever. So, more efficient on chunks and encrypted
files.

The only remaining fileRetrievers are hook and external, which really do
retrieve to files.
2014-08-03 01:12:24 -04:00
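
A sketch of the callback-passing shape described above (names are illustrative; the real Retriever types live in git-annex's source): the retriever keeps its resource open while the consumer runs, so the lazy ByteString is fully consumed before the handle goes away.

    import qualified Data.ByteString.Lazy as L
    import Control.Exception (bracket)
    import System.IO (IOMode (ReadMode), hClose, openBinaryFile)

    -- Illustrative byteRetriever reading from a local file; real remotes
    -- would stream from bup, ddar, S3, etc. instead.
    byteRetriever :: FilePath -> (L.ByteString -> IO Bool) -> IO Bool
    byteRetriever src consumer =
        bracket (openBinaryFile src ReadMode) hClose $ \h -> do
            b <- L.hGetContents h
            consumer b  -- the handle stays open until the consumer finishes
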
Joey Hess
32e4368377 S3: support chunking
The assistant defaults to 1MiB chunk size for new S3 special remotes.
Which will work around a couple of bugs:
  http://git-annex.branchable.com/bugs/S3_memory_leaks/
  http://git-annex.branchable.com/bugs/S3_upload_not_using_multipart/
2014-08-02 15:51:58 -04:00
Joey Hess
c3750901d8 specialize Preparer a bit, so resourcePrepare can be added
The forall a. in Preparer made resourcePrepare seem unusable, so
I specialized a to Bool. Which works for both Preparer Storer and
Preparer Retriever, but wouldn't let the Preparer be used for hasKey
as it currently stands.
2014-08-02 15:34:09 -04:00
Joey Hess
c03e1c5648 add new section for testing commands 2014-08-01 12:49:26 -04:00
Joey Hess
53b87a859e optimise case of remote that retrieves FileContent, when chunks and encryption are not being used
No need to read whole FileContent only to write it back out to a file in
this case. Can just rename! Yay.

Also, incidentally, fixed an attempt to open a file for write that was
already opened for write, which caused a crash and deadlock.
2014-07-29 20:10:14 -04:00
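
A sketch of the fast path described above (moveIntoPlace is a hypothetical name, not git-annex's actual function): when the remote hands back a file and neither chunking nor encryption applies, the content can simply be renamed into place.

    import qualified Data.ByteString.Lazy as L
    import System.Directory (renameFile)

    -- Sketch: rename when the retrieved file can be used as-is; otherwise
    -- the content still has to be streamed through (chunking/encryption).
    moveIntoPlace :: Bool -> FilePath -> FilePath -> IO ()
    moveIntoPlace plainFile src dest
        | plainFile = renameFile src dest              -- no copy of the data
        | otherwise = L.readFile src >>= L.writeFile dest
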
Joey Hess
bc9e4697b9 better type for Retriever
Putting a callback in the Retriever type allows for the callback to
remove the retrieved file when it's done with it.

I did not really want to make Retriever be fixed to Annex Bool,
but when I tried to use Annex a, I got into a bit of a type mess.
2014-07-29 18:41:41 -04:00
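
A minimal sketch of why the callback helps here (names are illustrative, not the real Retriever plumbing): because the consumer is passed in, the retriever can delete its temporary file as soon as the consumer returns, even if it throws.

    import Control.Exception (finally)
    import System.Directory (removeFile)

    -- Sketch only: run the callback on the retrieved temp file, then clean up.
    withRetrievedFile :: IO FilePath -> (FilePath -> IO Bool) -> IO Bool
    withRetrievedFile retrieve callback = do
        tmp <- retrieve
        callback tmp `finally` removeFile tmp
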
Joey Hess
47e522979c allow Retriever action to update the progress meter
Needed for eg, Remote.External.

Generally, any Retriever that stores content in a file is responsible for
updating the meter, while ones that produce a lazy bytestring cannot update
the meter, so are not asked to.
2014-07-29 17:18:49 -04:00
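
A hedged sketch of that division of labour (MeterUpdate below is just an illustrative alias, not git-annex's real type): a retriever that writes to a file can bump the meter after each chunk it writes, whereas one producing a lazy bytestring has no good place to do so.

    import Control.Monad (foldM_)
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Lazy as L
    import System.IO (IOMode (WriteMode), withBinaryFile)

    type MeterUpdate = Integer -> IO ()  -- total bytes processed so far (illustrative)

    -- Sketch: write lazily-produced content to a file, updating the meter
    -- once per strict chunk written.
    writeWithMeter :: MeterUpdate -> FilePath -> L.ByteString -> IO ()
    writeWithMeter meter dest content =
        withBinaryFile dest WriteMode $ \h ->
            foldM_ (\n c -> do
                B.hPut h c
                let n' = n + fromIntegral (B.length c)
                meter n'
                return n') 0 (L.toChunks content)
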
Joey Hess
1d263e1e7e lift types from IO to Annex
Some remotes like External need to run store and retrieve actions in Annex,
not IO. In order to do that lift, I had to dive pretty deep into the
utilities, making Utility.Gpg and Utility.Tmp be partly converted to using
MonadIO, and Control.Monad.Catch for exception handling.

There should be no behavior changes in this commit.

This commit was sponsored by Michael Barabanov.
2014-07-29 16:28:44 -04:00
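
A small sketch of the kind of generalisation involved (withFileGeneral is a made-up example, not one of the functions actually touched): code written against IO is reworked to run in any monad with MonadIO and the exceptions package's MonadMask, so it can also be called from the Annex monad.

    import Control.Monad.Catch (MonadMask, bracket)
    import Control.Monad.IO.Class (MonadIO, liftIO)
    import System.IO (Handle, IOMode (ReadMode), hClose, openFile)

    -- Sketch: the IO actions are lifted, and bracket comes from
    -- Control.Monad.Catch so cleanup works in the lifted monad too.
    withFileGeneral :: (MonadIO m, MonadMask m) => FilePath -> (Handle -> m a) -> m a
    withFileGeneral path =
        bracket (liftIO (openFile path ReadMode)) (liftIO . hClose)
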
Joey Hess
f5af470875 add ContentSource type, for remotes that act on files rather than ByteStrings
Note that currently nothing cleans up a ContentSource's file, when eg,
retrieving chunks.
2014-07-29 15:16:12 -04:00
Joey Hess
58f727afdd resume interrupted chunked uploads
Leverage the new chunked remotes to automatically resume uploads.
Sort of like rsync, although of course not as efficient since this
needs to start at a chunk boundary.

But, unlike rsync, this method will work for S3, WebDAV, external
special remotes, etc, etc. Only directory special remotes so far,
but many more soon!

This implementation will also allow starting an upload from one repository,
interrupting it, and then resuming the upload to the same remote from
an entirely different repository.

Note that I added a comment that storeKey should atomically move the content
into place once it's all received. This was already an undocumented
requirement -- it's necessary for hasKey to work reliably. This resume code
just uses hasKey to find the first chunk that's missing.

Note that if there are two uploads of the same key to the same chunked remote,
one might resume at the point the other had gotten to, but both will then
redundantly upload. As before.

In the non-resume case, this adds one hasKey call per storeKey, and only
if the remote is configured to use chunks. Future work: Try to eliminate that
hasKey. Notice that eg, `git annex copy --to` checks if the key is present
before sending it, so is already running hasKey.. which could perhaps
be cached and reused.

However, this additional overhead is not very large compared with
transferring an entire large file, and the ability to resume
is certainly worth it. There is an optimisation in place for small files,
that avoids trying to resume if the whole file fits within one chunk.

This commit was sponsored by Georg Bauer.
2014-07-28 14:35:52 -04:00
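
A sketch of the resume probe described above (ChunkKey and the helper name are illustrative): check the remote for each chunk key in order and start uploading at the first one that is missing. In the non-resume case the first probe fails immediately, which is the single extra hasKey call mentioned.

    type ChunkKey = String  -- stand-in for a chunked Key

    -- Sketch only: given the remote's presence check and the chunk keys in
    -- order, return the suffix of chunks that still needs to be uploaded.
    chunksToUpload :: (ChunkKey -> IO Bool) -> [ChunkKey] -> IO [ChunkKey]
    chunksToUpload hasKey = go
      where
        go [] = return []
        go (k:ks) = do
            present <- hasKey k
            if present
                then go ks           -- already stored; resume past it
                else return (k:ks)   -- first missing chunk: upload from here
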
Joey Hess
153ace4524 fix handling of removal of keys that are not present 2014-07-28 14:14:01 -04:00
Joey Hess
9d4a766cd7 resume interrupted chunked downloads
Leverage the new chunked remotes to automatically resume downloads.
Sort of like rsync, although of course not as efficient since this
needs to start at a chunk boundary.

But, unlike rsync, this method will work for S3, WebDAV, external
special remotes, etc, etc. Only directory special remotes so far,
but many more soon!

This implementation will also properly handle starting a download
from one remote, interrupting, and resuming from another one, and so on.

(Resuming interrupted chunked uploads is similarly doable, although
slightly more expensive.)

This commit was sponsored by Thomas Djärv.
2014-07-27 18:56:32 -04:00
Joey Hess
13bbb61a51 add key stability checking interface
Needed for resuming from chunks.

Url keys are considered not stable. I considered treating url keys with a
known size as stable, but just don't feel that is enough information.
2014-07-27 12:33:46 -04:00
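
A minimal sketch of what "stable" means here (types are illustrative; the real interface is part of git-annex's backends): chunk-based resume is only safe when a key's content can never change.

    -- Simplified key varieties, for illustration only.
    data KeyVariety = ChecksumKey | UrlKey

    isStableKey :: KeyVariety -> Bool
    isStableKey ChecksumKey = True   -- content-addressed, so the content is fixed
    isStableKey UrlKey      = False  -- what the URL serves may change, even at a known size
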
Joey Hess
904859d676 wording 2014-07-26 13:25:06 -04:00
Joey Hess
8f93982df6 use same hash directories for chunked key as are used for its parent
This avoids a proliferation of hash directories when using new-style
chunking, and should improve performance since chunks are accessed
in sequence and so should have a common locality.

Of course, when a chunked key is encrypted, its hash directories have no
relation to the parent key.

This commit was sponsored by Christian Kellermann.
2014-07-25 16:09:23 -04:00
Joey Hess
d751591ac8 add chunk metadata to Key
Added new fields for chunk number, and chunk size. These will not appear
in normal keys ever, but will be used for chunked data stored on special
remotes.

This commit was sponsored by Jouni K Seppanen.
2014-07-24 13:36:23 -04:00
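
A sketch of the added fields (the record below is illustrative, not git-annex's actual Key definition): chunk size and chunk number stay Nothing for normal keys and are only populated for chunked data on special remotes.

    data Key = Key
        { keyName      :: String
        , keyBackend   :: String
        , keySize      :: Maybe Integer
        , keyChunkSize :: Maybe Integer  -- new: Nothing for normal keys
        , keyChunkNum  :: Maybe Integer  -- new: Nothing for normal keys
        } deriving (Show, Eq)
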
Joey Hess
9d71903c2f migrate: Avoid re-checksumming when migrating from hashE to hash backend. 2014-07-10 17:06:04 -04:00
Fraser Tweedale
4eb72392b4 execute remote.<name>.annex-shell on remote, if set
It is useful to be able to specify an alternative git-annex-shell
program to execute on the remote, e.g., to run a version not on the
PATH.  Use remote.<name>.annex-shell if specified, instead of the
default "git-annex-shell", i.e., the first so-named executable on the
PATH.
2014-05-16 15:46:43 -04:00
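
A sketch of the lookup (representing the git config as a plain Map purely for illustration): prefer remote.<name>.annex-shell when set, otherwise fall back to "git-annex-shell" from the remote's PATH.

    import qualified Data.Map as M
    import Data.Maybe (fromMaybe)

    shellProgramFor :: M.Map String String -> String -> String
    shellProgramFor gitConfig remoteName =
        fromMaybe "git-annex-shell" $
            M.lookup ("remote." ++ remoteName ++ ".annex-shell") gitConfig
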
Robie Basak
4184566627 ddar special remote 2014-05-15 16:32:44 -04:00
Joey Hess
ac98853f05 add CredPair cache
Note that this does not yet use SecureMem. It would probably make sense for
the Password part of a CredPair to use SecureMem, and making that change
is better than passing in a String and having it converted to SecureMem in
this code.
2014-04-29 18:08:02 -04:00
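
A sketch of such a cache (types and names are illustrative; the real cache lives in git-annex's Annex state, and as noted above the password part should eventually move to SecureMem):

    import Control.Concurrent.MVar
    import qualified Data.Map as M

    type Login    = String
    type Password = String           -- plain String for now; see SecureMem note above
    type CredPair = (Login, Password)

    newCredCache :: IO (MVar (M.Map String CredPair))
    newCredCache = newMVar M.empty

    cacheCred :: MVar (M.Map String CredPair) -> String -> CredPair -> IO ()
    cacheCred cache storage cred = modifyMVar_ cache (return . M.insert storage cred)

    getCachedCred :: MVar (M.Map String CredPair) -> String -> IO (Maybe CredPair)
    getCachedCred cache storage = M.lookup storage <$> readMVar cache
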
Joey Hess
915d038bec reinit: New command that can initialize a new repository using the configuration of a previously known repository. Useful if a repository got deleted and you want to clone it back the way it was. 2014-04-15 20:13:35 -04:00
Joey Hess
fe19e15040 reorg matcher types; no non-type code changes 2014-03-29 14:43:34 -04:00
Joey Hess
5c79fa0351 avoid generating arbitrary MetaData with illegal fields 2014-03-26 16:40:52 -04:00
Joey Hess
e426fac273 add desktop notifications
Motivation: Hook scripts for nautilus or other file managers
need to provide the user with feedback that a file is being downloaded.

This commit was sponsored by THM Schoemaker.
2014-03-22 14:12:19 -04:00
Joey Hess
2f52f727c0 hilarious typo 2014-03-18 19:03:35 -04:00
Joey Hess
caa97d1271 For each metadata field, there's now an automatically maintained "$field-lastchanged" field that gives the timestamp of the last change to that field.
Note that this is a nearly entirely free feature. The data was already
stored in the metadata log in an easily accessible way, and already was
parsed to a time when parsing the log. The generation of the metadata
fields may even be done lazily, although probably not entirely (the map
has to be evaluated when queried).
2014-03-18 18:55:43 -04:00
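
A sketch of the derivation (representing the parsed log as a Map only for illustration): since each field's last change time is already known after parsing, the companion fields are essentially a key rename.

    import qualified Data.Map as M
    import Data.Time.Clock.POSIX (POSIXTime)

    -- Sketch: turn { "tag" -> t1, "year" -> t2 } into
    -- { "tag-lastchanged" -> t1, "year-lastchanged" -> t2 }.
    lastChangedFields :: M.Map String POSIXTime -> M.Map String POSIXTime
    lastChangedFields = M.mapKeys (++ "-lastchanged")
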
Joey Hess
6a4dd42328 finish wiring up groupwanted 2014-03-15 17:08:55 -04:00
Joey Hess
417aea25be vicfg: Allows editing preferred content expressions for groups.
This is stored in the git-annex branch, but not yet actually hooked up and
used.
2014-03-15 16:17:01 -04:00
Joey Hess
8e2997aa69 only run sshCleanup when the command actually used ssh connection caching
Optimises query commands that do not. More importantly, avoids any ssh
connection cleanup delay causing problems at the end of such commands.
2014-03-13 19:30:13 -04:00
Joey Hess
b63276309e clean up cleanup action enumeration 2014-03-13 19:06:26 -04:00
Joey Hess
a3fe8270ca annex.startupscan can be set to false to disable the assistant's startup scan. 2014-03-05 17:44:14 -04:00
Joey Hess
7f9a0c153b thought of another way to break prop_idempotent_key_decode 2014-03-05 00:23:22 -04:00
Joey Hess
e8ab82390e quickcheck says: "a-s--a" is not a legal key filename
Found this in a failed armhf build log, where quickcheck found a way to break
prop_idempotent_key_decode. The "s" indicates size, but since nothing comes
after it, that's not valid. When encoding the resulting key, no size was
present, so it encoded to "a--a".

Also, "a-sX--a" is not legal, since X is not a number. Not found by
quickcheck.
2014-03-05 00:10:11 -04:00
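
The property in question is a decode/encode round trip; a generic sketch of its shape (the real property and key parser live in git-annex's test suite and Types.Key):

    -- Sketch: any string the decoder accepts must re-encode to exactly the
    -- same string, so inputs like "a-s--a" (size field with no number) have
    -- to be rejected rather than silently re-encoded as "a--a".
    prop_idempotent_decode :: (String -> Maybe key) -> (key -> String) -> String -> Bool
    prop_idempotent_decode decode encode s =
        case decode s of
            Nothing -> True          -- not a legal key serialisation
            Just k  -> encode k == s -- legal ones must round-trip
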
Joey Hess
d0fce426c4 pre-commit-annex hook script to automatically extract metadata from lots of types of files
Using the extract(1) program to do the heavy lifting.

Decided to make git-annex run pre-commit-annex when committing. Since
git-annex pre-commit also runs it, it'll be run when git commit is run too,
via the pre-commit hook. This basically gives back the pre-commit hook
that git-annex took away. The implementation avoids repeatedly looking
for the hook script when the assistant is running and committing
repeatedly; it only checks once whether the hook is available.

To make the script simpler, made git-annex metadata -s field?=value
only set a field when it's not already got a value.

This commit was sponsored by bak.
2014-03-02 20:11:58 -04:00
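
A sketch of the "field?=value" rule (modelling the metadata as a Map only for illustration): the new value is kept only when the field is currently unset.

    import qualified Data.Map as M

    -- Sketch: insertWith's combining function receives (new, old) and keeps
    -- the old value, so an existing value is never overwritten.
    setIfUnset :: String -> String -> M.Map String String -> M.Map String String
    setIfUnset field value = M.insertWith (\_new old -> old) field value
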
Joey Hess
7d9486a709 vadd: Allow listing multiple desired values for a field. 2014-03-02 15:36:45 -04:00
Joey Hess
c2e8c21ca6 view, vfilter: Add support for filtering tags and values out of a view, using !tag and field!=value.
Note that negated globs are not supported. Adding them without changing the
data type serialization in a non-backwards-compatible way would have
complicated the code.

This commit was sponsored by Denver Gingerich.
2014-03-02 14:53:19 -04:00
Joey Hess
6a355686ff annex.listen can be configured, instead of using --listen 2014-03-01 00:31:17 -04:00
Joey Hess
6469c1aca9 webapp: Don't list the public repository group when editing a git repository; it only makes sense for special remotes. 2014-02-28 20:37:03 -04:00
Joey Hess
06e9080f01 metadata: Field names are now case insensitive. 2014-02-25 18:45:09 -04:00
Joey Hess
b437787eee metadata: Field names limited to alphanumerics and a few whitelisted punctuation characters to avoid issues with views, etc. 2014-02-23 13:34:59 -04:00
Joey Hess
7498c5dd96 annex.genmetadata can be set to make git-annex automatically set metadata (year and month) when adding files 2014-02-23 00:08:29 -04:00
Joey Hess
1435c4f149 factor out new module 2014-02-22 13:35:50 -04:00
Joey Hess
dd7b99c860 add tip about metadata driven views (and more flexible view filtering)
While writing this documentation, I realized that there needed to be a way
to stay in a view like tag=* while adding a filter like tag=work that
applies to the same field.

So, there are really two ways a view can be refined. It can have a new
"field=explicitvalue" filter added to it, which does not change the
"shape" of the view, but narrows the files it shows.
Or, it can have a new view added, which adds another level of
subdirectories.

So, added a vfilter command, which takes explicit values to add to the
filter, and rejects changes that would change the shape of the view.

And, made vadd only accept changes that change the shape of the view.

And, changed the View data type slightly; now components that can match
multiple metadata values can be visible, or not visible.

This commit was sponsored by Stelian Iancu.
2014-02-19 16:29:56 -04:00
Joey Hess
e7672f197e new section for metadata 2014-02-19 14:55:34 -04:00
Joey Hess
39ebfa1a2e pre-commit: Update metadata when committing changes to annexed files within a view.
So the user can now switch to a view and then move files around within it
to manage metadata. For example, moving a file into a new directory
when in the tags=* view adds a tag to it.

Implementation is fairly efficient. One diff-index, which is no more
expensive than the first stage of a git commit, followed by possibly
some cat-file --batch traffic to find the key (when deleting a file).
Very similar to what's done in direct mode when committing. And like
direct mode when updating the WC after a merge, it has to buffer the
diff-tree values in order to make 2 passes over them.

When not in a view, pre-commit now does one extra git symbolic-ref,
which is tiny overhead.

This commit was sponsored by Andrew Eskridge.
2014-02-19 14:17:58 -04:00