Merge branch 'master' of ssh://git-annex.branchable.com

This commit is contained in:
Joey Hess 2013-07-19 18:57:08 -04:00
commit 442418d845
5 changed files with 110 additions and 0 deletions

View file

@@ -0,0 +1,12 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawkptNW1PzrVjYlJWP_9e499uH0mjnBV6GQ"
nickname="Christian"
subject="comment 4"
date="2013-07-19T19:00:23Z"
content="""
So it was probably not just an assumption. After fixing the filenames with a script from
http://askubuntu.com/questions/113188/character-encoding-problem-with-filenames-find-broken-filenames
the watcher thread survived, and the assistant is syncing just fine.
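In case it helps anyone else: the usual fix for this kind of broken encoding is to re-encode the file names in place with convmv. A minimal sketch (assuming the broken names are latin1; adjust -f to whatever encoding they actually are):
<pre><code>
# convmv only previews changes by default; --notest makes it actually rename
convmv -f latin1 -t utf-8 -r --notest /path/to/annex
</code></pre>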
"""]]

View file

@@ -0,0 +1,27 @@
[[!comment format=mdwn
username="GLITTAH"
ip="37.130.227.133"
subject="comment 5"
date="2013-07-19T21:23:35Z"
content="""
Disclaimer: I'm thinking out loud of what could make git-annex even more awesome. I don't expect this to be implemented any time soon. Please pardon any dumbassery.
Having your remotes (optionally!) act like a swarm would be an awesome feature, because it brings in a lot of new ways to optimize storage, bandwidth, and overall traffic usage. It would be much easier to implement if it were broken into small steps, each of which adds a nifty feature. The best part is that each of these steps could be implemented on its own, and they're all features that would be really useful.
Step 1. Concurrent downloads of a file from multiple remotes.
This would make sense to have: it saves upload traffic on your remotes, and you also get faster download speeds on the receiving end.
Step 2. Implementing part of the super-seeding capabilities.
You upload pieces of a file to different remotes from your laptop, and on your desktop you can download all those pieces and put them back together to get the complete file. If you *really* wanted to get fancy, you could build in redundancy (à la RAID), so that if a remote or two gets lost, you don't lose the entire file (see the rough sketch at the end of this comment). This would be a very efficient use of storage if you have a bunch of free cloud storage accounts (~1GB each) and some big files you want to back up.
Step 3. Setting it up so that those remotes can talk to one another and share those pieces.
This is where it gets more like bittorrent. It's useful because you upload one copy and, a few hours later, have, say, 5 complete copies spread across your remotes. You could add or remove remotes from a swarm locally and push those changes out to the remotes, which would then adapt themselves to the new rules and share them with the other remotes in the swarm (the rules should be GPG-signed as a safety precaution). Also, if/when deltas get implemented, you could push a delta to the swarm and have all the remotes adopt it; this is cooler than regular bittorrent because the shared file can be updated. Again as a safety precaution, the delta could be GPG-signed so that a corrupt file doesn't contaminate the entire swarm. Each remote could have bandwidth/storage limits set in a dotfile.
This is a high-level idea of how it might work, and it's also a HUGE set of features to add, but if implemented, you'd be saving a ton of resources, adding new use cases, and making git-annex more flexible.
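To make step 2 a bit more concrete, here's a rough manual sketch of the idea using stock tools (split and par2); none of this is anything git-annex does today, and the file names are made up:
<pre><code>
# purely illustrative: chunk a big file and add ~30% parity so a lost chunk is recoverable
split -b 100M bigfile.iso bigfile.part.
par2 create -r30 bigfile.par2 bigfile.part.*
# spread different chunks (plus the .par2 files) across different remotes;
# to restore: fetch everything, run 'par2 repair bigfile.par2' if pieces are missing,
# then 'cat bigfile.part.* > bigfile.iso'
</code></pre>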
"""]]

View file

@@ -0,0 +1,10 @@
[[!comment format=mdwn
username="GLITTAH"
ip="37.130.227.133"
subject="comment 6"
date="2013-07-19T21:28:58Z"
content="""
Obviously, step 3 would only work on remotes where you can run processes, but if those remotes were given login credentials to cloud storage (potentially dangerous!), they could read and write to something like Dropbox or an rsync host.
Another thing: this would be completely trackerless. You just use remote groups (or create swarm definitions) and share those with your remotes. It's completely decentralized!
"""]]

View file

@@ -0,0 +1,21 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawln3ckqKx0x_xDZMYwa9Q1bn4I06oWjkog"
nickname="Michael"
subject="git annex merge driver?"
date="2013-07-19T17:04:53Z"
content="""
I've tried rebasing the git-annex branch, and I hit a bunch of conflicts (both in uuid.log and in the logs for individual content files) of the form:
<pre><code>
<<<<<<< HEAD
1369615760.859476s 1 016d9095-0cbc-4734-a498-4e0421e257d7
=======
1369615760.845334s 1 016d9095-0cbc-4734-a498-4e0421e257d7
>>>>>>> 52e60e8... update
1369615359.195672s 1 38c359dc-a7d9-498d-a818-2e9beae995b8
</code></pre>
As I understand it, git-annex has a special timestamp-based merge driver to deal with these. Is there a way to use it with git rebase?
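(I know git ships a built-in union merge driver that could be pointed at these files via gitattributes, something like the sketch below, but that just keeps the lines from both sides and can leave duplicates, so I'm not sure it does the right thing here.)
<pre><code>
# .git/info/attributes (applies only locally, nothing gets committed)
*.log merge=union
</code></pre>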
"""]]

View file

@@ -0,0 +1,40 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawln3ckqKx0x_xDZMYwa9Q1bn4I06oWjkog"
nickname="Michael"
subject="git checkout --orphan"
date="2013-07-19T17:49:37Z"
content="""
Instead of rebase, --orphan seems to be the right answer for pruning history: create new orphan branches, then git add and commit the files. So:
<pre><code>
git status
# verify there are no uncommitted or untracked files
# master branch
git branch -m old-master
git checkout --orphan master
git add .
git commit -m 'first commit'
# git annex branch
git branch -m git-annex old-git-annex
git checkout old-git-annex
git checkout --orphan git-annex
git add .
git commit -m 'first commit'
git checkout master
# at this point, you may want to double-check that everything is still OK
# finally, remove branches and clean up the objects:
git branch -D old-master old-git-annex
git reflog expire --expire=now --all
git prune
git gc
</code></pre>
The repo remains functional and .git is smaller.
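A quick way to confirm the shrinkage is to run git count-objects -v before and after the cleanup and compare the size-pack numbers:
<pre><code>
git count-objects -v
</code></pre>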
"""]]