From adf58b96367fe38aa9a641ff263648109fedba5d Mon Sep 17 00:00:00 2001
From: "https://www.google.com/accounts/o8/id?id=AItOawnG-DZQa3d3Jn7K2q36TlbmZ8v2YuV-23M"
Date: Mon, 2 Mar 2015 10:03:57 +0000
Subject: [PATCH]

---
 ...rent_machines_that_already_have_all_the_data.mdwn | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
 create mode 100644 doc/forum/Create_a_new_set_of_clones_in_different_machines_that_already_have_all_the_data.mdwn

diff --git a/doc/forum/Create_a_new_set_of_clones_in_different_machines_that_already_have_all_the_data.mdwn b/doc/forum/Create_a_new_set_of_clones_in_different_machines_that_already_have_all_the_data.mdwn
new file mode 100644
index 0000000000..3336b18a5e
--- /dev/null
+++ b/doc/forum/Create_a_new_set_of_clones_in_different_machines_that_already_have_all_the_data.mdwn
@@ -0,0 +1,23 @@
+Hello.
+
+I've been looking for a while, but I can't find any documentation for my use case.
+
+Currently I have several TiB of data synchronized across different geographic locations with BitTorrent Sync.
+
+I want to switch to git-annex for this purpose, but given the amount of data and the relatively slow pipes between the boxes, a standard init-clone-get flow would take weeks to complete.
+
+Is there a way to create the clones separately on each box, so that data which is already in place does not have to be redistributed?
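+
+To make the question concrete, here is the kind of flow I am imagining; this is only a guess, and the remote URL, repository name, and file paths below are made up:
+
+    # on each box that already holds the data; cloning transfers only the
+    # git metadata, which is small compared to the annexed content
+    git clone ssh://central.example.com/srv/annex.git annex
+    cd annex
+    git annex init "this-box"
+    # hand an already-synced file to git-annex as the content of the
+    # corresponding annexed file, instead of fetching it over the network
+    git annex reinject /btsync/photos/img001.jpg photos/img001.jpg
+
+Thanks in advance,
+Fer.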