Merge branch 'master' of ssh://git-annex.branchable.com
commit 33d79d84ff
7 changed files with 153 additions and 0 deletions

@@ -0,0 +1,11 @@
### Please describe the problem.

The S3 special remote assumes that you want to use HTTPS iff your service runs on port 443. This isn't true for most minio deployments.

### What steps will reproduce the problem?

Attempt to add an S3 service using HTTPS and a port != 443 as a special remote.
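
A minimal way to reproduce (a sketch; the host, port, and bucket values are placeholders, not taken from an actual deployment):

```
git annex initremote minio type=S3 encryption=none \
    host=minio.example.com port=9000 bucket=mybucket
# port is not 443, so git-annex assumes plain HTTP,
# even if the server only speaks HTTPS on that port
```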

### What version of git-annex are you using? On what operating system?

Version 7.20181121 on MacOS and 7.20190130-g024120065 on FreeBSD

### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

Yes. I only encountered the problem because git-annex works well enough for me that I want to put a lot more data into it.

@@ -0,0 +1,8 @@
[[!comment format=mdwn
 username="maryjil2596"
 avatar="http://cdn.libravatar.org/avatar/2ce6b78d907f10b244c92330a4f0bd00"
 subject="Epson Printer Error Code 0x9d"
 date="2019-03-06T07:44:26Z"
 content="""
Epson is an incredible product. However, there is one thing on my mind, so I want to share an issue with my printer. Whenever I try to print a file, an error message shows up on my screen stating \"Error Code 0x9d\". Can anyone guide me on <a href=\"https://errorcode0x.com/fix-epson-printer-error-code-0x9d/\">How To Recover the Epson Printer Error Code 0x9d issue?</a> I have tried different methods to solve it but failed.
"""]]

@@ -0,0 +1,10 @@
[[!comment format=mdwn
 username="501st_alpha1"
 avatar="http://cdn.libravatar.org/avatar/b6fde94dbf127b822f7b6109399d50c9"
 subject="Figured out how to sync with Keybase"
 date="2019-03-07T06:52:25Z"
 content="""
While it doesn't \"just work\", I was able to get a solution set up that allows me to use a Keybase encrypted Git repo as a remote. I added the encrypted remote (with URL e.g. `keybase://private/<user>/<repo>.git`). A plain `git annex sync` worked, since that just syncs the normal Git branches. When I tried to do a sync with `--content`, it failed with `unable to check keybase`.

My current workaround is to add a special remote that points to KBFS, e.g. `git annex initremote keybase-rsync type=rsync directory=/keybase/private/<user>/git-annex-files/<repo>/ encryption=none`. I originally tried a `directory` special remote, but when I did `git annex sync --content keybase-directory`, it worked for a while, but I started getting `rename: interrupted (Interrupted system call)` and similar errors. Switching to an rsync remote fixed the errors. I added a script [here](https://github.com/501st-alpha1/scott-script/blob/eba2827ebc1b61fe6b0c2fb2acc9b8cf6641465c/git-annex-add-keybase) to automate that plus a few other checks.
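
Putting the steps above together (a sketch; `<user>` and `<repo>` are placeholders):

```
# the encrypted remote handles the git branches
git remote add keybase keybase://private/<user>/<repo>.git
git annex sync keybase

# annexed content goes through the rsync special remote on KBFS
git annex initremote keybase-rsync type=rsync directory=/keybase/private/<user>/git-annex-files/<repo>/ encryption=none
git annex sync --content keybase-rsync
```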

Partway through the sync, I ran into an issue where it would hang immediately after sending a file to KBFS. As documented [here](https://github.com/keybase/client/issues/16467), running `run_keybase` to restart all the Keybase services fixed the issue for me.
"""]]

@@ -0,0 +1,24 @@
[[!comment format=mdwn
 username="anarcat"
 avatar="http://cdn.libravatar.org/avatar/4ad594c1e13211c1ad9edb81ce5110b7"
 subject="parallelizing checksum and get"
 date="2019-03-07T18:21:22Z"
 content="""
one thing I would definitely like to see parallelized is CPU and network. right now `git annex get` will:

1. download file A
2. checksum file A
3. download file B
4. checksum file B

... serially. If parallelism (`-J2`) is enabled, the following happens, assuming files are roughly the same size:

1. download file A and B
2. checksum file A and B

This is not much of an improvement... We can get away with maximizing the bandwidth usage *if* file transfers are somewhat interleaved (because of size differences), but the above degenerate case actually happens quite often. The alternative (`-J3` or more) might just download more files in parallel, which is not optimal.

So could we at least batch the checksum jobs separately from downloads? This would already be an improvement: it would maximize resource usage while at the same time reducing total transfer time.
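
To make the idea concrete in plain shell (a sketch only; `download` is a stand-in for the transfer step, not a real command):

```
download fileA           # network busy, CPU idle
sha256sum fileA &        # checksum A in the background...
download fileB           # ...while the network already starts on B
wait
sha256sum fileB
```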

Thanks! :)
"""]]

@@ -0,0 +1,8 @@
[[!comment format=mdwn
 username="anarcat"
 avatar="http://cdn.libravatar.org/avatar/4ad594c1e13211c1ad9edb81ce5110b7"
 subject="or -c annex.verify=false"
 date="2019-03-07T18:23:02Z"
 content="""
oh... i guess i can use `-c annex.verify=false` to work around that problem as well... but that's kind of obscure, really. :)
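
For example, to skip the post-transfer checksum for one run (trading verification for speed):

```
git annex get -c annex.verify=false .
```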
"""]]

@@ -0,0 +1,16 @@
[[!comment format=mdwn
 username="anarcat"
 avatar="http://cdn.libravatar.org/avatar/4ad594c1e13211c1ad9edb81ce5110b7"
 subject="comment 1"
 date="2019-03-07T20:15:10Z"
 content="""
i have had many problems trying this on an ntfs filesystem. the idea was to share files with a friend using a Mac (we're desperate) and to have a partial checkout that only showed the files that were present.

first, `git annex upgrade --version=7` doesn't work - i don't know when or if [[git-annex-upgrade]] ever supported that option.

then `git annex sync --content some_file some_directory --no-push --no-pull` doesn't work either: this will tell you that `some_file` is not a remote, because that's the argument git-annex expects to `sync`. I tried the `-C` (`--content-of`) option, but it doesn't work on missing files:

    git-annex: /media/anarcat/red-rhl/video/tv/directory/missing-file.mkv not found

note that this is the local repository path, not the remote one. `missing-file.mkv` *is* present on the remote, but is totally missing locally. I have no idea how I can fetch that file, even in unlocked mode; it's really strange... -- [[anarcat]]
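
(for reference, the shape `sync` does expect - remotes as positional arguments, paths only via `--content-of`; `some_directory` is a placeholder here:)

```
git annex sync --content-of=some_directory --no-push --no-pull
```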
"""]]

@@ -0,0 +1,76 @@
[[!comment format=mdwn
 username="yarikoptic"
 avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
 subject="more details on coreutils cp treatment of reflink"
 date="2019-03-06T16:00:35Z"
 content="""
> git-annex looks at the file's stat() and only if the device id is the same

<details>
<summary>They are indeed not the same across subvolumes of the same BTRFS file system</summary>

```
$> time cp --reflink=auto home/yoh/reprotraining.ova scrap/tmp
cp --reflink=auto home/yoh/reprotraining.ova scrap/tmp  0.00s user 0.00s system 92% cpu 0.004 total

$> stat home/yoh/reprotraining.ova scrap/tmp/reprotraining.ova
  File: home/yoh/reprotraining.ova
  Size: 5081213952      Blocks: 9924248    IO Block: 4096   regular file
Device: 2fh/47d         Inode: 61771704    Links: 1
Access: (0600/-rw-------)  Uid: (47521/     yoh)   Gid: (47522/     yoh)
Access: 2018-06-14 19:23:25.000000000 -0400
Modify: 2018-06-11 15:35:57.000000000 -0400
Change: 2018-06-14 19:23:25.891351983 -0400
 Birth: -
  File: scrap/tmp/reprotraining.ova
  Size: 5081213952      Blocks: 9924248    IO Block: 4096   regular file
Device: 30h/48d         Inode: 190040764   Links: 1
Access: (0600/-rw-------)  Uid: (47521/     yoh)   Gid: (47522/     yoh)
Access: 2019-03-06 10:38:02.610657786 -0500
Modify: 2019-03-06 10:38:02.610657786 -0500
Change: 2019-03-06 10:38:02.610657786 -0500
 Birth: -
```
</details>
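
(a quicker way to see just the device-id mismatch the heuristic trips over - `%D` is GNU stat's device number in hex; paths and values as in the transcript above:)

```
$> stat -c '%D %n' home/yoh/reprotraining.ova scrap/tmp/reprotraining.ova
2f home/yoh/reprotraining.ova
30 scrap/tmp/reprotraining.ova
```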

`cp` seems to just attempt a cheap clone

```c
/* Perform the O(1) btrfs clone operation, if possible.
   Upon success, return 0.  Otherwise, return -1 and set errno.  */
static inline int
clone_file (int dest_fd, int src_fd)
{
#ifdef FICLONE
  return ioctl (dest_fd, FICLONE, src_fd);
#else
  (void) dest_fd;
  (void) src_fd;
  errno = ENOTSUP;
  return -1;
#endif
}
```

and if that one fails, assumes that a full copy is required:

```c
/* --attributes-only overrides --reflink. */
if (data_copy_required && x->reflink_mode)
  {
    bool clone_ok = clone_file (dest_desc, source_desc) == 0;
    if (clone_ok || x->reflink_mode == REFLINK_ALWAYS)
      {
        if (!clone_ok)
          {
            error (0, errno, _(\"failed to clone %s from %s\"),
                   quoteaf_n (0, dst_name), quoteaf_n (1, src_name));
            return_val = false;
            goto close_src_and_dst_desc;
          }
        data_copy_required = false;
      }
  }
```

BTW, why `rsync` instead of a regular `cp` on a local filesystem when the copy is across devices?
"""]]