From 6f74b090adc8fd0567034aa74be8c493fb78f89f Mon Sep 17 00:00:00 2001
From: "chkno@50332f55d5ef2f4b7c6bec5253b853a8f2dc770e"
Date: Fri, 27 Dec 2019 00:47:46 +0000
Subject: [PATCH]

---
 doc/forum/Balanced_Parity.mdwn | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/forum/Balanced_Parity.mdwn b/doc/forum/Balanced_Parity.mdwn
index a6437807e1..e2c9af4b63 100644
--- a/doc/forum/Balanced_Parity.mdwn
+++ b/doc/forum/Balanced_Parity.mdwn
@@ -5,7 +5,7 @@ For example, suppose I wish to store N = 1000 GB on k = 10 servers, each with 15
 
 We'll set the goal of being able to lose r = 3 servers without losing any data (which would reduce our total storage capacity to 7 * 150 GB = 1050 GB).
 
-This can be done by thinking of our files in groups of seven (k - r) and using parchive2 or similar to create 3 (k) parity files for each set:
+This can be done by thinking of our files in groups of seven (k - r) and using parchive2 or similar to create 3 (r) parity files for each set:
 
 ```
 Parity group 000: D000 D001 D002 D003 D004 D005 D006 P000.0 P000.1 P000.2
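
The arithmetic behind the corrected sentence can be sketched as follows (an illustrative sketch only; parchive2 does the actual parity encoding, and the variable names here are hypothetical):

```python
# Storage arithmetic from the patched paragraph, using its own numbers.
k = 10            # total number of servers
r = 3             # servers we can lose without losing data
per_server = 150  # GB of usable capacity per server

data_per_group = k - r    # 7 data files in each parity group
parity_per_group = r      # 3 parity files per group -- r, not k (the fix)
usable = data_per_group * per_server  # 7 * 150 = 1050 GB total usable

print(data_per_group, parity_per_group, usable)  # prints: 7 3 1050
```

Each parity group therefore holds k files in total (7 data plus 3 parity), matching the `D000..D006 P000.0..P000.2` layout in the diff.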