Ticket #778: tests3.txt

File tests3.txt, 125.6 KB (added by kevan at 2010-05-14T01:48:24Z)
Sat Oct 17 18:30:13 PDT 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.

Fri Oct 30 02:19:08 PDT 2009  "Kevan Carstensen" <kevan@isnotajoke.com>
  * Refactor some behavior into a mixin, and add tests for the behavior described in #778

Tue Nov  3 19:36:02 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter tests to use the new form of set_shareholders

Tue Nov  3 19:42:32 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Minor tweak to an existing test -- make the first server read-write, instead of read-only

Wed Nov  4 03:13:24 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add a test for upload.shares_by_server

Wed Nov  4 03:28:49 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add more tests for comment:53 in ticket #778

Sun Nov  8 16:37:35 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers

Mon Nov 16 11:23:34 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Re-work 'test_upload.py' to be more readable; add more tests for #778

Sun Nov 22 17:20:08 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add tests for the behavior described in #834.

Fri Dec  4 20:34:53 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.

Thu Jan  7 10:13:25 PST 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter various unit tests to work with the new happy behavior

Thu May 13 18:25:42 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Revisions of the #778 tests, per reviewers' comments
 
  - Fix comments and confusing naming.
  - Add tests for the new error messages suggested by David-Sarah
    and Zooko.
  - Alter existing tests for new error messages.
  - Make sure that the tests continue to work with the trunk.
  - Add a test for a mutual disjointedness assertion that I added to
    upload.servers_of_happiness.
  - Fix the comments to correctly reflect read-only status.
  - Add a test for an edge case in should_add_server.
  - Add an assertion to make sure that share redistribution works as it
    should.
  - Alter tests to work with revised servers_of_happiness semantics.
  - Remove tests for should_add_server, since that function no longer exists.
  - Alter tests to know about merge_peers, and to use it before calling
    servers_of_happiness (see the sketch after this list).
  - Add tests for merge_peers.
  - Add Zooko's puzzles to the tests.
  - Edit encoding tests to expect the new kind of failure message.
  - Edit tests to expect error messages with the word "only" moved as far
    to the right as possible.
  - Extend and clean up some helper functions.
  - Change some tests to call more appropriate helper functions.
  - Add a test for the failing redistribution algorithm.
  - Add a test for the progress message.
  - Add a test for the upper bound on readonly peer share discovery.
 
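The revised tests share a calling convention: build a map of the shares
already on the grid, merge in the peers chosen for the current upload, and
only then compute the happiness count. A minimal sketch of that convention
(argument names here are hypothetical; the real signatures live in
src/allmydata/immutable/upload.py):

    # existing_shares: shnum -> peerid for shares already on the grid
    # used_peers: trackers for the peers this upload will write to
    merged = upload.merge_peers(existing_shares, used_peers)
    happiness = upload.servers_of_happiness(merged)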

New patches:

[Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.
Kevan Carstensen <kevan@isnotajoke.com>**20091018013013
 Ignore-this: e12cd7c4ddeb65305c5a7e08df57c754
] {
hunk ./src/allmydata/test/no_network.py 219
             c.setServiceParent(self)
             self.clients.append(c)
 
-    def make_server(self, i):
+    def make_server(self, i, readonly=False):
         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
         serverdir = os.path.join(self.basedir, "servers",
                                  idlib.shortnodeid_b2a(serverid))
hunk ./src/allmydata/test/no_network.py 224
         fileutil.make_dirs(serverdir)
-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats())
+        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
+                           readonly_storage=readonly)
         return ss
 
     def add_server(self, i, ss):
}
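A minimal usage sketch for the new keyword, assuming a grid at self.g as set
up by GridTestMixin (the server number is illustrative):

    # Create a storage server that accepts no new shares, then wire it
    # into the NoNetworkGrid under number 3.
    ss = self.g.make_server(3, readonly=True)
    self.g.add_server(3, ss)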
[Refactor some behavior into a mixin, and add tests for the behavior described in #778
"Kevan Carstensen" <kevan@isnotajoke.com>**20091030091908
 Ignore-this: a6f9797057ca135579b249af3b2b66ac
] {
hunk ./src/allmydata/test/test_upload.py 2
 
-import os
+import os, shutil
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.python.failure import Failure
hunk ./src/allmydata/test/test_upload.py 12
 
 import allmydata # for __full_version__
 from allmydata import uri, monitor, client
-from allmydata.immutable import upload
+from allmydata.immutable import upload, encode
 from allmydata.interfaces import FileTooLargeError, NoSharesError, \
      NotEnoughSharesError
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/test/test_upload.py 20
 from no_network import GridTestMixin
 from common_util import ShouldFailMixin
 from allmydata.storage_client import StorageFarmBroker
+from allmydata.storage.server import storage_index_to_dir
 
 MiB = 1024*1024
 
hunk ./src/allmydata/test/test_upload.py 91
 class ServerError(Exception):
     pass
 
+class SetDEPMixin:
+    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
+        p = {"k": k,
+             "happy": happy,
+             "n": n,
+             "max_segment_size": max_segsize,
+             }
+        self.node.DEFAULT_ENCODING_PARAMETERS = p
+
 class FakeStorageServer:
     def __init__(self, mode):
         self.mode = mode
hunk ./src/allmydata/test/test_upload.py 247
     u = upload.FileHandle(fh, convergence=None)
     return uploader.upload(u)
 
-class GoodServer(unittest.TestCase, ShouldFailMixin):
+class GoodServer(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
     def setUp(self):
         self.node = FakeClient(mode="good")
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 254
         self.u.running = True
         self.u.parent = self.node
 
-    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
-        p = {"k": k,
-             "happy": happy,
-             "n": n,
-             "max_segment_size": max_segsize,
-             }
-        self.node.DEFAULT_ENCODING_PARAMETERS = p
-
    def _check_small(self, newuri, size):
         u = uri.from_string(newuri)
         self.failUnless(isinstance(u, uri.LiteralFileURI))
hunk ./src/allmydata/test/test_upload.py 377
         d.addCallback(self._check_large, SIZE_LARGE)
         return d
 
-class ServerErrors(unittest.TestCase, ShouldFailMixin):
+class ServerErrors(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
     def make_node(self, mode, num_servers=10):
         self.node = FakeClient(mode, num_servers)
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 677
         d.addCallback(_done)
         return d
 
-class EncodingParameters(GridTestMixin, unittest.TestCase):
+class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
+    ShouldFailMixin):
+    def _do_upload_with_broken_servers(self, servers_to_break):
+        """
+        I act like a normal upload, but before I send the results of
+        Tahoe2PeerSelector to the Encoder, I break the first servers_to_break
+        PeerTrackers in the used_peers part of the return result.
+        """
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        broker = self.g.clients[0].storage_broker
+        sh     = self.g.clients[0]._secret_holder
+        data = upload.Data("data" * 10000, convergence="")
+        data.encoding_param_k = 3
+        data.encoding_param_happy = 4
+        data.encoding_param_n = 10
+        uploadable = upload.EncryptAnUploadable(data)
+        encoder = encode.Encoder()
+        encoder.set_encrypted_uploadable(uploadable)
+        status = upload.UploadStatus()
+        selector = upload.Tahoe2PeerSelector("dglev", "test", status)
+        storage_index = encoder.get_param("storage_index")
+        share_size = encoder.get_param("share_size")
+        block_size = encoder.get_param("block_size")
+        num_segments = encoder.get_param("num_segments")
+        d = selector.get_shareholders(broker, sh, storage_index,
+                                      share_size, block_size, num_segments,
+                                      10, 4)
+        def _have_shareholders((used_peers, already_peers)):
+            assert servers_to_break <= len(used_peers)
+            for index in xrange(servers_to_break):
+                server = list(used_peers)[index]
+                for share in server.buckets.keys():
+                    server.buckets[share].abort()
+            buckets = {}
+            for peer in used_peers:
+                buckets.update(peer.buckets)
+            encoder.set_shareholders(buckets)
+            d = encoder.start()
+            return d
+        d.addCallback(_have_shareholders)
+        return d
+
+    def _add_server_with_share(self, server_number, share_number=None,
+                               readonly=False):
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        assert self.shares, "I tried to find shares at self.shares, but failed"
+        ss = self.g.make_server(server_number, readonly)
+        self.g.add_server(server_number, ss)
+        if share_number:
+            # Copy share i from the directory associated with the first
+            # storage server to the directory associated with this one.
+            old_share_location = self.shares[share_number][2]
+            new_share_location = os.path.join(ss.storedir, "shares")
+            si = uri.from_string(self.uri).get_storage_index()
+            new_share_location = os.path.join(new_share_location,
+                                              storage_index_to_dir(si))
+            if not os.path.exists(new_share_location):
+                os.makedirs(new_share_location)
+            new_share_location = os.path.join(new_share_location,
+                                              str(share_number))
+            shutil.copy(old_share_location, new_share_location)
+            shares = self.find_shares(self.uri)
+            # Make sure that the storage server has the share.
+            self.failUnless((share_number, ss.my_nodeid, new_share_location)
+                            in shares)
+
+    def _setup_and_upload(self):
+        """
+        I set up a NoNetworkGrid with a single server and client,
+        upload a file to it, store its uri in self.uri, and store its
+        sharedata in self.shares.
+        """
+        self.set_up_grid(num_clients=1, num_servers=1)
+        client = self.g.clients[0]
+        client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
+        data = upload.Data("data" * 10000, convergence="")
+        self.data = data
+        d = client.upload(data)
+        def _store_uri(ur):
+            self.uri = ur.uri
+        d.addCallback(_store_uri)
+        d.addCallback(lambda ign:
+            self.find_shares(self.uri))
+        def _store_shares(shares):
+            self.shares = shares
+        d.addCallback(_store_shares)
+        return d
+
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
hunk ./src/allmydata/test/test_upload.py 784
         d.addCallback(_check)
         return d
 
+    def _setUp(self, ns):
+        # Used by test_happy_semantics and test_prexisting_share_behavior
+        # to set up the grid.
+        self.node = FakeClient(mode="good", num_servers=ns)
+        self.u = upload.Uploader()
+        self.u.running = True
+        self.u.parent = self.node
+
+    def test_happy_semantics(self):
+        self._setUp(2)
+        DATA = upload.Data("kittens" * 10000, convergence="")
+        # These parameters are unsatisfiable with the client that we've made
+        # -- we'll use them to test that the semantics work correctly.
+        self.set_encoding_parameters(k=3, happy=5, n=10)
+        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 2 servers "
+                            "(5 were requested)",
+                            self.u.upload, DATA)
+        # Let's reset the client to have 10 servers
+        d.addCallback(lambda ign:
+            self._setUp(10))
+        # These parameters are satisfiable with the client we've made.
+        d.addCallback(lambda ign:
+            self.set_encoding_parameters(k=3, happy=5, n=10))
+        # this should work
+        d.addCallback(lambda ign:
+            self.u.upload(DATA))
+        # Let's reset the client to have 7 servers
+        # (this is less than n, but more than h)
+        d.addCallback(lambda ign:
+            self._setUp(7))
+        # These encoding parameters should still be satisfiable with our
+        # client setup
+        d.addCallback(lambda ign:
+            self.set_encoding_parameters(k=3, happy=5, n=10))
+        # This, then, should work.
+        d.addCallback(lambda ign:
+            self.u.upload(DATA))
+        return d
+
+    def test_problem_layouts(self):
+        self.basedir = self.mktemp()
+        # This scenario is at
+        # http://allmydata.org/trac/tahoe/ticket/778#comment:52
+        #
+        # The scenario in comment:52 proposes that we have a layout
+        # like:
+        # server 1: share 1
+        # server 2: share 1
+        # server 3: share 1
+        # server 4: shares 2 - 10
+        # To get access to the shares, we will first upload to one
+        # server, which will then have shares 1 - 10. We'll then
+        # add three new servers, configure them to not accept any new
+        # shares, then write share 1 directly into the serverdir of each.
+        # Then each of servers 1 - 3 will report that they have share 1,
+        # and will not accept any new share, while server 4 will report that
+        # it has shares 2 - 10 and will accept new shares.
+        # We'll then set 'happy' = 4, and see that an upload fails
+        # (as it should)
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(1, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(2, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(3, 0, True))
+        # Remove the first share from server 0.
+        def _remove_share_0():
+            share_location = self.shares[0][2]
+            os.remove(share_location)
+        d.addCallback(lambda ign:
+            _remove_share_0())
+        # Set happy = 4 in the client.
+        def _prepare():
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(lambda ign:
+            _prepare())
+        # Uploading data should fail
+        d.addCallback(lambda client:
+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 1 servers "
+                            "(4 were requested)",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+
+
+        # This scenario is at
+        # http://allmydata.org/trac/tahoe/ticket/778#comment:53
+        #
+        # Set up the grid to have one server
+        def _change_basedir(ign):
+            self.basedir = self.mktemp()
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        # We want to have a layout like this:
+        # server 1: share 1
+        # server 2: share 2
+        # server 3: share 3
+        # server 4: shares 1 - 10
+        # (this is an expansion of Zooko's example because it is easier
+        #  to code, but it will fail in the same way)
+        # To start, we'll create a server with shares 1-10 of the data
+        # we're about to upload.
+        # Next, we'll add three new servers to our NoNetworkGrid. We'll add
+        # one share from our initial upload to each of these.
+        # The counterintuitive ordering of the share numbers is to deal with
+        # the permuting of these servers -- distributing the shares this
+        # way ensures that the Tahoe2PeerSelector sees them in the order
+        # described above.
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1))
+        # So, we now have the following layout:
+        # server 0: shares 1 - 10
+        # server 1: share 0
+        # server 2: share 1
+        # server 3: share 2
+        # We want to change the 'happy' parameter in the client to 4.
+        # We then want to feed the upload process a list of peers that
+        # server 0 is at the front of, so we trigger Zooko's scenario.
+        # Ideally, a reupload of our original data should work.
+        def _reset_encoding_parameters(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        # We need this to get around the fact that the old Data
+        # instance already has a happy parameter set.
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
+    def test_dropped_servers_in_encoder(self):
+        def _set_basedir(ign=None):
+            self.basedir = self.mktemp()
+        _set_basedir()
+        d = self._setup_and_upload();
+        # Add 5 servers, with one share each from the original
+        # Add a readonly server
+        def _do_server_setup(ign):
+            self._add_server_with_share(1, 1, True)
+            self._add_server_with_share(2)
+            self._add_server_with_share(3)
+            self._add_server_with_share(4)
+            self._add_server_with_share(5)
+        d.addCallback(_do_server_setup)
+        # remove the original server
+        # (necessary to ensure that the Tahoe2PeerSelector will distribute
+        #  all the shares)
+        def _remove_server(ign):
+            server = self.g.servers_by_number[0]
+            self.g.remove_server(server.my_nodeid)
+        d.addCallback(_remove_server)
+        # This should succeed.
+        d.addCallback(lambda ign:
+            self._do_upload_with_broken_servers(1))
+        # Now, do the same thing over again, but drop 2 servers instead
+        # of 1. This should fail.
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(_do_server_setup)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self.shouldFail(NotEnoughSharesError,
+                            "test_dropped_server_in_encoder", "",
+                            self._do_upload_with_broken_servers, 2))
+        return d
+
+
+    def test_servers_with_unique_shares(self):
+        # servers_with_unique_shares expects a dict of
+        # shnum => peerid as a preexisting shares argument.
+        test1 = {
+                 1 : "server1",
+                 2 : "server2",
+                 3 : "server3",
+                 4 : "server4"
+                }
+        unique_servers = upload.servers_with_unique_shares(test1)
+        self.failUnlessEqual(4, len(unique_servers))
+        for server in ["server1", "server2", "server3", "server4"]:
+            self.failUnlessIn(server, unique_servers)
+        test1[4] = "server1"
+        # Now there should only be 3 unique servers.
+        unique_servers = upload.servers_with_unique_shares(test1)
+        self.failUnlessEqual(3, len(unique_servers))
+        for server in ["server1", "server2", "server3"]:
+            self.failUnlessIn(server, unique_servers)
+        # servers_with_unique_shares expects a set of PeerTracker
+        # instances as a used_peers argument, but only uses the peerid
+        # instance variable to assess uniqueness. So we feed it some fake
+        # PeerTrackers whose only important characteristic is that they
+        # have peerid set to something.
+        class FakePeerTracker:
+            pass
+        trackers = []
+        for server in ["server5", "server6", "server7", "server8"]:
+            t = FakePeerTracker()
+            t.peerid = server
+            trackers.append(t)
+        # Recall that there are 3 unique servers in test1. Since none of
+        # those overlap with the ones in trackers, we should get 7 back
+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
+        self.failUnlessEqual(7, len(unique_servers))
+        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
+        expected_servers.remove("server4")
+        for server in expected_servers:
+            self.failUnlessIn(server, unique_servers)
+        # Now add an overlapping server to trackers.
+        t = FakePeerTracker()
+        t.peerid = "server1"
+        trackers.append(t)
+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
+        self.failUnlessEqual(7, len(unique_servers))
+        for server in expected_servers:
+            self.failUnlessIn(server, unique_servers)
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
}
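The assertions above pin down the semantics of servers_with_unique_shares:
it returns the set of distinct servers that hold, or are about to receive,
at least one share. A toy model consistent with those assertions, not the
actual implementation in upload.py:

    def servers_with_unique_shares(existing_shares, used_peers=None):
        # existing_shares maps shnum -> peerid; used_peers is a set of
        # tracker-like objects carrying a peerid attribute
        servers = set(existing_shares.values())
        if used_peers:
            servers.update(tracker.peerid for tracker in used_peers)
        return servers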
[Alter tests to use the new form of set_shareholders
Kevan Carstensen <kevan@isnotajoke.com>**20091104033602
 Ignore-this: 3deac11fc831618d11441317463ef830
] {
hunk ./src/allmydata/test/test_encode.py 301
                      (NUM_SEGMENTS-1)*segsize, len(data), NUM_SEGMENTS*segsize)
 
             shareholders = {}
+            servermap = {}
             for shnum in range(NUM_SHARES):
                 peer = FakeBucketReaderWriterProxy()
                 shareholders[shnum] = peer
hunk ./src/allmydata/test/test_encode.py 305
+                servermap[shnum] = str(shnum)
                 all_shareholders.append(peer)
hunk ./src/allmydata/test/test_encode.py 307
-            e.set_shareholders(shareholders)
+            e.set_shareholders(shareholders, servermap)
             return e.start()
         d.addCallback(_ready)
 
merger 0.0 (
hunk ./src/allmydata/test/test_encode.py 462
-            all_peers = []
hunk ./src/allmydata/test/test_encode.py 463
+            servermap = {}
)
hunk ./src/allmydata/test/test_encode.py 467
                 mode = bucket_modes.get(shnum, "good")
                 peer = FakeBucketReaderWriterProxy(mode)
                 shareholders[shnum] = peer
-            e.set_shareholders(shareholders)
+                servermap[shnum] = str(shnum)
+            e.set_shareholders(shareholders, servermap)
             return e.start()
         d.addCallback(_ready)
         def _sent(res):
hunk ./src/allmydata/test/test_upload.py 711
                 for share in server.buckets.keys():
                     server.buckets[share].abort()
             buckets = {}
+            servermap = already_peers.copy()
             for peer in used_peers:
                 buckets.update(peer.buckets)
hunk ./src/allmydata/test/test_upload.py 714
-            encoder.set_shareholders(buckets)
+                for bucket in peer.buckets:
+                    servermap[bucket] = peer.peerid
+            encoder.set_shareholders(buckets, servermap)
             d = encoder.start()
             return d
         d.addCallback(_have_shareholders)
hunk ./src/allmydata/test/test_upload.py 933
         _set_basedir()
         d = self._setup_and_upload();
         # Add 5 servers, with one share each from the original
-        # Add a readonly server
         def _do_server_setup(ign):
             self._add_server_with_share(1, 1, True)
             self._add_server_with_share(2)
}
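In its new form, set_shareholders takes a servermap alongside the bucket
proxies. A sketch of the call shape these tests exercise (make_proxy and
peerid_for are hypothetical stand-ins):

    shareholders = {}  # shnum -> bucket writer proxy
    servermap = {}     # shnum -> peerid that will hold that share
    for shnum in range(num_shares):
        shareholders[shnum] = make_proxy(shnum)
        servermap[shnum] = peerid_for(shnum)
    encoder.set_shareholders(shareholders, servermap)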
[Minor tweak to an existing test -- make the first server read-write, instead of read-only
Kevan Carstensen <kevan@isnotajoke.com>**20091104034232
 Ignore-this: a951a46c93f7f58dd44d93d8623b2aee
] hunk ./src/allmydata/test/test_upload.py 934
         d = self._setup_and_upload();
         # Add 5 servers, with one share each from the original
         def _do_server_setup(ign):
-            self._add_server_with_share(1, 1, True)
+            self._add_server_with_share(1, 1)
             self._add_server_with_share(2)
             self._add_server_with_share(3)
             self._add_server_with_share(4)
[Add a test for upload.shares_by_server
Kevan Carstensen <kevan@isnotajoke.com>**20091104111324
 Ignore-this: f9802e82d6982a93e00f92e0b276f018
] hunk ./src/allmydata/test/test_upload.py 1013
             self.failUnlessIn(server, unique_servers)
 
 
+    def test_shares_by_server(self):
+        test = {
+                    1 : "server1",
+                    2 : "server2",
+                    3 : "server3",
+                    4 : "server4"
+               }
+        shares_by_server = upload.shares_by_server(test)
+        self.failUnlessEqual(set([1]), shares_by_server["server1"])
+        self.failUnlessEqual(set([2]), shares_by_server["server2"])
+        self.failUnlessEqual(set([3]), shares_by_server["server3"])
+        self.failUnlessEqual(set([4]), shares_by_server["server4"])
+        test1 = {
+                    1 : "server1",
+                    2 : "server1",
+                    3 : "server1",
+                    4 : "server2",
+                    5 : "server2"
+                }
+        shares_by_server = upload.shares_by_server(test1)
+        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
+        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
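shares_by_server inverts a shnum -> peerid mapping into peerid -> set of
shnums, which is exactly what the assertions above check. A toy model
consistent with the test; the real upload.shares_by_server may differ in
detail:

    def shares_by_server(servermap):
        result = {}
        for shnum, peerid in servermap.items():
            result.setdefault(peerid, set()).add(shnum)
        return result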
[Add more tests for comment:53 in ticket #778
Kevan Carstensen <kevan@isnotajoke.com>**20091104112849
 Ignore-this: 3bb2edd299a944cc9586e14d5d83ec8c
] {
hunk ./src/allmydata/test/test_upload.py 722
         d.addCallback(_have_shareholders)
         return d
 
-    def _add_server_with_share(self, server_number, share_number=None,
-                               readonly=False):
+    def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         assert self.shares, "I tried to find shares at self.shares, but failed"
         ss = self.g.make_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 727
         self.g.add_server(server_number, ss)
+
+    def _add_server_with_share(self, server_number, share_number=None,
+                               readonly=False):
+        self._add_server(server_number, readonly)
         if share_number:
hunk ./src/allmydata/test/test_upload.py 732
-            # Copy share i from the directory associated with the first
-            # storage server to the directory associated with this one.
-            old_share_location = self.shares[share_number][2]
-            new_share_location = os.path.join(ss.storedir, "shares")
-            si = uri.from_string(self.uri).get_storage_index()
-            new_share_location = os.path.join(new_share_location,
-                                              storage_index_to_dir(si))
-            if not os.path.exists(new_share_location):
-                os.makedirs(new_share_location)
-            new_share_location = os.path.join(new_share_location,
-                                              str(share_number))
-            shutil.copy(old_share_location, new_share_location)
-            shares = self.find_shares(self.uri)
-            # Make sure that the storage server has the share.
-            self.failUnless((share_number, ss.my_nodeid, new_share_location)
-                            in shares)
+            self._copy_share_to_server(share_number, server_number)
+
+    def _copy_share_to_server(self, share_number, server_number):
+        ss = self.g.servers_by_number[server_number]
+        # Copy share i from the directory associated with the first
+        # storage server to the directory associated with this one.
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        assert self.shares, "I tried to find shares at self.shares, but failed"
+        old_share_location = self.shares[share_number][2]
+        new_share_location = os.path.join(ss.storedir, "shares")
+        si = uri.from_string(self.uri).get_storage_index()
+        new_share_location = os.path.join(new_share_location,
+                                          storage_index_to_dir(si))
+        if not os.path.exists(new_share_location):
+            os.makedirs(new_share_location)
+        new_share_location = os.path.join(new_share_location,
+                                          str(share_number))
+        shutil.copy(old_share_location, new_share_location)
+        shares = self.find_shares(self.uri)
+        # Make sure that the storage server has the share.
+        self.failUnless((share_number, ss.my_nodeid, new_share_location)
+                        in shares)
+
 
     def _setup_and_upload(self):
         """
hunk ./src/allmydata/test/test_upload.py 917
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=3, share_number=1))
         # So, we now have the following layout:
-        # server 0: shares 1 - 10
+        # server 0: shares 0 - 9
         # server 1: share 0
         # server 2: share 1
         # server 3: share 2
hunk ./src/allmydata/test/test_upload.py 934
         # instance already has a happy parameter set.
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+
+
+        # This scenario is basically comment:53, but with the order reversed;
+        # this means that the Tahoe2PeerSelector sees
+        # server 0: shares 1-10
+        # server 1: share 1
+        # server 2: share 2
+        # server 3: share 3
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2))
+        # Copy all of the other shares to server number 2
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
+        d.addCallback(_copy_shares)
+        # Remove the first server, and add a placeholder with share 0
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=0, share_number=0))
+        # Now try uploading.
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        # Try the same thing, but with empty servers after the first one
+        # We want to make sure that Tahoe2PeerSelector will redistribute
+        # shares as necessary, not simply discover an existing layout.
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=3))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=1))
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=0))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        # Try the following layout
+        # server 0: shares 1-10
+        # server 1: share 1, read-only
+        # server 2: share 2, read-only
+        # server 3: share 3, read-only
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2,
+                                        readonly=True))
+        # Copy all of the other shares to server number 2
+        d.addCallback(_copy_shares)
+        # Remove server 0, and add another in its place
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=0, share_number=0,
+                                        readonly=True))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
         return d
 
 
}
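With the split above, test layouts are composed from three primitives. A
sketch (server and share numbers are illustrative):

    self._add_server(server_number=2)      # empty, read-write server
    self._add_server_with_share(1, 2)      # server 1 holding share 2
    self._copy_share_to_server(5, 2)       # copy share 5 onto server 2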
[Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers
Kevan Carstensen <kevan@isnotajoke.com>**20091109003735
 Ignore-this: 12f9b4cff5752fca7ed32a6ebcff6446
] hunk ./src/allmydata/test/test_upload.py 1125
         self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
 
 
+    def test_existing_share_detection(self):
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        # Our final setup should look like this:
+        # server 1: shares 1 - 10, read-only
+        # server 2: empty
+        # server 3: empty
+        # server 4: empty
+        # The purpose of this test is to make sure that the peer selector
+        # knows about the shares on server 1, even though it is read-only.
+        # It used to simply filter these out, which would cause the test
+        # to fail when servers_of_happiness = 4.
+        d.addCallback(lambda ign:
+            self._add_server_with_share(1, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(3))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(4))
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 1)
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _prepare_client(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_prepare_client)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
[Re-work 'test_upload.py' to be more readable; add more tests for #778
Kevan Carstensen <kevan@isnotajoke.com>**20091116192334
 Ignore-this: 7e8565f92fe51dece5ae28daf442d659
] {
hunk ./src/allmydata/test/test_upload.py 722
         d.addCallback(_have_shareholders)
         return d
 
+
     def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         assert self.shares, "I tried to find shares at self.shares, but failed"
hunk ./src/allmydata/test/test_upload.py 729
         ss = self.g.make_server(server_number, readonly)
         self.g.add_server(server_number, ss)
 
+
     def _add_server_with_share(self, server_number, share_number=None,
                                readonly=False):
         self._add_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 733
-        if share_number:
+        if share_number is not None:
             self._copy_share_to_server(share_number, server_number)
 
hunk ./src/allmydata/test/test_upload.py 736
+
     def _copy_share_to_server(self, share_number, server_number):
         ss = self.g.servers_by_number[server_number]
         # Copy share i from the directory associated with the first
hunk ./src/allmydata/test/test_upload.py 752
             os.makedirs(new_share_location)
         new_share_location = os.path.join(new_share_location,
                                           str(share_number))
-        shutil.copy(old_share_location, new_share_location)
+        if old_share_location != new_share_location:
+            shutil.copy(old_share_location, new_share_location)
         shares = self.find_shares(self.uri)
         # Make sure that the storage server has the share.
         self.failUnless((share_number, ss.my_nodeid, new_share_location)
hunk ./src/allmydata/test/test_upload.py 782
         d.addCallback(_store_shares)
         return d
 
+
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
hunk ./src/allmydata/test/test_upload.py 802
         d.addCallback(_check)
         return d
 
+
     def _setUp(self, ns):
         # Used by test_happy_semantics and test_prexisting_share_behavior
         # to set up the grid.
hunk ./src/allmydata/test/test_upload.py 811
         self.u.running = True
         self.u.parent = self.node
 
+
     def test_happy_semantics(self):
         self._setUp(2)
         DATA = upload.Data("kittens" * 10000, convergence="")
hunk ./src/allmydata/test/test_upload.py 844
             self.u.upload(DATA))
         return d
 
-    def test_problem_layouts(self):
-        self.basedir = self.mktemp()
+
+    def test_problem_layout_comment_52(self):
+        def _basedir():
+            self.basedir = self.mktemp()
+        _basedir()
         # This scenario is at
         # http://allmydata.org/trac/tahoe/ticket/778#comment:52
         #
hunk ./src/allmydata/test/test_upload.py 890
         # Uploading data should fail
         d.addCallback(lambda client:
             self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
-                            "shares could only be placed on 1 servers "
+                            "shares could only be placed on 2 servers "
                             "(4 were requested)",
                             client.upload, upload.Data("data" * 10000,
                                                        convergence="")))
 
+        # Do comment:52, but like this:
+        # server 2: empty
+        # server 3: share 0, read-only
+        # server 1: share 0, read-only
+        # server 0: shares 0-9
+        d.addCallback(lambda ign:
+            _basedir())
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=0,
+                                        readonly=True))
+        def _prepare2():
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
+            return client
+        d.addCallback(lambda ign:
+            _prepare2())
+        d.addCallback(lambda client:
+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 2 servers "
+                            "(3 were requested)",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+        return d
+
 
hunk ./src/allmydata/test/test_upload.py 927
+    def test_problem_layout_comment_53(self):
         # This scenario is at
         # http://allmydata.org/trac/tahoe/ticket/778#comment:53
         #
hunk ./src/allmydata/test/test_upload.py 934
         # Set up the grid to have one server
         def _change_basedir(ign):
             self.basedir = self.mktemp()
-        d.addCallback(_change_basedir)
-        d.addCallback(lambda ign:
-            self._setup_and_upload())
-        # We want to have a layout like this:
-        # server 1: share 1
-        # server 2: share 2
-        # server 3: share 3
-        # server 4: shares 1 - 10
-        # (this is an expansion of Zooko's example because it is easier
-        #  to code, but it will fail in the same way)
-        # To start, we'll create a server with shares 1-10 of the data
-        # we're about to upload.
+        _change_basedir(None)
+        d = self._setup_and_upload()
+        # We start by uploading all of the shares to one server (which has
+        # already been done above).
         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
         # one share from our initial upload to each of these.
         # The counterintuitive ordering of the share numbers is to deal with
hunk ./src/allmydata/test/test_upload.py 952
             self._add_server_with_share(server_number=3, share_number=1))
         # So, we now have the following layout:
         # server 0: shares 0 - 9
-        # server 1: share 0
-        # server 2: share 1
-        # server 3: share 2
+        # server 1: share 2
+        # server 2: share 0
+        # server 3: share 1
         # We want to change the 'happy' parameter in the client to 4.
hunk ./src/allmydata/test/test_upload.py 956
-        # We then want to feed the upload process a list of peers that
-        # server 0 is at the front of, so we trigger Zooko's scenario.
+        # The Tahoe2PeerSelector will see the peers permuted as:
+        # 2, 3, 1, 0
         # Ideally, a reupload of our original data should work.
-        def _reset_encoding_parameters(ign):
+        def _reset_encoding_parameters(ign, happy=4):
             client = self.g.clients[0]
-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
             return client
         d.addCallback(_reset_encoding_parameters)
hunk ./src/allmydata/test/test_upload.py 964
-        # We need this to get around the fact that the old Data
-        # instance already has a happy parameter set.
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
 
 
         # This scenario is basically comment:53, but with the order reversed;
         # this means that the Tahoe2PeerSelector sees
-        # server 0: shares 1-10
-        # server 1: share 1
-        # server 2: share 2
-        # server 3: share 3
+        # server 2: shares 1-10
+        # server 3: share 1
+        # server 1: share 2
+        # server 4: share 3
         d.addCallback(_change_basedir)
         d.addCallback(lambda ign:
             self._setup_and_upload())
hunk ./src/allmydata/test/test_upload.py 992
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server_with_share(server_number=0, share_number=0))
+            self._add_server_with_share(server_number=4, share_number=0))
         # Now try uploading.
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
hunk ./src/allmydata/test/test_upload.py 1013
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server(server_number=0))
+            self._add_server(server_number=4))
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1017
+        return d
+
+
+    def test_happiness_with_some_readonly_peers(self):
         # Try the following layout
hunk ./src/allmydata/test/test_upload.py 1022
-        # server 0: shares 1-10
-        # server 1: share 1, read-only
-        # server 2: share 2, read-only
-        # server 3: share 3, read-only
-        d.addCallback(_change_basedir)
-        d.addCallback(lambda ign:
-            self._setup_and_upload())
+        # server 2: shares 0-9
+        # server 4: share 0, read-only
+        # server 3: share 1, read-only
+        # server 1: share 2, read-only
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=2, share_number=0))
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1037
             self._add_server_with_share(server_number=1, share_number=2,
                                         readonly=True))
         # Copy all of the other shares to server number 2
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
         # Remove server 0, and add another in its place
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1045
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server_with_share(server_number=0, share_number=0,
+            self._add_server_with_share(server_number=4, share_number=0,
                                         readonly=True))
hunk ./src/allmydata/test/test_upload.py 1047
+        def _reset_encoding_parameters(ign, happy=4):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
+    def test_happiness_with_all_readonly_peers(self):
+        # server 3: share 1, read-only
+        # server 1: share 2, read-only
+        # server 2: shares 0-9, read-only
+        # server 4: share 0, read-only
+        # The idea with this test is to make sure that the survey of
+        # read-only peers doesn't undercount servers of happiness
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=4, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0,
+                                        readonly=True))
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _reset_encoding_parameters(ign, happy=4):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
+            return client
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1099
             self.basedir = self.mktemp()
         _set_basedir()
         d = self._setup_and_upload();
-        # Add 5 servers, with one share each from the original
+        # Add 5 servers
         def _do_server_setup(ign):
hunk ./src/allmydata/test/test_upload.py 1101
-            self._add_server_with_share(1, 1)
+            self._add_server_with_share(1)
             self._add_server_with_share(2)
             self._add_server_with_share(3)
             self._add_server_with_share(4)
hunk ./src/allmydata/test/test_upload.py 1126
         d.addCallback(_remove_server)
         d.addCallback(lambda ign:
             self.shouldFail(NotEnoughSharesError,
-                            "test_dropped_server_in_encoder", "",
+                            "test_dropped_servers_in_encoder",
+                            "lost too many servers during upload "
+                            "(still have 3, want 4)",
+                            self._do_upload_with_broken_servers, 2))
+        # Now do the same thing over again, but make some of the servers
+        # readonly, break some of the ones that aren't, and make sure that
+        # happiness accounting is preserved.
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        def _do_server_setup_2(ign):
+            self._add_server_with_share(1)
+            self._add_server_with_share(2)
+            self._add_server_with_share(3)
+            self._add_server_with_share(4, 7, readonly=True)
+            self._add_server_with_share(5, 8, readonly=True)
+        d.addCallback(_do_server_setup_2)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self._do_upload_with_broken_servers(1))
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(_do_server_setup_2)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self.shouldFail(NotEnoughSharesError,
+                            "test_dropped_servers_in_encoder",
+                            "lost too many servers during upload "
+                            "(still have 3, want 4)",
                             self._do_upload_with_broken_servers, 2))
         return d
 
hunk ./src/allmydata/test/test_upload.py 1179
         self.failUnlessEqual(3, len(unique_servers))
         for server in ["server1", "server2", "server3"]:
             self.failUnlessIn(server, unique_servers)
-        # servers_with_unique_shares expects a set of PeerTracker
-        # instances as a used_peers argument, but only uses the peerid
-        # instance variable to assess uniqueness. So we feed it some fake
-        # PeerTrackers whose only important characteristic is that they
-        # have peerid set to something.
+        # servers_with_unique_shares expects to receive some object with
+        # a peerid attribute. So we make a FakePeerTracker whose only
+        # job is to have a peerid attribute.
         class FakePeerTracker:
             pass
         trackers = []
hunk ./src/allmydata/test/test_upload.py 1185
-        for server in ["server5", "server6", "server7", "server8"]:
+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
             t = FakePeerTracker()
             t.peerid = server
hunk ./src/allmydata/test/test_upload.py 1188
+            t.buckets = [i]
             trackers.append(t)
         # Recall that there are 3 unique servers in test1. Since none of
         # those overlap with the ones in trackers, we should get 7 back
hunk ./src/allmydata/test/test_upload.py 1201
         # Now add an overlapping server to trackers.
         t = FakePeerTracker()
         t.peerid = "server1"
+        t.buckets = [1]
         trackers.append(t)
         unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
         self.failUnlessEqual(7, len(unique_servers))
hunk ./src/allmydata/test/test_upload.py 1207
         for server in expected_servers:
             self.failUnlessIn(server, unique_servers)
+        test = {}
+        unique_servers = upload.servers_with_unique_shares(test)
+        self.failUnlessEqual(0, len(unique_servers))
1196 
1197 
1198     def test_shares_by_server(self):
1199hunk ./src/allmydata/test/test_upload.py 1213
1200-        test = {
1201-                    1 : "server1",
1202-                    2 : "server2",
1203-                    3 : "server3",
1204-                    4 : "server4"
1205-               }
1206+        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
1207         shares_by_server = upload.shares_by_server(test)
1208         self.failUnlessEqual(set([1]), shares_by_server["server1"])
1209         self.failUnlessEqual(set([2]), shares_by_server["server2"])
1210hunk ./src/allmydata/test/test_upload.py 1267
1211         return d
1212 
1213 
1214+    def test_should_add_server(self):
1215+        shares = dict([(i, "server%d" % i) for i in xrange(10)])
1216+        self.failIf(upload.should_add_server(shares, "server1", 4))
1217+        shares[4] = "server1"
1218+        self.failUnless(upload.should_add_server(shares, "server4", 4))
1219+        shares = {}
1220+        self.failUnless(upload.should_add_server(shares, "server1", 1))
1221+
1222+
1223     def _set_up_nodes_extra_config(self, clientdir):
1224         cfgfn = os.path.join(clientdir, "tahoe.cfg")
1225         oldcfg = open(cfgfn, "r").read()
1226}
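The test_shares_by_server test above pins down an inversion from a
shnum -> peerid map to a peerid -> set(shnums) map. A minimal sketch of that
inversion, written against the plain-dict inputs the test uses (this is a
sketch, not upload.shares_by_server itself):

    def invert_sharemap(sharemap):
        # sharemap: dict mapping share number -> peerid
        byserver = {}
        for shnum, peerid in sharemap.items():
            byserver.setdefault(peerid, set()).add(shnum)
        return byserver

    # mirrors the first assertions in test_shares_by_server
    assert invert_sharemap({1: "server1", 2: "server2"}) == \
           {"server1": set([1]), "server2": set([2])}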
1227[Add tests for the behavior described in #834.
1228Kevan Carstensen <kevan@isnotajoke.com>**20091123012008
1229 Ignore-this: d8e0aa0f3f7965ce9b5cea843c6d6f9f
1230] {
1231hunk ./src/allmydata/test/test_encode.py 12
1232 from allmydata.util.assertutil import _assert
1233 from allmydata.util.consumer import MemoryConsumer
1234 from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \
1235-     NotEnoughSharesError, IStorageBroker
1236+     NotEnoughSharesError, IStorageBroker, UploadHappinessError
1237 from allmydata.monitor import Monitor
1238 import common_util as testutil
1239 
1240hunk ./src/allmydata/test/test_encode.py 794
1241         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
1242         def _done(res):
1243             self.failUnless(isinstance(res, Failure))
1244-            self.failUnless(res.check(NotEnoughSharesError), res)
1245+            self.failUnless(res.check(UploadHappinessError), res)
1246         d.addBoth(_done)
1247         return d
1248 
1249hunk ./src/allmydata/test/test_encode.py 805
1250         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
1251         def _done(res):
1252             self.failUnless(isinstance(res, Failure))
1253-            self.failUnless(res.check(NotEnoughSharesError))
1254+            self.failUnless(res.check(UploadHappinessError))
1255         d.addBoth(_done)
1256         return d
1257hunk ./src/allmydata/test/test_upload.py 13
1258 import allmydata # for __full_version__
1259 from allmydata import uri, monitor, client
1260 from allmydata.immutable import upload, encode
1261-from allmydata.interfaces import FileTooLargeError, NoSharesError, \
1262-     NotEnoughSharesError
1263+from allmydata.interfaces import FileTooLargeError, UploadHappinessError
1264 from allmydata.util.assertutil import precondition
1265 from allmydata.util.deferredutil import DeferredListShouldSucceed
1266 from no_network import GridTestMixin
1267hunk ./src/allmydata/test/test_upload.py 402
1268 
1269     def test_first_error_all(self):
1270         self.make_node("first-fail")
1271-        d = self.shouldFail(NoSharesError, "first_error_all",
1272+        d = self.shouldFail(UploadHappinessError, "first_error_all",
1273                             "peer selection failed",
1274                             upload_data, self.u, DATA)
1275         def _check((f,)):
1276hunk ./src/allmydata/test/test_upload.py 434
1277 
1278     def test_second_error_all(self):
1279         self.make_node("second-fail")
1280-        d = self.shouldFail(NotEnoughSharesError, "second_error_all",
1281+        d = self.shouldFail(UploadHappinessError, "second_error_all",
1282                             "peer selection failed",
1283                             upload_data, self.u, DATA)
1284         def _check((f,)):
1285hunk ./src/allmydata/test/test_upload.py 452
1286         self.u.parent = self.node
1287 
1288     def _should_fail(self, f):
1289-        self.failUnless(isinstance(f, Failure) and f.check(NoSharesError), f)
1290+        self.failUnless(isinstance(f, Failure) and f.check(UploadHappinessError), f)
1291 
1292     def test_data_large(self):
1293         data = DATA
1294hunk ./src/allmydata/test/test_upload.py 817
1295         # These parameters are unsatisfiable with the client that we've made
1296         # -- we'll use them to test that the semnatics work correctly.
1297         self.set_encoding_parameters(k=3, happy=5, n=10)
1298-        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1299+        d = self.shouldFail(UploadHappinessError, "test_happy_semantics",
1300                             "shares could only be placed on 2 servers "
1301                             "(5 were requested)",
1302                             self.u.upload, DATA)
1303hunk ./src/allmydata/test/test_upload.py 888
1304             _prepare())
1305         # Uploading data should fail
1306         d.addCallback(lambda client:
1307-            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1308+            self.shouldFail(UploadHappinessError, "test_happy_semantics",
1309                             "shares could only be placed on 2 servers "
1310                             "(4 were requested)",
1311                             client.upload, upload.Data("data" * 10000,
1312hunk ./src/allmydata/test/test_upload.py 918
1313         d.addCallback(lambda ign:
1314             _prepare2())
1315         d.addCallback(lambda client:
1316-            self.shouldFail(NotEnoughSharesError, "test_happy_sematics",
1317+            self.shouldFail(UploadHappinessError, "test_happy_sematics",
1318                             "shares could only be placed on 2 servers "
1319                             "(3 were requested)",
1320                             client.upload, upload.Data("data" * 10000,
1321hunk ./src/allmydata/test/test_upload.py 1124
1322         d.addCallback(_do_server_setup)
1323         d.addCallback(_remove_server)
1324         d.addCallback(lambda ign:
1325-            self.shouldFail(NotEnoughSharesError,
1326+            self.shouldFail(UploadHappinessError,
1327                             "test_dropped_servers_in_encoder",
1328                             "lost too many servers during upload "
1329                             "(still have 3, want 4)",
1330hunk ./src/allmydata/test/test_upload.py 1151
1331         d.addCallback(_do_server_setup_2)
1332         d.addCallback(_remove_server)
1333         d.addCallback(lambda ign:
1334-            self.shouldFail(NotEnoughSharesError,
1335+            self.shouldFail(UploadHappinessError,
1336                             "test_dropped_servers_in_encoder",
1337                             "lost too many servers during upload "
1338                             "(still have 3, want 4)",
1339hunk ./src/allmydata/test/test_upload.py 1275
1340         self.failUnless(upload.should_add_server(shares, "server1", 1))
1341 
1342 
1343+    def test_exception_messages_during_peer_selection(self):
1344+        # server 1: readonly, no shares
1345+        # server 2: readonly, no shares
1346+        # server 3: readonly, no shares
1347+        # server 4: readonly, no shares
1348+        # server 5: readonly, no shares
1349+        # This will fail, but we want to make sure that the log messages
1350+        # are informative about why it has failed.
1351+        self.basedir = self.mktemp()
1352+        d = self._setup_and_upload()
1353+        d.addCallback(lambda ign:
1354+            self._add_server_with_share(server_number=1, readonly=True))
1355+        d.addCallback(lambda ign:
1356+            self._add_server_with_share(server_number=2, readonly=True))
1357+        d.addCallback(lambda ign:
1358+            self._add_server_with_share(server_number=3, readonly=True))
1359+        d.addCallback(lambda ign:
1360+            self._add_server_with_share(server_number=4, readonly=True))
1361+        d.addCallback(lambda ign:
1362+            self._add_server_with_share(server_number=5, readonly=True))
1363+        d.addCallback(lambda ign:
1364+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1365+        def _reset_encoding_parameters(ign):
1366+            client = self.g.clients[0]
1367+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1368+            return client
1369+        d.addCallback(_reset_encoding_parameters)
1370+        d.addCallback(lambda client:
1371+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
1372+                            "peer selection failed for <Tahoe2PeerSelector "
1373+                            "for upload dglev>: placed 0 shares out of 10 "
1374+                            "total (10 homeless), want to place on 4 servers,"
1375+                            " sent 5 queries to 5 peers, 0 queries placed "
1376+                            "some shares, 5 placed none "
1377+                            "(of which 5 placed none due to the server being "
1378+                            "full and 0 placed none due to an error)",
1379+                            client.upload,
1380+                            upload.Data("data" * 10000, convergence="")))
1381+
1382+
1383+        # server 1: readonly, no shares
1384+        # server 2: broken, no shares
1385+        # server 3: readonly, no shares
1386+        # server 4: readonly, no shares
1387+        # server 5: readonly, no shares
1388+        def _reset(ign):
1389+            self.basedir = self.mktemp()
1390+        d.addCallback(_reset)
1391+        d.addCallback(lambda ign:
1392+            self._setup_and_upload())
1393+        d.addCallback(lambda ign:
1394+            self._add_server_with_share(server_number=1, readonly=True))
1395+        d.addCallback(lambda ign:
1396+            self._add_server_with_share(server_number=2))
1397+        def _break_server_2(ign):
1398+            server = self.g.servers_by_number[2].my_nodeid
1399+            # We have to break the server in servers_by_id,
1400+            # because the one in servers_by_number isn't wrapped,
1401+            # and doesn't look at its broken attribute
1402+            self.g.servers_by_id[server].broken = True
1403+        d.addCallback(_break_server_2)
1404+        d.addCallback(lambda ign:
1405+            self._add_server_with_share(server_number=3, readonly=True))
1406+        d.addCallback(lambda ign:
1407+            self._add_server_with_share(server_number=4, readonly=True))
1408+        d.addCallback(lambda ign:
1409+            self._add_server_with_share(server_number=5, readonly=True))
1410+        d.addCallback(lambda ign:
1411+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1412+        def _reset_encoding_parameters(ign):
1413+            client = self.g.clients[0]
1414+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1415+            return client
1416+        d.addCallback(_reset_encoding_parameters)
1417+        d.addCallback(lambda client:
1418+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
1419+                            "peer selection failed for <Tahoe2PeerSelector "
1420+                            "for upload dglev>: placed 0 shares out of 10 "
1421+                            "total (10 homeless), want to place on 4 servers,"
1422+                            " sent 5 queries to 5 peers, 0 queries placed "
1423+                            "some shares, 5 placed none "
1424+                            "(of which 4 placed none due to the server being "
1425+                            "full and 1 placed none due to an error)",
1426+                            client.upload,
1427+                            upload.Data("data" * 10000, convergence="")))
1428+        return d
1429+
1430+
1431     def _set_up_nodes_extra_config(self, clientdir):
1432         cfgfn = os.path.join(clientdir, "tahoe.cfg")
1433         oldcfg = open(cfgfn, "r").read()
1434}
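The long expected strings in test_exception_messages_during_peer_selection
all follow a single diagnostic template. The sketch below reconstructs that
template from the expected strings themselves (it is inferred from the test,
not quoted from upload.py); the tuple holds the counts from the first
expected message:

    msg = ("placed %d shares out of %d total (%d homeless), "
           "want to place on %d servers, sent %d queries to %d peers, "
           "%d queries placed some shares, %d placed none "
           "(of which %d placed none due to the server being full "
           "and %d placed none due to an error)"
           % (0, 10, 10, 4, 5, 5, 0, 5, 5, 0))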
1435[Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.
1436Kevan Carstensen <kevan@isnotajoke.com>**20091205043453
1437 Ignore-this: 83f4bc50c697d21b5f4e2a4cd91862ca
1438] {
1439replace ./src/allmydata/test/test_encode.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
1440replace ./src/allmydata/test/test_upload.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
1441}
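The darcs "replace" primitive used above rewrites a name everywhere it
appears in the file as a whole token, where a token is a maximal run of the
given character class. Approximately the following, sketched in Python (the
authoritative semantics live in darcs, not here):

    import re

    def token_replace(text, old, new, tokenchars="A-Za-z_0-9"):
        # replace `old` only where it is not flanked by token characters
        pattern = r"(?<![%s])%s(?![%s])" % (tokenchars, re.escape(old),
                                            tokenchars)
        return re.sub(pattern, new, text)

    # e.g. token_replace(src, "UploadHappinessError",
    #                    "UploadUnhappinessError")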
1442[Alter various unit tests to work with the new happy behavior
1443Kevan Carstensen <kevan@isnotajoke.com>**20100107181325
1444 Ignore-this: 132032bbf865e63a079f869b663be34a
1445] {
1446hunk ./src/allmydata/test/common.py 941
1447             # We need multiple segments to test crypttext hash trees that are
1448             # non-trivial (i.e. they have more than just one hash in them).
1449             cl0.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12
1450+            # Tests that need to test servers of happiness using this should
1451+            # set their own value for happy -- the default (7) breaks stuff.
1452+            cl0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1453             d2 = cl0.upload(immutable.upload.Data(TEST_DATA, convergence=""))
1454             def _after_upload(u):
1455                 filecap = u.uri
1456hunk ./src/allmydata/test/test_checker.py 283
1457         self.basedir = "checker/AddLease/875"
1458         self.set_up_grid(num_servers=1)
1459         c0 = self.g.clients[0]
1460+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1461         self.uris = {}
1462         DATA = "data" * 100
1463         d = c0.upload(Data(DATA, convergence=""))
1464hunk ./src/allmydata/test/test_system.py 93
1465         d = self.set_up_nodes()
1466         def _check_connections(res):
1467             for c in self.clients:
1468+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
1469                 all_peerids = c.get_storage_broker().get_all_serverids()
1470                 self.failUnlessEqual(len(all_peerids), self.numclients)
1471                 sb = c.storage_broker
1472hunk ./src/allmydata/test/test_system.py 205
1473                                                       add_to_sparent=True))
1474         def _added(extra_node):
1475             self.extra_node = extra_node
1476+            self.extra_node.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
1477         d.addCallback(_added)
1478 
1479         HELPER_DATA = "Data that needs help to upload" * 1000
1480hunk ./src/allmydata/test/test_system.py 705
1481         self.basedir = "system/SystemTest/test_filesystem"
1482         self.data = LARGE_DATA
1483         d = self.set_up_nodes(use_stats_gatherer=True)
1484+        def _new_happy_semantics(ign):
1485+            for c in self.clients:
1486+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1487+        d.addCallback(_new_happy_semantics)
1488         d.addCallback(self._test_introweb)
1489         d.addCallback(self.log, "starting publish")
1490         d.addCallback(self._do_publish1)
1491hunk ./src/allmydata/test/test_system.py 1129
1492         d.addCallback(self.failUnlessEqual, "new.txt contents")
1493         # and again with something large enough to use multiple segments,
1494         # and hopefully trigger pauseProducing too
1495+        def _new_happy_semantics(ign):
1496+            for c in self.clients:
1497+                # these parameters get reset somewhere, so set them again here.
1498+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1499+        d.addCallback(_new_happy_semantics)
1500         d.addCallback(lambda res: self.PUT(public + "/subdir3/big.txt",
1501                                            "big" * 500000)) # 1.5MB
1502         d.addCallback(lambda res: self.GET(public + "/subdir3/big.txt"))
1503hunk ./src/allmydata/test/test_upload.py 178
1504 
1505 class FakeClient:
1506     DEFAULT_ENCODING_PARAMETERS = {"k":25,
1507-                                   "happy": 75,
1508+                                   "happy": 25,
1509                                    "n": 100,
1510                                    "max_segment_size": 1*MiB,
1511                                    }
1512hunk ./src/allmydata/test/test_upload.py 316
1513         data = self.get_data(SIZE_LARGE)
1514         segsize = int(SIZE_LARGE / 2.5)
1515         # we want 3 segments, since that's not a power of two
1516-        self.set_encoding_parameters(25, 75, 100, segsize)
1517+        self.set_encoding_parameters(25, 25, 100, segsize)
1518         d = upload_data(self.u, data)
1519         d.addCallback(extract_uri)
1520         d.addCallback(self._check_large, SIZE_LARGE)
1521hunk ./src/allmydata/test/test_upload.py 395
1522     def test_first_error(self):
1523         mode = dict([(0,"good")] + [(i,"first-fail") for i in range(1,10)])
1524         self.make_node(mode)
1525+        self.set_encoding_parameters(k=25, happy=1, n=50)
1526         d = upload_data(self.u, DATA)
1527         d.addCallback(extract_uri)
1528         d.addCallback(self._check_large, SIZE_LARGE)
1529hunk ./src/allmydata/test/test_upload.py 513
1530 
1531         self.make_client()
1532         data = self.get_data(SIZE_LARGE)
1533-        self.set_encoding_parameters(50, 75, 100)
1534+        # if there are 50 peers, then happy needs to be <= 50
1535+        self.set_encoding_parameters(50, 50, 100)
1536         d = upload_data(self.u, data)
1537         d.addCallback(extract_uri)
1538         d.addCallback(self._check_large, SIZE_LARGE)
1539hunk ./src/allmydata/test/test_upload.py 560
1540 
1541         self.make_client()
1542         data = self.get_data(SIZE_LARGE)
1543-        self.set_encoding_parameters(100, 150, 200)
1544+        # if there are 50 peers, then happy should be no more than 50 if
1545+        # we want this to work.
1546+        self.set_encoding_parameters(100, 50, 200)
1547         d = upload_data(self.u, data)
1548         d.addCallback(extract_uri)
1549         d.addCallback(self._check_large, SIZE_LARGE)
1550hunk ./src/allmydata/test/test_upload.py 580
1551 
1552         self.make_client(3)
1553         data = self.get_data(SIZE_LARGE)
1554-        self.set_encoding_parameters(3, 5, 10)
1555+        self.set_encoding_parameters(3, 3, 10)
1556         d = upload_data(self.u, data)
1557         d.addCallback(extract_uri)
1558         d.addCallback(self._check_large, SIZE_LARGE)
1559hunk ./src/allmydata/test/test_web.py 4073
1560         self.basedir = "web/Grid/exceptions"
1561         self.set_up_grid(num_clients=1, num_servers=2)
1562         c0 = self.g.clients[0]
1563+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 2
1564         self.fileurls = {}
1565         DATA = "data" * 100
1566         d = c0.create_dirnode()
1567}
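The common thread in the hunks above is that, under the new semantics, an
upload can succeed only if the happy parameter does not exceed the number of
servers able to accept shares, so each test's happy value is lowered to fit
its grid. A one-line sketch of that necessary condition (illustrative, not a
Tahoe-LAFS API):

    def happy_is_satisfiable(happy, num_servers):
        # necessary, not sufficient: shares must land on at least
        # `happy` distinct servers, which requires that many servers
        return happy <= num_servers

    assert happy_is_satisfiable(5, 10)     # 10 servers, happy=5: ok
    assert not happy_is_satisfiable(5, 2)  # 2 servers, happy=5: must fail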
1568[Revisions of the #778 tests, per reviewers' comments
1569Kevan Carstensen <kevan@isnotajoke.com>**20100514012542
1570 Ignore-this: 735bbc7f663dce633caeb3b66a53cf6e
1571 
1572 - Fix comments and confusing naming.
1573 - Add tests for the new error messages suggested by David-Sarah
1574   and Zooko.
1575 - Alter existing tests for new error messages.
1576 - Make sure that the tests continue to work with the trunk.
1577 - Add a test for a mutual disjointedness assertion that I added to
1578   upload.servers_of_happiness.
1579 - Fix the comments to correctly reflect read-onlyness
1580 - Add a test for an edge case in should_add_server
1581 - Add an assertion to make sure that share redistribution works as it
1582   should
1583 - Alter tests to work with revised servers_of_happiness semantics
1584 - Remove tests for should_add_server, since that function no longer exists.
1585 - Alter tests to know about merge_peers, and to use it before calling
1586   servers_of_happiness.
1587 - Add tests for merge_peers.
1588 - Add Zooko's puzzles to the tests.
1589 - Edit encoding tests to expect the new kind of failure message.
1590 - Edit tests to expect error messages with the word "only" moved as far
1591   to the right as possible.
1592 - Extended and cleaned up some helper functions.
1593 - Changed some tests to call more appropriate helper functions.
1594 - Added a test for the failing redistribution algorithm
1595 - Added a test for the progress message
1596 - Added a test for the upper bound on readonly peer share discovery.
1597 
1598] {
1599hunk ./src/allmydata/test/test_encode.py 28
1600 class FakeBucketReaderWriterProxy:
1601     implements(IStorageBucketWriter, IStorageBucketReader)
1602     # these are used for both reading and writing
1603-    def __init__(self, mode="good"):
1604+    def __init__(self, mode="good", peerid="peer"):
1605         self.mode = mode
1606         self.blocks = {}
1607         self.plaintext_hashes = []
1608hunk ./src/allmydata/test/test_encode.py 36
1609         self.block_hashes = None
1610         self.share_hashes = None
1611         self.closed = False
1612+        self.peerid = peerid
1613 
1614     def get_peerid(self):
1615hunk ./src/allmydata/test/test_encode.py 39
1616-        return "peerid"
1617+        return self.peerid
1618 
1619     def _start(self):
1620         if self.mode == "lost-early":
1621hunk ./src/allmydata/test/test_encode.py 306
1622             for shnum in range(NUM_SHARES):
1623                 peer = FakeBucketReaderWriterProxy()
1624                 shareholders[shnum] = peer
1625-                servermap[shnum] = str(shnum)
1626+                servermap.setdefault(shnum, set()).add(peer.get_peerid())
1627                 all_shareholders.append(peer)
1628             e.set_shareholders(shareholders, servermap)
1629             return e.start()
1630hunk ./src/allmydata/test/test_encode.py 463
1631         def _ready(res):
1632             k,happy,n = e.get_param("share_counts")
1633             assert n == NUM_SHARES # else we'll be completely confused
1634-            all_peers = []
1635+            servermap = {}
1636             for shnum in range(NUM_SHARES):
1637                 mode = bucket_modes.get(shnum, "good")
1638hunk ./src/allmydata/test/test_encode.py 466
1639-                peer = FakeBucketReaderWriterProxy(mode)
1640+                peer = FakeBucketReaderWriterProxy(mode, "peer%d" % shnum)
1641                 shareholders[shnum] = peer
1642hunk ./src/allmydata/test/test_encode.py 468
1643-                servermap[shnum] = str(shnum)
1644+                servermap.setdefault(shnum, set()).add(peer.get_peerid())
1645             e.set_shareholders(shareholders, servermap)
1646             return e.start()
1647         d.addCallback(_ready)
1648hunk ./src/allmydata/test/test_upload.py 16
1649 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
1650 from allmydata.util.assertutil import precondition
1651 from allmydata.util.deferredutil import DeferredListShouldSucceed
1652+from allmydata.util.happinessutil import servers_of_happiness, \
1653+                                         shares_by_server, merge_peers
1654 from no_network import GridTestMixin
1655 from common_util import ShouldFailMixin
1656 from allmydata.storage_client import StorageFarmBroker
1657hunk ./src/allmydata/test/test_upload.py 708
1658         num_segments = encoder.get_param("num_segments")
1659         d = selector.get_shareholders(broker, sh, storage_index,
1660                                       share_size, block_size, num_segments,
1661-                                      10, 4)
1662+                                      10, 3, 4)
1663         def _have_shareholders((used_peers, already_peers)):
1664             assert servers_to_break <= len(used_peers)
1665             for index in xrange(servers_to_break):
1666hunk ./src/allmydata/test/test_upload.py 720
1667             for peer in used_peers:
1668                 buckets.update(peer.buckets)
1669                 for bucket in peer.buckets:
1670-                    servermap[bucket] = peer.peerid
1671+                    servermap.setdefault(bucket, set()).add(peer.peerid)
1672             encoder.set_shareholders(buckets, servermap)
1673             d = encoder.start()
1674             return d
1675hunk ./src/allmydata/test/test_upload.py 730
1676 
1677     def _add_server(self, server_number, readonly=False):
1678         assert self.g, "I tried to find a grid at self.g, but failed"
1679-        assert self.shares, "I tried to find shares at self.shares, but failed"
1680         ss = self.g.make_server(server_number, readonly)
1681         self.g.add_server(server_number, ss)
1682 
1683hunk ./src/allmydata/test/test_upload.py 763
1684         self.failUnless((share_number, ss.my_nodeid, new_share_location)
1685                         in shares)
1686 
1687+    def _setup_grid(self):
1688+        """
1689+        I set up a NoNetworkGrid with a single server and client.
1690+        """
1691+        self.set_up_grid(num_clients=1, num_servers=1)
1692 
1693hunk ./src/allmydata/test/test_upload.py 769
1694-    def _setup_and_upload(self):
1695+    def _setup_and_upload(self, **kwargs):
1696         """
1697         I set up a NoNetworkGrid with a single server and client,
1698         upload a file to it, store its uri in self.uri, and store its
1699hunk ./src/allmydata/test/test_upload.py 775
1700         sharedata in self.shares.
1701         """
1702-        self.set_up_grid(num_clients=1, num_servers=1)
1703+        self._setup_grid()
1704         client = self.g.clients[0]
1705         client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1706hunk ./src/allmydata/test/test_upload.py 778
1707+        if "n" in kwargs and "k" in kwargs:
1708+            client.DEFAULT_ENCODING_PARAMETERS['k'] = kwargs['k']
1709+            client.DEFAULT_ENCODING_PARAMETERS['n'] = kwargs['n']
1710         data = upload.Data("data" * 10000, convergence="")
1711         self.data = data
1712         d = client.upload(data)
1713hunk ./src/allmydata/test/test_upload.py 816
1714 
1715 
1716     def _setUp(self, ns):
1717-        # Used by test_happy_semantics and test_prexisting_share_behavior
1718+        # Used by test_happy_semantics and test_preexisting_share_behavior
1719         # to set up the grid.
1720         self.node = FakeClient(mode="good", num_servers=ns)
1721         self.u = upload.Uploader()
1722hunk ./src/allmydata/test/test_upload.py 827
1723     def test_happy_semantics(self):
1724         self._setUp(2)
1725         DATA = upload.Data("kittens" * 10000, convergence="")
1726-        # These parameters are unsatisfiable with the client that we've made
1727-        # -- we'll use them to test that the semnatics work correctly.
1728+        # These parameters are unsatisfiable with only 2 servers.
1729         self.set_encoding_parameters(k=3, happy=5, n=10)
1730         d = self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1731hunk ./src/allmydata/test/test_upload.py 830
1732-                            "shares could only be placed on 2 servers "
1733-                            "(5 were requested)",
1734+                            "shares could be placed or found on only 2 "
1735+                            "server(s). We were asked to place shares on "
1736+                            "at least 5 server(s) such that any 3 of them "
1737+                            "have enough shares to recover the file",
1738                             self.u.upload, DATA)
1739         # Let's reset the client to have 10 servers
1740         d.addCallback(lambda ign:
1741hunk ./src/allmydata/test/test_upload.py 838
1742             self._setUp(10))
1743-        # These parameters are satisfiable with the client we've made.
1744+        # These parameters are satisfiable with 10 servers.
1745         d.addCallback(lambda ign:
1746             self.set_encoding_parameters(k=3, happy=5, n=10))
1747hunk ./src/allmydata/test/test_upload.py 841
1748-        # this should work
1749         d.addCallback(lambda ign:
1750             self.u.upload(DATA))
1751         # Let's reset the client to have 7 servers
1752hunk ./src/allmydata/test/test_upload.py 847
1753         # (this is less than n, but more than h)
1754         d.addCallback(lambda ign:
1755             self._setUp(7))
1756-        # These encoding parameters should still be satisfiable with our
1757-        # client setup
1758+        # These parameters are satisfiable with 7 servers.
1759         d.addCallback(lambda ign:
1760             self.set_encoding_parameters(k=3, happy=5, n=10))
1761hunk ./src/allmydata/test/test_upload.py 850
1762-        # This, then, should work.
1763         d.addCallback(lambda ign:
1764             self.u.upload(DATA))
1765         return d
1766hunk ./src/allmydata/test/test_upload.py 864
1767         #
1768         # The scenario in comment:52 proposes that we have a layout
1769         # like:
1770-        # server 1: share 1
1771-        # server 2: share 1
1772-        # server 3: share 1
1773-        # server 4: shares 2 - 10
1774+        # server 0: shares 1 - 9
1775+        # server 1: share 0, read-only
1776+        # server 2: share 0, read-only
1777+        # server 3: share 0, read-only
1778         # To get access to the shares, we will first upload to one
1779hunk ./src/allmydata/test/test_upload.py 869
1780-        # server, which will then have shares 1 - 10. We'll then
1781+        # server, which will then have shares 0 - 9. We'll then
1782         # add three new servers, configure them to not accept any new
1783hunk ./src/allmydata/test/test_upload.py 871
1784-        # shares, then write share 1 directly into the serverdir of each.
1785-        # Then each of servers 1 - 3 will report that they have share 1,
1786-        # and will not accept any new share, while server 4 will report that
1787-        # it has shares 2 - 10 and will accept new shares.
1788+        # shares, then write share 0 directly into the serverdir of each,
1789+        # and then remove share 0 from server 0 in the same way.
1790+        # Then each of servers 1 - 3 will report that they have share 0,
1791+        # and will not accept any new share, while server 0 will report that
1792+        # it has shares 1 - 9 and will accept new shares.
1793         # We'll then set 'happy' = 4, and see that an upload fails
1794         # (as it should)
1795         d = self._setup_and_upload()
1796hunk ./src/allmydata/test/test_upload.py 880
1797         d.addCallback(lambda ign:
1798-            self._add_server_with_share(1, 0, True))
1799+            self._add_server_with_share(server_number=1, share_number=0,
1800+                                        readonly=True))
1801         d.addCallback(lambda ign:
1802hunk ./src/allmydata/test/test_upload.py 883
1803-            self._add_server_with_share(2, 0, True))
1804+            self._add_server_with_share(server_number=2, share_number=0,
1805+                                        readonly=True))
1806         d.addCallback(lambda ign:
1807hunk ./src/allmydata/test/test_upload.py 886
1808-            self._add_server_with_share(3, 0, True))
1809+            self._add_server_with_share(server_number=3, share_number=0,
1810+                                        readonly=True))
1811         # Remove the first share from server 0.
1812hunk ./src/allmydata/test/test_upload.py 889
1813-        def _remove_share_0():
1814+        def _remove_share_0_from_server_0():
1815             share_location = self.shares[0][2]
1816             os.remove(share_location)
1817         d.addCallback(lambda ign:
1818hunk ./src/allmydata/test/test_upload.py 893
1819-            _remove_share_0())
1820+            _remove_share_0_from_server_0())
1821         # Set happy = 4 in the client.
1822         def _prepare():
1823             client = self.g.clients[0]
1824hunk ./src/allmydata/test/test_upload.py 903
1825             _prepare())
1826         # Uploading data should fail
1827         d.addCallback(lambda client:
1828-            self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1829-                            "shares could only be placed on 2 servers "
1830-                            "(4 were requested)",
1831+            self.shouldFail(UploadUnhappinessError,
1832+                            "test_problem_layout_comment_52_test_1",
1833+                            "shares could be placed or found on 4 server(s), "
1834+                            "but they are not spread out evenly enough to "
1835+                            "ensure that any 3 of these servers would have "
1836+                            "enough shares to recover the file. "
1837+                            "We were asked to place shares on at "
1838+                            "least 4 servers such that any 3 of them have "
1839+                            "enough shares to recover the file",
1840                             client.upload, upload.Data("data" * 10000,
1841                                                        convergence="")))
1842 
1843hunk ./src/allmydata/test/test_upload.py 925
1844         d.addCallback(lambda ign:
1845             self._setup_and_upload())
1846         d.addCallback(lambda ign:
1847-            self._add_server_with_share(server_number=2))
1848+            self._add_server(server_number=2))
1849         d.addCallback(lambda ign:
1850             self._add_server_with_share(server_number=3, share_number=0,
1851                                         readonly=True))
1852hunk ./src/allmydata/test/test_upload.py 934
1853                                         readonly=True))
1854         def _prepare2():
1855             client = self.g.clients[0]
1856-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
1857+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1858             return client
1859         d.addCallback(lambda ign:
1860             _prepare2())
1861hunk ./src/allmydata/test/test_upload.py 939
1862         d.addCallback(lambda client:
1863-            self.shouldFail(UploadUnhappinessError, "test_happy_sematics",
1864-                            "shares could only be placed on 2 servers "
1865-                            "(3 were requested)",
1866+            self.shouldFail(UploadUnhappinessError,
1867+                            "test_problem_layout_comment_52_test_2",
1868+                            "shares could be placed on only 3 server(s) such "
1869+                            "that any 3 of them have enough shares to recover "
1870+                            "the file, but we were asked to place shares on "
1871+                            "at least 4 such servers.",
1872                             client.upload, upload.Data("data" * 10000,
1873                                                        convergence="")))
1874         return d
1875hunk ./src/allmydata/test/test_upload.py 958
1876         def _change_basedir(ign):
1877             self.basedir = self.mktemp()
1878         _change_basedir(None)
1879-        d = self._setup_and_upload()
1880-        # We start by uploading all of the shares to one server (which has
1881-        # already been done above).
1882+        # We start by uploading all of the shares to one server.
1883         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
1884         # one share from our initial upload to each of these.
1885         # The counterintuitive ordering of the share numbers is to deal with
1886hunk ./src/allmydata/test/test_upload.py 964
1887         # the permuting of these servers -- distributing the shares this
1888         # way ensures that the Tahoe2PeerSelector sees them in the order
1889-        # described above.
1890+        # described below.
1891+        d = self._setup_and_upload()
1892         d.addCallback(lambda ign:
1893             self._add_server_with_share(server_number=1, share_number=2))
1894         d.addCallback(lambda ign:
1895hunk ./src/allmydata/test/test_upload.py 977
1896         # server 1: share 2
1897         # server 2: share 0
1898         # server 3: share 1
1899-        # We want to change the 'happy' parameter in the client to 4.
1900+        # We change the 'happy' parameter in the client to 4.
1901         # The Tahoe2PeerSelector will see the peers permuted as:
1902         # 2, 3, 1, 0
1903         # Ideally, a reupload of our original data should work.
1904hunk ./src/allmydata/test/test_upload.py 990
1905             client.upload(upload.Data("data" * 10000, convergence="")))
1906 
1907 
1908-        # This scenario is basically comment:53, but with the order reversed;
1909-        # this means that the Tahoe2PeerSelector sees
1910-        # server 2: shares 1-10
1911-        # server 3: share 1
1912-        # server 1: share 2
1913-        # server 4: share 3
1914+        # This scenario is basically comment:53, but changed so that the
1915+        # Tahoe2PeerSelector sees the server with all of the shares before
1916+        # any of the other servers.
1917+        # The layout is:
1918+        # server 2: shares 0 - 9
1919+        # server 3: share 0
1920+        # server 1: share 1
1921+        # server 4: share 2
1922+        # The Tahoe2PeerSelector sees the peers permuted as:
1923+        # 2, 3, 1, 4
1924+        # Note that server 0 has been replaced by server 4; this makes it
1925+        # easier to ensure that the last server seen by Tahoe2PeerSelector
1926+        # has only one share.
1927         d.addCallback(_change_basedir)
1928         d.addCallback(lambda ign:
1929             self._setup_and_upload())
1930hunk ./src/allmydata/test/test_upload.py 1014
1931             self._add_server_with_share(server_number=1, share_number=2))
1932         # Copy all of the other shares to server number 2
1933         def _copy_shares(ign):
1934-            for i in xrange(1, 10):
1935+            for i in xrange(0, 10):
1936                 self._copy_share_to_server(i, 2)
1937         d.addCallback(_copy_shares)
1938         # Remove the first server, and add a placeholder with share 0
1939hunk ./src/allmydata/test/test_upload.py 1026
1940         d.addCallback(_reset_encoding_parameters)
1941         d.addCallback(lambda client:
1942             client.upload(upload.Data("data" * 10000, convergence="")))
1943+
1944+
1945         # Try the same thing, but with empty servers after the first one
1946         # We want to make sure that Tahoe2PeerSelector will redistribute
1947         # shares as necessary, not simply discover an existing layout.
1948hunk ./src/allmydata/test/test_upload.py 1031
1949+        # The layout is:
1950+        # server 2: shares 0 - 9
1951+        # server 3: empty
1952+        # server 1: empty
1953+        # server 4: empty
1954         d.addCallback(_change_basedir)
1955         d.addCallback(lambda ign:
1956             self._setup_and_upload())
1957hunk ./src/allmydata/test/test_upload.py 1045
1958             self._add_server(server_number=3))
1959         d.addCallback(lambda ign:
1960             self._add_server(server_number=1))
1961+        d.addCallback(lambda ign:
1962+            self._add_server(server_number=4))
1963         d.addCallback(_copy_shares)
1964         d.addCallback(lambda ign:
1965             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1966hunk ./src/allmydata/test/test_upload.py 1050
1967-        d.addCallback(lambda ign:
1968-            self._add_server(server_number=4))
1969         d.addCallback(_reset_encoding_parameters)
1970         d.addCallback(lambda client:
1971             client.upload(upload.Data("data" * 10000, convergence="")))
1972hunk ./src/allmydata/test/test_upload.py 1053
1973+        # Make sure that only as many shares as necessary to satisfy
1974+        # servers of happiness were pushed.
1975+        d.addCallback(lambda results:
1976+            self.failUnlessEqual(results.pushed_shares, 3))
1977         return d
1978 
1979 
1980hunk ./src/allmydata/test/test_upload.py 1135
1981 
1982 
1983     def test_dropped_servers_in_encoder(self):
1984+        # The Encoder does its own "servers_of_happiness" check if it
1985+        # happens to lose a bucket during an upload (it assumes that
1986+        # the layout presented to it satisfies "servers_of_happiness"
1987+        # until a failure occurs)
1988+        #
1989+        # This test simulates an upload where servers break after peer
1990+        # selection, but before they are written to.
1991         def _set_basedir(ign=None):
1992             self.basedir = self.mktemp()
1993         _set_basedir()
1994hunk ./src/allmydata/test/test_upload.py 1148
1995         d = self._setup_and_upload();
1996         # Add 5 servers
1997         def _do_server_setup(ign):
1998-            self._add_server_with_share(1)
1999-            self._add_server_with_share(2)
2000-            self._add_server_with_share(3)
2001-            self._add_server_with_share(4)
2002-            self._add_server_with_share(5)
2003+            self._add_server(server_number=1)
2004+            self._add_server(server_number=2)
2005+            self._add_server(server_number=3)
2006+            self._add_server(server_number=4)
2007+            self._add_server(server_number=5)
2008         d.addCallback(_do_server_setup)
2009         # remove the original server
2010         # (necessary to ensure that the Tahoe2PeerSelector will distribute
2011hunk ./src/allmydata/test/test_upload.py 1161
2012             server = self.g.servers_by_number[0]
2013             self.g.remove_server(server.my_nodeid)
2014         d.addCallback(_remove_server)
2015-        # This should succeed.
2016+        # This should succeed; we still have 4 servers, and the
2017+        # happiness of the upload is 4.
2018         d.addCallback(lambda ign:
2019             self._do_upload_with_broken_servers(1))
2020         # Now, do the same thing over again, but drop 2 servers instead
2021hunk ./src/allmydata/test/test_upload.py 1166
2022-        # of 1. This should fail.
2023+        # of 1. This should fail, because servers_of_happiness is 4 and
2024+        # we can't satisfy that.
2025         d.addCallback(_set_basedir)
2026         d.addCallback(lambda ign:
2027             self._setup_and_upload())
2028hunk ./src/allmydata/test/test_upload.py 1176
2029         d.addCallback(lambda ign:
2030             self.shouldFail(UploadUnhappinessError,
2031                             "test_dropped_servers_in_encoder",
2032-                            "lost too many servers during upload "
2033-                            "(still have 3, want 4)",
2034+                            "shares could be placed on only 3 server(s) "
2035+                            "such that any 3 of them have enough shares to "
2036+                            "recover the file, but we were asked to place "
2037+                            "shares on at least 4",
2038                             self._do_upload_with_broken_servers, 2))
2039         # Now do the same thing over again, but make some of the servers
2040         # readonly, break some of the ones that aren't, and make sure that
2041hunk ./src/allmydata/test/test_upload.py 1188
2042         d.addCallback(lambda ign:
2043             self._setup_and_upload())
2044         def _do_server_setup_2(ign):
2045-            self._add_server_with_share(1)
2046-            self._add_server_with_share(2)
2047-            self._add_server_with_share(3)
2048+            self._add_server(1)
2049+            self._add_server(2)
2050+            self._add_server(3)
2051             self._add_server_with_share(4, 7, readonly=True)
2052             self._add_server_with_share(5, 8, readonly=True)
2053         d.addCallback(_do_server_setup_2)
2054hunk ./src/allmydata/test/test_upload.py 1205
2055         d.addCallback(lambda ign:
2056             self.shouldFail(UploadUnhappinessError,
2057                             "test_dropped_servers_in_encoder",
2058-                            "lost too many servers during upload "
2059-                            "(still have 3, want 4)",
2060+                            "shares could be placed on only 3 server(s) "
2061+                            "such that any 3 of them have enough shares to "
2062+                            "recover the file, but we were asked to place "
2063+                            "shares on at least 4",
2064                             self._do_upload_with_broken_servers, 2))
2065         return d
2066 
2067hunk ./src/allmydata/test/test_upload.py 1213
2068 
2069-    def test_servers_with_unique_shares(self):
2070-        # servers_with_unique_shares expects a dict of
2071-        # shnum => peerid as a preexisting shares argument.
2072+    def test_merge_peers(self):
2073+        # merge_peers merges a set of used_peers and a dict of
2074+        # shnum -> set(peerids) mappings.
2075+        shares = {
2076+                    1 : set(["server1"]),
2077+                    2 : set(["server2"]),
2078+                    3 : set(["server3"]),
2079+                    4 : set(["server4", "server5"]),
2080+                    5 : set(["server1", "server2"]),
2081+                 }
2082+        # if given an empty used_peers set, it should just
2083+        # return the first argument unchanged.
2084+        self.failUnlessEqual(shares, merge_peers(shares, set([])))
2085+        class FakePeerTracker:
2086+            pass
2087+        trackers = []
2088+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
2089+            t = FakePeerTracker()
2090+            t.peerid = server
2091+            t.buckets = [i]
2092+            trackers.append(t)
2093+        expected = {
2094+                    1 : set(["server1"]),
2095+                    2 : set(["server2"]),
2096+                    3 : set(["server3"]),
2097+                    4 : set(["server4", "server5"]),
2098+                    5 : set(["server1", "server2", "server5"]),
2099+                    6 : set(["server6"]),
2100+                    7 : set(["server7"]),
2101+                    8 : set(["server8"]),
2102+                   }
2103+        self.failUnlessEqual(expected, merge_peers(shares, set(trackers)))
2104+        shares2 = {}
2105+        expected = {
2106+                    5 : set(["server5"]),
2107+                    6 : set(["server6"]),
2108+                    7 : set(["server7"]),
2109+                    8 : set(["server8"]),
2110+                   }
2111+        self.failUnlessEqual(expected, merge_peers(shares2, set(trackers)))
2112+        shares3 = {}
2113+        trackers = []
2114+        expected = {}
2115+        for (i, server) in [(i, "server%d" % i) for i in xrange(10)]:
2116+            shares3[i] = set([server])
2117+            t = FakePeerTracker()
2118+            t.peerid = server
2119+            t.buckets = [i]
2120+            trackers.append(t)
2121+            expected[i] = set([server])
2122+        self.failUnlessEqual(expected, merge_peers(shares3, set(trackers)))
2123+
2124+
2125+    def test_servers_of_happiness_utility_function(self):
2126+        # These tests are concerned with the servers_of_happiness()
2127+        # utility function, and its underlying matching algorithm. Other
2128+        # aspects of the servers_of_happiness behavior are tested
2129+        # elsewhere. These tests exist to ensure that
2130+        # servers_of_happiness doesn't under- or overcount the happiness
2131+        # value for given inputs.
2132+
2133+        # servers_of_happiness expects a dict of
2134+        # shnum => set(peerids) as a preexisting shares argument.
2135         test1 = {
2136hunk ./src/allmydata/test/test_upload.py 1277
2137-                 1 : "server1",
2138-                 2 : "server2",
2139-                 3 : "server3",
2140-                 4 : "server4"
2141+                 1 : set(["server1"]),
2142+                 2 : set(["server2"]),
2143+                 3 : set(["server3"]),
2144+                 4 : set(["server4"])
2145                 }
2146hunk ./src/allmydata/test/test_upload.py 1282
2147-        unique_servers = upload.servers_with_unique_shares(test1)
2148-        self.failUnlessEqual(4, len(unique_servers))
2149-        for server in ["server1", "server2", "server3", "server4"]:
2150-            self.failUnlessIn(server, unique_servers)
2151-        test1[4] = "server1"
2152-        # Now there should only be 3 unique servers.
2153-        unique_servers = upload.servers_with_unique_shares(test1)
2154-        self.failUnlessEqual(3, len(unique_servers))
2155-        for server in ["server1", "server2", "server3"]:
2156-            self.failUnlessIn(server, unique_servers)
2157-        # servers_with_unique_shares expects to receive some object with
2158-        # a peerid attribute. So we make a FakePeerTracker whose only
2159-        # job is to have a peerid attribute.
2160+        happy = servers_of_happiness(test1)
2161+        self.failUnlessEqual(4, happy)
2162+        test1[4] = set(["server1"])
2163+        # We've added a duplicate server, so now servers_of_happiness
2164+        # should be 3 instead of 4.
2165+        happy = servers_of_happiness(test1)
2166+        self.failUnlessEqual(3, happy)
2167+        # The second argument of merge_peers should be a set of
2168+        # objects with peerid and buckets as attributes. In actual use,
2169+        # these will be PeerTracker instances, but for testing it is fine
2170+        # to make a FakePeerTracker whose job is to hold those instance
2171+        # variables to test that part.
2172         class FakePeerTracker:
2173             pass
2174         trackers = []
2175hunk ./src/allmydata/test/test_upload.py 1302
2176             t.peerid = server
2177             t.buckets = [i]
2178             trackers.append(t)
2179-        # Recall that there are 3 unique servers in test1. Since none of
2180-        # those overlap with the ones in trackers, we should get 7 back
2181-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
2182-        self.failUnlessEqual(7, len(unique_servers))
2183-        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
2184-        expected_servers.remove("server4")
2185-        for server in expected_servers:
2186-            self.failUnlessIn(server, unique_servers)
2187-        # Now add an overlapping server to trackers.
2188+        # Recall that test1 is a server layout with servers_of_happiness
2189+        # = 3.  Since there isn't any overlap between the shnum ->
2190+        # set([peerid]) correspondences in test1 and those in trackers,
2191+        # the result here should be 7.
2192+        test2 = merge_peers(test1, set(trackers))
2193+        happy = servers_of_happiness(test2)
2194+        self.failUnlessEqual(7, happy)
2195+        # Now add an overlapping server to trackers. This is redundant,
2196+        # so it should not cause the previously reported happiness value
2197+        # to change.
2198         t = FakePeerTracker()
2199         t.peerid = "server1"
2200         t.buckets = [1]
2201hunk ./src/allmydata/test/test_upload.py 1316
2202         trackers.append(t)
2203-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
2204-        self.failUnlessEqual(7, len(unique_servers))
2205-        for server in expected_servers:
2206-            self.failUnlessIn(server, unique_servers)
2207+        test2 = merge_peers(test1, set(trackers))
2208+        happy = servers_of_happiness(test2)
2209+        self.failUnlessEqual(7, happy)
2210         test = {}
2211hunk ./src/allmydata/test/test_upload.py 1320
2212-        unique_servers = upload.servers_with_unique_shares(test)
2213-        self.failUnlessEqual(0, len(unique_servers))
2214+        happy = servers_of_happiness(test)
2215+        self.failUnlessEqual(0, happy)
2216+        # Test a more substantial overlap between the trackers and the
2217+        # existing assignments.
2218+        test = {
2219+            1 : set(['server1']),
2220+            2 : set(['server2']),
2221+            3 : set(['server3']),
2222+            4 : set(['server4']),
2223+        }
2224+        trackers = []
2225+        t = FakePeerTracker()
2226+        t.peerid = 'server5'
2227+        t.buckets = [4]
2228+        trackers.append(t)
2229+        t = FakePeerTracker()
2230+        t.peerid = 'server6'
2231+        t.buckets = [3, 5]
2232+        trackers.append(t)
2233+        # The value returned by servers_of_happiness is the size
2234+        # of a maximum matching in the bipartite graph that
2235+        # servers_of_happiness() makes between peerids and share
2236+        # numbers. It should find something like this:
2237+        # (server 1, share 1)
2238+        # (server 2, share 2)
2239+        # (server 3, share 3)
2240+        # (server 5, share 4)
2241+        # (server 6, share 5)
2242+        #
2243+        # and, since there are 5 edges in this matching, it should
2244+        # return 5.
2245+        test2 = merge_peers(test, set(trackers))
2246+        happy = servers_of_happiness(test2)
2247+        self.failUnlessEqual(5, happy)
2248+        # Zooko's first puzzle:
2249+        # (from http://allmydata.org/trac/tahoe-lafs/ticket/778#comment:156)
2250+        #
2251+        # server 1: shares 0, 1
2252+        # server 2: shares 1, 2
2253+        # server 3: share 2
2254+        #
2255+        # This should yield happiness of 3.
2256+        test = {
2257+            0 : set(['server1']),
2258+            1 : set(['server1', 'server2']),
2259+            2 : set(['server2', 'server3']),
2260+        }
2261+        self.failUnlessEqual(3, servers_of_happiness(test))
2262+        # Zooko's second puzzle:
2263+        # (from http://allmydata.org/trac/tahoe-lafs/ticket/778#comment:158)
2264+        #
2265+        # server 1: shares 0, 1
2266+        # server 2: share 1
2267+        #
2268+        # This should yield happiness of 2.
2269+        test = {
2270+            0 : set(['server1']),
2271+            1 : set(['server1', 'server2']),
2272+        }
2273+        self.failUnlessEqual(2, servers_of_happiness(test))
2274 
2275 
2276     def test_shares_by_server(self):
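The comments in the hunk above describe servers_of_happiness() as the size
of a maximum matching in a bipartite graph between peerids and share
numbers. A minimal sketch of that computation via the standard
augmenting-path method, over the shnum -> set(peerids) mappings the tests
use (a sketch, not the happinessutil implementation):

    def matching_size(sharemap):
        # sharemap: dict of shnum -> set(peerids)
        matched = {}  # peerid -> shnum currently matched to it
        def try_place(shnum, seen):
            for peerid in sharemap[shnum]:
                if peerid in seen:
                    continue
                seen.add(peerid)
                # take a free peer, or evict a share that can be re-placed
                if peerid not in matched or try_place(matched[peerid], seen):
                    matched[peerid] = shnum
                    return True
            return False
        return sum(1 for shnum in sharemap if try_place(shnum, set()))

    # Zooko's puzzles from the hunk above:
    assert matching_size({0: set(["server1"]),
                          1: set(["server1", "server2"]),
                          2: set(["server2", "server3"])}) == 3
    assert matching_size({0: set(["server1"]),
                          1: set(["server1", "server2"])}) == 2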
2277hunk ./src/allmydata/test/test_upload.py 1383
2278-        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
2279-        shares_by_server = upload.shares_by_server(test)
2280-        self.failUnlessEqual(set([1]), shares_by_server["server1"])
2281-        self.failUnlessEqual(set([2]), shares_by_server["server2"])
2282-        self.failUnlessEqual(set([3]), shares_by_server["server3"])
2283-        self.failUnlessEqual(set([4]), shares_by_server["server4"])
2284+        test = dict([(i, set(["server%d" % i])) for i in xrange(1, 5)])
2285+        sbs = shares_by_server(test)
2286+        self.failUnlessEqual(set([1]), sbs["server1"])
2287+        self.failUnlessEqual(set([2]), sbs["server2"])
2288+        self.failUnlessEqual(set([3]), sbs["server3"])
2289+        self.failUnlessEqual(set([4]), sbs["server4"])
2290         test1 = {
2291hunk ./src/allmydata/test/test_upload.py 1390
2292-                    1 : "server1",
2293-                    2 : "server1",
2294-                    3 : "server1",
2295-                    4 : "server2",
2296-                    5 : "server2"
2297+                    1 : set(["server1"]),
2298+                    2 : set(["server1"]),
2299+                    3 : set(["server1"]),
2300+                    4 : set(["server2"]),
2301+                    5 : set(["server2"])
2302                 }
2303hunk ./src/allmydata/test/test_upload.py 1396
2304-        shares_by_server = upload.shares_by_server(test1)
2305-        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
2306-        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
2307+        sbs = shares_by_server(test1)
2308+        self.failUnlessEqual(set([1, 2, 3]), sbs["server1"])
2309+        self.failUnlessEqual(set([4, 5]), sbs["server2"])
2310+        # This should fail unless the peerid part of the mapping is a set
2311+        test2 = {1: "server1"}
2312+        self.shouldFail(AssertionError,
2313+                       "test_shares_by_server",
2314+                       "",
2315+                       shares_by_server, test2)
2316 
2317 
2318     def test_existing_share_detection(self):
2319hunk ./src/allmydata/test/test_upload.py 1411
2320         self.basedir = self.mktemp()
2321         d = self._setup_and_upload()
2322         # Our final setup should look like this:
2323-        # server 1: shares 1 - 10, read-only
2324+        # server 1: shares 0 - 9, read-only
2325         # server 2: empty
2326         # server 3: empty
2327         # server 4: empty
2328hunk ./src/allmydata/test/test_upload.py 1422
2329         d.addCallback(lambda ign:
2330             self._add_server_with_share(1, 0, True))
2331         d.addCallback(lambda ign:
2332-            self._add_server_with_share(2))
2333+            self._add_server(2))
2334         d.addCallback(lambda ign:
2335hunk ./src/allmydata/test/test_upload.py 1424
2336-            self._add_server_with_share(3))
2337+            self._add_server(3))
2338         d.addCallback(lambda ign:
2339hunk ./src/allmydata/test/test_upload.py 1426
2340-            self._add_server_with_share(4))
2341+            self._add_server(4))
2342         def _copy_shares(ign):
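+            # (share 0 was already placed on server 1 above; copying
+            # shares 1 - 9 gives it the full set 0 - 9 described in the
+            # comment.)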
2343             for i in xrange(1, 10):
2344                 self._copy_share_to_server(i, 1)
2345hunk ./src/allmydata/test/test_upload.py 1443
2346         return d
2347 
2348 
2349-    def test_should_add_server(self):
2350-        shares = dict([(i, "server%d" % i) for i in xrange(10)])
2351-        self.failIf(upload.should_add_server(shares, "server1", 4))
2352-        shares[4] = "server1"
2353-        self.failUnless(upload.should_add_server(shares, "server4", 4))
2354-        shares = {}
2355-        self.failUnless(upload.should_add_server(shares, "server1", 1))
2356+    def test_query_counting(self):
2357+        # If peer selection fails, Tahoe2PeerSelector prints out a lot
2358+        # of helpful diagnostic information, including query stats.
2359+        # This test helps make sure that that information is accurate.
2360+        self.basedir = self.mktemp()
2361+        d = self._setup_and_upload()
2362+        def _setup(ign):
2363+            for i in xrange(1, 11):
2364+                self._add_server(server_number=i)
2365+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
2366+            c = self.g.clients[0]
2367+            # We set happy to an unsatisfiable value so that we can check the
2368+            # counting in the exception message. The same progress message
2369+            # is also used when the upload is successful, but in that case it
2370+            # only gets written to a log, so we can't see what it says.
2371+            c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
2372+            return c
2373+        d.addCallback(_setup)
2374+        d.addCallback(lambda c:
2375+            self.shouldFail(UploadUnhappinessError, "test_query_counting",
2376+                            "10 queries placed some shares",
2377+                            c.upload, upload.Data("data" * 10000,
2378+                                                  convergence="")))
2379+        # Now try with some read-only servers. We want to make sure that
2380+        # the read-only peer share discovery phase is counted correctly.
2381+        def _reset(ign):
2382+            self.basedir = self.mktemp()
2383+            self.g = None
2384+        d.addCallback(_reset)
2385+        d.addCallback(lambda ign:
2386+            self._setup_and_upload())
2387+        def _then(ign):
2388+            for i in xrange(1, 11):
2389+                self._add_server(server_number=i)
2390+            self._add_server(server_number=11, readonly=True)
2391+            self._add_server(server_number=12, readonly=True)
2392+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
2393+            c = self.g.clients[0]
2394+            c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
2395+            return c
2396+        d.addCallback(_then)
2397+        d.addCallback(lambda c:
2398+            self.shouldFail(UploadUnhappinessError, "test_query_counting",
2399+                            "2 placed none (of which 2 placed none due to "
2400+                            "the server being full",
2401+                            c.upload, upload.Data("data" * 10000,
2402+                                                  convergence="")))
2403+        # Now try the case where the upload process finds most of the
2404+        # shares it wants to place already on the first server it asks,
2405+        # including the one it wanted to allocate there. Though no shares
2406+        # will be allocated by this request, it should still be counted
2407+        # as productive, since it causes some homeless shares to be
2408+        # removed.
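+        # (That single allocation query -- the one that finds the
+        # existing shares on server 9 -- is what "1 queries placed some
+        # shares" below refers to.)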
2409+        d.addCallback(_reset)
2410+        d.addCallback(lambda ign:
2411+            self._setup_and_upload())
2412+
2413+        def _next(ign):
2414+            for i in xrange(1, 11):
2415+                self._add_server(server_number=i)
2416+            # Copy all of the shares to server 9, since that will be
2417+            # the first one that the selector sees.
2418+            for i in xrange(10):
2419+                self._copy_share_to_server(i, 9)
2420+            # Remove server 0, and its contents
2421+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
2422+            # Make happiness unsatisfiable
2423+            c = self.g.clients[0]
2424+            c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
2425+            return c
2426+        d.addCallback(_next)
2427+        d.addCallback(lambda c:
2428+            self.shouldFail(UploadUnhappinessError, "test_query_counting",
2429+                            "1 queries placed some shares",
2430+                            c.upload, upload.Data("data" * 10000,
2431+                                                  convergence="")))
2432+        return d
2433+
2434+
2435+    def test_upper_limit_on_readonly_queries(self):
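+        # Check the upper bound on read-only peer share discovery: ten
+        # read-only servers are available, but the expected message
+        # below shows that only 8 of them are queried for existing
+        # shares (with n = happy = 4 here, the cap is presumably twice
+        # that value, though the formula isn't shown in this patch).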
2436+        self.basedir = self.mktemp()
2437+        d = self._setup_and_upload()
2438+        def _then(ign):
2439+            for i in xrange(1, 11):
2440+                self._add_server(server_number=i, readonly=True)
2441+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
2442+            c = self.g.clients[0]
2443+            c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
2444+            c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
2445+            c.DEFAULT_ENCODING_PARAMETERS['n'] = 4
2446+            return c
2447+        d.addCallback(_then)
2448+        d.addCallback(lambda client:
2449+            self.shouldFail(UploadUnhappinessError,
2450+                            "test_upper_limit_on_readonly_queries",
2451+                            "sent 8 queries to 8 peers",
2452+                            client.upload,
2453+                            upload.Data('data' * 10000, convergence="")))
2454+        return d
2455 
2456 
2457     def test_exception_messages_during_peer_selection(self):
2458hunk ./src/allmydata/test/test_upload.py 1545
2459-        # server 1: readonly, no shares
2460-        # server 2: readonly, no shares
2461-        # server 3: readonly, no shares
2462-        # server 4: readonly, no shares
2463-        # server 5: readonly, no shares
2464+        # server 1: read-only, no shares
2465+        # server 2: read-only, no shares
2466+        # server 3: read-only, no shares
2467+        # server 4: read-only, no shares
2468+        # server 5: read-only, no shares
2469         # This will fail, but we want to make sure that the log messages
2470         # are informative about why it has failed.
2471         self.basedir = self.mktemp()
2472hunk ./src/allmydata/test/test_upload.py 1555
2473         d = self._setup_and_upload()
2474         d.addCallback(lambda ign:
2475-            self._add_server_with_share(server_number=1, readonly=True))
2476+            self._add_server(server_number=1, readonly=True))
2477         d.addCallback(lambda ign:
2478hunk ./src/allmydata/test/test_upload.py 1557
2479-            self._add_server_with_share(server_number=2, readonly=True))
2480+            self._add_server(server_number=2, readonly=True))
2481         d.addCallback(lambda ign:
2482hunk ./src/allmydata/test/test_upload.py 1559
2483-            self._add_server_with_share(server_number=3, readonly=True))
2484+            self._add_server(server_number=3, readonly=True))
2485         d.addCallback(lambda ign:
2486hunk ./src/allmydata/test/test_upload.py 1561
2487-            self._add_server_with_share(server_number=4, readonly=True))
2488+            self._add_server(server_number=4, readonly=True))
2489         d.addCallback(lambda ign:
2490hunk ./src/allmydata/test/test_upload.py 1563
2491-            self._add_server_with_share(server_number=5, readonly=True))
2492+            self._add_server(server_number=5, readonly=True))
2493         d.addCallback(lambda ign:
2494             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
2495         def _reset_encoding_parameters(ign):
2496hunk ./src/allmydata/test/test_upload.py 1573
2497         d.addCallback(_reset_encoding_parameters)
2498         d.addCallback(lambda client:
2499             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2500-                            "peer selection failed for <Tahoe2PeerSelector "
2501-                            "for upload dglev>: placed 0 shares out of 10 "
2502-                            "total (10 homeless), want to place on 4 servers,"
2503-                            " sent 5 queries to 5 peers, 0 queries placed "
2504+                            "placed 0 shares out of 10 "
2505+                            "total (10 homeless), want to place shares on at "
2506+                            "least 4 servers such that any 3 of them have "
2507+                            "enough shares to recover the file, "
2508+                            "sent 5 queries to 5 peers, 0 queries placed "
2509                             "some shares, 5 placed none "
2510                             "(of which 5 placed none due to the server being "
2511                             "full and 0 placed none due to an error)",
2512hunk ./src/allmydata/test/test_upload.py 1585
2513                             upload.Data("data" * 10000, convergence="")))
2514 
2515 
2516-        # server 1: readonly, no shares
2517+        # server 1: read-only, no shares
2518         # server 2: broken, no shares
2519hunk ./src/allmydata/test/test_upload.py 1587
2520-        # server 3: readonly, no shares
2521-        # server 4: readonly, no shares
2522-        # server 5: readonly, no shares
2523+        # server 3: read-only, no shares
2524+        # server 4: read-only, no shares
2525+        # server 5: read-only, no shares
2526         def _reset(ign):
2527             self.basedir = self.mktemp()
2528         d.addCallback(_reset)
2529hunk ./src/allmydata/test/test_upload.py 1596
2530         d.addCallback(lambda ign:
2531             self._setup_and_upload())
2532         d.addCallback(lambda ign:
2533-            self._add_server_with_share(server_number=1, readonly=True))
2534+            self._add_server(server_number=1, readonly=True))
2535         d.addCallback(lambda ign:
2536hunk ./src/allmydata/test/test_upload.py 1598
2537-            self._add_server_with_share(server_number=2))
2538+            self._add_server(server_number=2))
2539         def _break_server_2(ign):
2540             server = self.g.servers_by_number[2].my_nodeid
2541             # We have to break the server in servers_by_id,
2542hunk ./src/allmydata/test/test_upload.py 1602
2543-            # because the ones in servers_by_number isn't wrapped,
2544-            # and doesn't look at its broken attribute
2545+            # because the one in servers_by_number isn't wrapped,
2546+            # and doesn't look at its broken attribute when answering
2547+            # queries.
2548             self.g.servers_by_id[server].broken = True
2549         d.addCallback(_break_server_2)
2550         d.addCallback(lambda ign:
2551hunk ./src/allmydata/test/test_upload.py 1608
2552-            self._add_server_with_share(server_number=3, readonly=True))
2553+            self._add_server(server_number=3, readonly=True))
2554         d.addCallback(lambda ign:
2555hunk ./src/allmydata/test/test_upload.py 1610
2556-            self._add_server_with_share(server_number=4, readonly=True))
2557+            self._add_server(server_number=4, readonly=True))
2558         d.addCallback(lambda ign:
2559hunk ./src/allmydata/test/test_upload.py 1612
2560-            self._add_server_with_share(server_number=5, readonly=True))
2561+            self._add_server(server_number=5, readonly=True))
2562         d.addCallback(lambda ign:
2563             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
2564hunk ./src/allmydata/test/test_upload.py 1615
2565-        def _reset_encoding_parameters(ign):
2566+        def _reset_encoding_parameters(ign, happy=4):
2567             client = self.g.clients[0]
2568hunk ./src/allmydata/test/test_upload.py 1617
2569-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
2570+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
2571             return client
2572         d.addCallback(_reset_encoding_parameters)
2573         d.addCallback(lambda client:
2574hunk ./src/allmydata/test/test_upload.py 1622
2575             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2576-                            "peer selection failed for <Tahoe2PeerSelector "
2577-                            "for upload dglev>: placed 0 shares out of 10 "
2578-                            "total (10 homeless), want to place on 4 servers,"
2579-                            " sent 5 queries to 5 peers, 0 queries placed "
2580+                            "placed 0 shares out of 10 "
2581+                            "total (10 homeless), want to place shares on at "
2582+                            "least 4 servers such that any 3 of them have "
2583+                            "enough shares to recover the file, "
2584+                            "sent 5 queries to 5 peers, 0 queries placed "
2585                             "some shares, 5 placed none "
2586                             "(of which 4 placed none due to the server being "
2587                             "full and 1 placed none due to an error)",
2588hunk ./src/allmydata/test/test_upload.py 1632
2589                             client.upload,
2590                             upload.Data("data" * 10000, convergence="")))
2591+        # server 0, server 1 = empty, accepting shares
2592+        # This should place all of the shares, but still fail with happy=4.
2593+        # We want to make sure that the exception message is worded correctly.
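+        # (With only 2 servers, the effective happiness is below k = 3,
+        # which is what selects this form of the message; compare the
+        # next case, where it is above k.)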
2594+        d.addCallback(_reset)
2595+        d.addCallback(lambda ign:
2596+            self._setup_grid())
2597+        d.addCallback(lambda ign:
2598+            self._add_server(server_number=1))
2599+        d.addCallback(_reset_encoding_parameters)
2600+        d.addCallback(lambda client:
2601+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2602+                            "shares could be placed or found on only 2 "
2603+                            "server(s). We were asked to place shares on at "
2604+                            "least 4 server(s) such that any 3 of them have "
2605+                            "enough shares to recover the file.",
2606+                            client.upload, upload.Data("data" * 10000,
2607+                                                       convergence="")))
2608+        # servers 0 - 4 = empty, accepting shares
2609+        # This too should place all the shares, and this too should fail,
2610+        # but since the effective happiness is more than the k encoding
2611+        # parameter, it should trigger a different error message than the one
2612+        # above.
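+        # (Five empty, writable servers give an effective happiness of
+        # 5: at least k = 3, but short of the requested happy = 7.)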
2613+        d.addCallback(_reset)
2614+        d.addCallback(lambda ign:
2615+            self._setup_grid())
2616+        d.addCallback(lambda ign:
2617+            self._add_server(server_number=1))
2618+        d.addCallback(lambda ign:
2619+            self._add_server(server_number=2))
2620+        d.addCallback(lambda ign:
2621+            self._add_server(server_number=3))
2622+        d.addCallback(lambda ign:
2623+            self._add_server(server_number=4))
2624+        d.addCallback(_reset_encoding_parameters, happy=7)
2625+        d.addCallback(lambda client:
2626+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2627+                            "shares could be placed on only 5 server(s) such "
2628+                            "that any 3 of them have enough shares to recover "
2629+                            "the file, but we were asked to place shares on "
2630+                            "at least 7 such servers.",
2631+                            client.upload, upload.Data("data" * 10000,
2632+                                                       convergence="")))
2633+        # server 0: shares 0 - 9
2634+        # server 1: share 0, read-only
2635+        # server 2: share 0, read-only
2636+        # server 3: share 0, read-only
2637+        # This should place all of the shares, but fail with happy=7.
2638+        # Since the number of servers with shares is more than the number
2639+        # necessary to reconstitute the file, this will trigger a different
2640+        # error message than either of those above.
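+        # (Servers 1 - 3 all hold copies of share 0, so at most one of
+        # them can count toward happiness alongside server 0; the shares
+        # sit on 4 servers, but the effective happiness is only 2.)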
2641+        d.addCallback(_reset)
2642+        d.addCallback(lambda ign:
2643+            self._setup_and_upload())
2644+        d.addCallback(lambda ign:
2645+            self._add_server_with_share(server_number=1, share_number=0,
2646+                                        readonly=True))
2647+        d.addCallback(lambda ign:
2648+            self._add_server_with_share(server_number=2, share_number=0,
2649+                                        readonly=True))
2650+        d.addCallback(lambda ign:
2651+            self._add_server_with_share(server_number=3, share_number=0,
2652+                                        readonly=True))
2653+        d.addCallback(_reset_encoding_parameters, happy=7)
2654+        d.addCallback(lambda client:
2655+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2656+                            "shares could be placed or found on 4 server(s), "
2657+                            "but they are not spread out evenly enough to "
2658+                            "ensure that any 3 of these servers would have "
2659+                            "enough shares to recover the file. We were asked "
2660+                            "to place shares on at least 7 servers such that "
2661+                            "any 3 of them have enough shares to recover the "
2662+                            "file",
2663+                            client.upload, upload.Data("data" * 10000,
2664+                                                       convergence="")))
2665         return d
2666 
2667 
2668hunk ./src/allmydata/test/test_upload.py 1709
2669+    def test_problem_layout_comment_187(self):
2670+        # The layout in #778 comment 187 broke an initial attempt at a
2671+        # share redistribution algorithm. This test is here to demonstrate
2672+        # the breakage, and to test that subsequent algorithms don't also
2673+        # break in the same way.
2674+        self.basedir = self.mktemp()
2675+        d = self._setup_and_upload(k=2, n=3)
2676+
2677+        # server 1: shares 0, 1, 2, read-only
2678+        # server 2: share 0, read-only
2679+        # server 3: share 0
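+        # (happy = 3 is achievable here -- e.g. share 1 on server 1,
+        # share 0 on server 2, and a new copy of share 2 placed on the
+        # writable server 3 -- so a correct redistribution algorithm
+        # should let this upload succeed.)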
2680+        def _setup(ign):
2681+            self._add_server_with_share(server_number=1, share_number=0,
2682+                                        readonly=True)
2683+            self._add_server_with_share(server_number=2, share_number=0,
2684+                                        readonly=True)
2685+            self._add_server_with_share(server_number=3, share_number=0)
2686+            # Copy shares
2687+            self._copy_share_to_server(1, 1)
2688+            self._copy_share_to_server(2, 1)
2689+            # Remove server 0
2690+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
2691+            client = self.g.clients[0]
2692+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
2693+            return client
2694+
2695+        d.addCallback(_setup)
2696+        d.addCallback(lambda client:
2697+            client.upload(upload.Data("data" * 10000, convergence="")))
2698+        return d
2699+    test_problem_layout_comment_187.todo = "this isn't fixed yet"
2700+
2701+
2702     def _set_up_nodes_extra_config(self, clientdir):
2703         cfgfn = os.path.join(clientdir, "tahoe.cfg")
2704         oldcfg = open(cfgfn, "r").read()
2705}
2706
2707Context:
2708
2709[Dependency on Windmill test framework is not needed yet.
2710david-sarah@jacaranda.org**20100504161043
2711 Ignore-this: be088712bec650d4ef24766c0026ebc8
2712]
2713[tests: pass z to tar so that BSD tar will know to ungzip
2714zooko@zooko.com**20100504090628
2715 Ignore-this: 1339e493f255e8fc0b01b70478f23a09
2716]
2717[setup: update comments and URLs in setup.cfg
2718zooko@zooko.com**20100504061653
2719 Ignore-this: f97692807c74bcab56d33100c899f829
2720]
2721[setup: reorder and extend the show-tool-versions script, the better to glean information about our new buildslaves
2722zooko@zooko.com**20100504045643
2723 Ignore-this: 836084b56b8d4ee8f1de1f4efb706d36
2724]
2725[CLI: Support for https url in option --node-url
2726Francois Deppierraz <francois@ctrlaltdel.ch>**20100430185609
2727 Ignore-this: 1717176b4d27c877e6bc67a944d9bf34
2728 
2729 This patch modifies the regular expression used for verifying of '--node-url'
2730 parameter.  Support for accessing a Tahoe gateway over HTTPS was already
2731 present, thanks to Python's urllib.
2732 
2733]
2734[backupdb.did_create_directory: use REPLACE INTO, not INSERT INTO + ignore error
2735Brian Warner <warner@lothar.com>**20100428050803
2736 Ignore-this: 1fca7b8f364a21ae413be8767161e32f
2737 
2738 This handles the case where we upload a new tahoe directory for a
2739 previously-processed local directory, possibly creating a new dircap (if the
2740 metadata had changed). Now we replace the old dirhash->dircap record. The
2741 previous behavior left the old record in place (with the old dircap and
2742 timestamps), so we'd never stop creating new directories and never converge
2743 on a null backup.
2744]
2745["tahoe webopen": add --info flag, to get ?t=info
2746Brian Warner <warner@lothar.com>**20100424233003
2747 Ignore-this: 126b0bb6db340fabacb623d295eb45fa
2748 
2749 Also fix some trailing whitespace.
2750]
2751[docs: install.html http-equiv refresh to quickstart.html
2752zooko@zooko.com**20100421165708
2753 Ignore-this: 52b4b619f9dde5886ae2cd7f1f3b734b
2754]
2755[docs: install.html -> quickstart.html
2756zooko@zooko.com**20100421155757
2757 Ignore-this: 6084e203909306bed93efb09d0e6181d
2758 It is not called "installing" because that implies that it is going to change the configuration of your operating system. It is not called "building" because that implies that you need developer tools like a compiler. Also I added a stern warning against looking at the "InstallDetails" wiki page, which I have renamed to "AdvancedInstall".
2759]
2760[Fix another typo in tahoe_storagespace munin plugin
2761david-sarah@jacaranda.org**20100416220935
2762 Ignore-this: ad1f7aa66b554174f91dfb2b7a3ea5f3
2763]
2764[Add dependency on windmill >= 1.3
2765david-sarah@jacaranda.org**20100416190404
2766 Ignore-this: 4437a7a464e92d6c9012926b18676211
2767]
2768[licensing: phrase the OpenSSL-exemption in the vocabulary of copyright instead of computer technology, and replicate the exemption from the GPL to the TGPPL
2769zooko@zooko.com**20100414232521
2770 Ignore-this: a5494b2f582a295544c6cad3f245e91
2771]
2772[munin-tahoe_storagespace
2773freestorm77@gmail.com**20100221203626
2774 Ignore-this: 14d6d6a587afe1f8883152bf2e46b4aa
2775 
2776 Plugin configuration rename
2777 
2778]
2779[setup: add licensing declaration for setuptools (noticed by the FSF compliance folks)
2780zooko@zooko.com**20100309184415
2781 Ignore-this: 2dfa7d812d65fec7c72ddbf0de609ccb
2782]
2783[setup: fix error in licensing declaration from Shawn Willden, as noted by the FSF compliance division
2784zooko@zooko.com**20100309163736
2785 Ignore-this: c0623d27e469799d86cabf67921a13f8
2786]
2787[CREDITS to Jacob Appelbaum
2788zooko@zooko.com**20100304015616
2789 Ignore-this: 70db493abbc23968fcc8db93f386ea54
2790]
2791[desert-island-build-with-proper-versions
2792jacob@appelbaum.net**20100304013858]
2793[docs: a few small edits to try to guide newcomers through the docs
2794zooko@zooko.com**20100303231902
2795 Ignore-this: a6aab44f5bf5ad97ea73e6976bc4042d
2796 These edits were suggested by my watching over Jake Appelbaum's shoulder as he completely ignored/skipped/missed install.html and also as he decided that debian.txt wouldn't help him with basic installation. Then I threw in a few docs edits that have been sitting around in my sandbox asking to be committed for months.
2797]
2798[TAG allmydata-tahoe-1.6.1
2799david-sarah@jacaranda.org**20100228062314
2800 Ignore-this: eb5f03ada8ea953ee7780e7fe068539
2801]
2802Patch bundle hash:
28038c8377e1fbce72355a095d6c942eb17e6e04ae2f