Ticket #928: dontwait8.darcspatch.txt

File dontwait8.darcspatch.txt, 87.9 KB (added by davidsarah at 2010-01-30T06:58:14Z)

All patches for #928

1Wed Jan 27 23:34:17 GMT Standard Time 2010  zooko@zooko.com
2  * immutable: download from the first servers which provide at least K buckets instead of waiting for all servers to reply
3  This should put an end to the phenomenon I've been seeing in which a single hung server can cause all downloads on a grid to hang.  It should also speed up all downloads by (a) not waiting for responses to queries that it doesn't need, and (b) downloading shares from the servers that answered the initial query fastest.
4  Also, when deciding whether the download has enough shares, do not count how many buckets you've gotten -- instead count how many buckets for *unique* share numbers you've gotten.  This appears to fix a subtle misbehavior in the current download code in which receiving >= K buckets that all held the same share number would make it think it had enough to download the file when in fact it didn't.
5  This patch needs tests before it is actually ready for trunk.
6
7Fri Jan 29 12:38:45 GMT Standard Time 2010  david-sarah@jacaranda.org
8  * New tests for #928
9
10Fri Jan 29 18:42:37 GMT Standard Time 2010  zooko@zooko.com
11  * immutable: fix bug in tests, change line-endings to unix style, add comment
12
13Sat Jan 30 06:43:03 GMT Standard Time 2010  david-sarah@jacaranda.org
14  * Improvements to test_hung_server, and fix for status updates in download.py
15
16New patches:
17
18[immutable: download from the first servers which provide at least K buckets instead of waiting for all servers to reply
19zooko@zooko.com**20100127233417
20 Ignore-this: c855355a40d96827e1d0c469a8d8ab3f
21 This should put an end to the phenomenon I've been seeing in which a single hung server can cause all downloads on a grid to hang.  It should also speed up all downloads by (a) not waiting for responses to queries that it doesn't need, and (b) downloading shares from the servers that answered the initial query fastest.
22 Also, when deciding whether the download has enough shares, do not count how many buckets you've gotten -- instead count how many buckets for *unique* share numbers you've gotten.  This appears to fix a subtle misbehavior in the current download code in which receiving >= K buckets that all held the same share number would make it think it had enough to download the file when in fact it didn't.
23 This patch needs tests before it is actually ready for trunk.
24] {
25hunk ./src/allmydata/immutable/download.py 791
26         self._opened = False
27 
28         self.active_buckets = {} # k: shnum, v: bucket
29-        self._share_buckets = [] # list of (sharenum, bucket) tuples
30+        self._share_buckets = {} # k: sharenum, v: list of buckets
31         self._share_vbuckets = {} # k: shnum, v: set of ValidatedBuckets
32 
33         self._fetch_failures = {"uri_extension": 0, "crypttext_hash_tree": 0, }
34hunk ./src/allmydata/immutable/download.py 872
35         return d
36 
37     def _get_all_shareholders(self):
38-        dl = []
39+        """ Once the number of buckets that I know about is >= K then I
40+        callback the Deferred that I return.
41+
42+        If all of the get_buckets deferreds have fired (whether callback or
43+        errback) and I still don't have enough buckets then I'll callback the
44+        Deferred that I return.
45+        """
46+        self._wait_for_enough_buckets_d = defer.Deferred()
47+
48+        self._queries_sent = 0
49+        self._responses_received = 0
50+        self._queries_failed = 0
51         sb = self._storage_broker
52         servers = sb.get_servers_for_index(self._storage_index)
53         if not servers:
54hunk ./src/allmydata/immutable/download.py 892
55             self.log(format="sending DYHB to [%(peerid)s]",
56                      peerid=idlib.shortnodeid_b2a(peerid),
57                      level=log.NOISY, umid="rT03hg")
58+            self._queries_sent += 1
59             d = ss.callRemote("get_buckets", self._storage_index)
60             d.addCallbacks(self._got_response, self._got_error,
61                            callbackArgs=(peerid,))
62hunk ./src/allmydata/immutable/download.py 896
63-            dl.append(d)
64-        self._responses_received = 0
65-        self._queries_sent = len(dl)
66         if self._status:
67             self._status.set_status("Locating Shares (%d/%d)" %
68                                     (self._responses_received,
69hunk ./src/allmydata/immutable/download.py 900
70                                      self._queries_sent))
71-        return defer.DeferredList(dl)
72+        return self._wait_for_enough_buckets_d
73 
74     def _got_response(self, buckets, peerid):
75         self.log(format="got results from [%(peerid)s]: shnums %(shnums)s",
76hunk ./src/allmydata/immutable/download.py 918
77         for sharenum, bucket in buckets.iteritems():
78             b = layout.ReadBucketProxy(bucket, peerid, self._storage_index)
79             self.add_share_bucket(sharenum, b)
80+            # If we just got enough buckets for the first time, then fire the
81+            # deferred. Then remove it from self so that we don't fire it
82+            # again.
83+            if self._wait_for_enough_buckets_d and len(self._share_buckets) >= self._verifycap.needed_shares:
84+                self._wait_for_enough_buckets_d.callback(True)
85+                self._wait_for_enough_buckets_d = None
86+
87+            # Else, if we ran out of outstanding requests then fire it and
88+            # remove it from self.
89+            assert (self._responses_received+self._queries_failed) <= self._queries_sent
90+            if self._wait_for_enough_buckets_d and (self._responses_received+self._queries_failed) == self._queries_sent:
91+                self._wait_for_enough_buckets_d.callback(False)
92+                self._wait_for_enough_buckets_d = None
93 
94             if self._results:
95                 if peerid not in self._results.servermap:
96hunk ./src/allmydata/immutable/download.py 939
97 
98     def add_share_bucket(self, sharenum, bucket):
99         # this is split out for the benefit of test_encode.py
100-        self._share_buckets.append( (sharenum, bucket) )
101+        self._share_buckets.setdefault(sharenum, []).append(bucket)
102 
103     def _got_error(self, f):
104         level = log.WEIRD
105hunk ./src/allmydata/immutable/download.py 947
106             level = log.UNUSUAL
107         self.log("Error during get_buckets", failure=f, level=level,
108                          umid="3uuBUQ")
109+        # If we ran out of outstanding requests then errback it and remove it
110+        # from self.
111+        self._queries_failed += 1
112+        assert (self._responses_received+self._queries_failed) <= self._queries_sent
113+        if self._wait_for_enough_buckets_d and self._responses_received == self._queries_sent:
114+            self._wait_for_enough_buckets_d.errback()
115+            self._wait_for_enough_buckets_d = None
116 
117     def bucket_failed(self, vbucket):
118         shnum = vbucket.sharenum
119hunk ./src/allmydata/immutable/download.py 996
120         uri_extension_fetch_started = time.time()
121 
122         vups = []
123-        for sharenum, bucket in self._share_buckets:
124-            vups.append(ValidatedExtendedURIProxy(bucket, self._verifycap, self._fetch_failures))
125+        for sharenum, buckets in self._share_buckets.iteritems():
126+            for bucket in buckets:
127+                vups.append(ValidatedExtendedURIProxy(bucket, self._verifycap, self._fetch_failures))
128         vto = ValidatedThingObtainer(vups, debugname="vups", log_id=self._parentmsgid)
129         d = vto.start()
130 
131hunk ./src/allmydata/immutable/download.py 1034
132 
133     def _get_crypttext_hash_tree(self, res):
134         vchtps = []
135-        for sharenum, bucket in self._share_buckets:
136-            vchtp = ValidatedCrypttextHashTreeProxy(bucket, self._crypttext_hash_tree, self._vup.num_segments, self._fetch_failures)
137-            vchtps.append(vchtp)
138+        for sharenum, buckets in self._share_buckets.iteritems():
139+            for bucket in buckets:
140+                vchtp = ValidatedCrypttextHashTreeProxy(bucket, self._crypttext_hash_tree, self._vup.num_segments, self._fetch_failures)
141+                vchtps.append(vchtp)
142 
143         _get_crypttext_hash_tree_started = time.time()
144         if self._status:
145hunk ./src/allmydata/immutable/download.py 1088
146 
147 
148     def _download_all_segments(self, res):
149-        for sharenum, bucket in self._share_buckets:
150-            vbucket = ValidatedReadBucketProxy(sharenum, bucket, self._share_hash_tree, self._vup.num_segments, self._vup.block_size, self._vup.share_size)
151-            self._share_vbuckets.setdefault(sharenum, set()).add(vbucket)
152+        for sharenum, buckets in self._share_buckets.iteritems():
153+            for bucket in buckets:
154+                vbucket = ValidatedReadBucketProxy(sharenum, bucket, self._share_hash_tree, self._vup.num_segments, self._vup.block_size, self._vup.share_size)
155+                self._share_vbuckets.setdefault(sharenum, set()).add(vbucket)
156 
157         # after the above code, self._share_vbuckets contains enough
158         # buckets to complete the download, and some extra ones to
159}
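
Note that the patch above fires the "enough buckets" Deferred synchronously from inside _got_response/_got_error; the tests patch that follows moves the "have all queries been answered?" check into one helper attached with addBoth and fires the Deferred with reactor.callLater(0, ...), so the rest of the download pipeline does not run re-entrantly inside a get_buckets callback.  A rough sketch of that shape (send_queries, locator, and check_all_answered are illustrative names; only get_buckets is the real remote method used above):

    from twisted.internet import reactor

    def send_queries(servers, storage_index, locator):
        """Fan out one get_buckets query per server; 'locator' is assumed to
        track responses and decide when enough shares have been found."""
        for (peerid, ss) in servers:
            d = ss.callRemote("get_buckets", storage_index)
            d.addCallbacks(locator.got_response, locator.got_error)
            # addBoth runs on success *and* failure, so the completion
            # bookkeeping lives in exactly one place.
            d.addBoth(locator.check_all_answered)

    def fire_on_next_turn(d, result):
        # Fire the waiting Deferred on the next reactor turn instead of
        # synchronously from inside another callback chain.
        reactor.callLater(0, d.callback, result)
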
160[New tests for #928
161david-sarah@jacaranda.org**20100129123845
162 Ignore-this: 5c520f40141f0d9c000ffb05a4698995
163] {
164hunk ./src/allmydata/immutable/download.py 3
165 import random, weakref, itertools, time
166 from zope.interface import implements
167-from twisted.internet import defer
168+from twisted.internet import defer, reactor
169 from twisted.internet.interfaces import IPushProducer, IConsumer
170 from foolscap.api import DeadReferenceError, RemoteException, eventually
171 
172hunk ./src/allmydata/immutable/download.py 838
173 
174         # first step: who should we download from?
175         d = defer.maybeDeferred(self._get_all_shareholders)
176-        d.addCallback(self._got_all_shareholders)
177+        d.addBoth(self._got_all_shareholders)
178         # now get the uri_extension block from somebody and integrity check
179         # it and parse and validate its contents
180         d.addCallback(self._obtain_uri_extension)
181hunk ./src/allmydata/immutable/download.py 875
182         """ Once the number of buckets that I know about is >= K then I
183         callback the Deferred that I return.
184 
185-        If all of the get_buckets deferreds have fired (whether callback or
186-        errback) and I still don't have enough buckets then I'll callback the
187-        Deferred that I return.
188+        If all of the get_buckets deferreds have fired (whether callback
189+        or errback) and I still don't have enough buckets then I'll also
190+        callback -- not errback -- the Deferred that I return.
191         """
192hunk ./src/allmydata/immutable/download.py 879
193-        self._wait_for_enough_buckets_d = defer.Deferred()
194+        wait_for_enough_buckets_d = defer.Deferred()
195+        self._wait_for_enough_buckets_d = wait_for_enough_buckets_d
196 
197hunk ./src/allmydata/immutable/download.py 882
198-        self._queries_sent = 0
199-        self._responses_received = 0
200-        self._queries_failed = 0
201         sb = self._storage_broker
202         servers = sb.get_servers_for_index(self._storage_index)
203         if not servers:
204hunk ./src/allmydata/immutable/download.py 886
205             raise NoServersError("broker gave us no servers!")
206+
207+        self._total_queries = len(servers)
208+        self._responses_received = 0
209+        self._queries_failed = 0
210         for (peerid,ss) in servers:
211             self.log(format="sending DYHB to [%(peerid)s]",
212                      peerid=idlib.shortnodeid_b2a(peerid),
213hunk ./src/allmydata/immutable/download.py 894
214                      level=log.NOISY, umid="rT03hg")
215-            self._queries_sent += 1
216             d = ss.callRemote("get_buckets", self._storage_index)
217             d.addCallbacks(self._got_response, self._got_error,
218                            callbackArgs=(peerid,))
219hunk ./src/allmydata/immutable/download.py 897
220+            d.addBoth(self._check_got_all_responses)
221+
222         if self._status:
223             self._status.set_status("Locating Shares (%d/%d)" %
224hunk ./src/allmydata/immutable/download.py 901
225-                                    (self._responses_received,
226-                                     self._queries_sent))
227-        return self._wait_for_enough_buckets_d
228+                                    (len(self._share_buckets),
229+                                     self._verifycap.needed_shares))
230+        return wait_for_enough_buckets_d
231+
232+    def _check_got_all_responses(self, ignored=None):
233+        assert (self._responses_received+self._queries_failed) <= self._total_queries
234+        if self._wait_for_enough_buckets_d and (self._responses_received+self._queries_failed) == self._total_queries:
235+            reactor.callLater(0, self._wait_for_enough_buckets_d.callback, False)
236+            self._wait_for_enough_buckets_d = None
237 
238     def _got_response(self, buckets, peerid):
239hunk ./src/allmydata/immutable/download.py 912
240+        self._responses_received += 1
241         self.log(format="got results from [%(peerid)s]: shnums %(shnums)s",
242                  peerid=idlib.shortnodeid_b2a(peerid),
243                  shnums=sorted(buckets.keys()),
244hunk ./src/allmydata/immutable/download.py 917
245                  level=log.NOISY, umid="o4uwFg")
246-        self._responses_received += 1
247         if self._results:
248             elapsed = time.time() - self._started
249             self._results.timings["servers_peer_selection"][peerid] = elapsed
250hunk ./src/allmydata/immutable/download.py 923
251         if self._status:
252             self._status.set_status("Locating Shares (%d/%d)" %
253                                     (self._responses_received,
254-                                     self._queries_sent))
255+                                     self._total_queries))
256         for sharenum, bucket in buckets.iteritems():
257             b = layout.ReadBucketProxy(bucket, peerid, self._storage_index)
258             self.add_share_bucket(sharenum, b)
259hunk ./src/allmydata/immutable/download.py 931
260             # deferred. Then remove it from self so that we don't fire it
261             # again.
262             if self._wait_for_enough_buckets_d and len(self._share_buckets) >= self._verifycap.needed_shares:
263-                self._wait_for_enough_buckets_d.callback(True)
264-                self._wait_for_enough_buckets_d = None
265-
266-            # Else, if we ran out of outstanding requests then fire it and
267-            # remove it from self.
268-            assert (self._responses_received+self._queries_failed) <= self._queries_sent
269-            if self._wait_for_enough_buckets_d and (self._responses_received+self._queries_failed) == self._queries_sent:
270-                self._wait_for_enough_buckets_d.callback(False)
271+                reactor.callLater(0, self._wait_for_enough_buckets_d.callback, True)
272                 self._wait_for_enough_buckets_d = None
273 
274             if self._results:
275hunk ./src/allmydata/immutable/download.py 944
276         self._share_buckets.setdefault(sharenum, []).append(bucket)
277 
278     def _got_error(self, f):
279+        self._queries_failed += 1
280         level = log.WEIRD
281         if f.check(DeadReferenceError):
282             level = log.UNUSUAL
283hunk ./src/allmydata/immutable/download.py 950
284         self.log("Error during get_buckets", failure=f, level=level,
285                          umid="3uuBUQ")
286-        # If we ran out of outstanding requests then errback it and remove it
287-        # from self.
288-        self._queries_failed += 1
289-        assert (self._responses_received+self._queries_failed) <= self._queries_sent
290-        if self._wait_for_enough_buckets_d and self._responses_received == self._queries_sent:
291-            self._wait_for_enough_buckets_d.errback()
292-            self._wait_for_enough_buckets_d = None
293 
294     def bucket_failed(self, vbucket):
295         shnum = vbucket.sharenum
296hunk ./src/allmydata/test/no_network.py 19
297 import os.path
298 from zope.interface import implements
299 from twisted.application import service
300-from twisted.internet import reactor
301+from twisted.internet import defer, reactor
302 from twisted.python.failure import Failure
303 from foolscap.api import Referenceable, fireEventually, RemoteException
304 from base64 import b32encode
305hunk ./src/allmydata/test/no_network.py 41
306     def __init__(self, original):
307         self.original = original
308         self.broken = False
309+        self.hung_until = None
310         self.post_call_notifier = None
311         self.disconnectors = {}
312 
313hunk ./src/allmydata/test/no_network.py 61
314                 return a
315         args = tuple([wrap(a) for a in args])
316         kwargs = dict([(k,wrap(kwargs[k])) for k in kwargs])
317+
318+        def _really_call():
319+            meth = getattr(self.original, "remote_" + methname)
320+            return meth(*args, **kwargs)
321+
322         def _call():
323             if self.broken:
324                 raise IntentionalError("I was asked to break")
325hunk ./src/allmydata/test/no_network.py 69
326-            meth = getattr(self.original, "remote_" + methname)
327-            return meth(*args, **kwargs)
328+            if self.hung_until:
329+                d2 = defer.Deferred()
330+                self.hung_until.addCallback(lambda ign: _really_call())
331+                self.hung_until.addCallback(lambda res: d2.callback(res))
332+                def _err(res):
333+                    d2.errback(res)
334+                    return res
335+                self.hung_until.addErrback(_err)
336+                return d2
337+            return _really_call()
338+
339         d = fireEventually()
340         d.addCallback(lambda res: _call())
341         def _wrap_exception(f):
342hunk ./src/allmydata/test/no_network.py 258
343         # asked to hold a share
344         self.servers_by_id[serverid].broken = True
345 
346+    def hang_server(self, serverid, until=defer.Deferred()):
347+        # hang the given server until 'until' fires
348+        self.servers_by_id[serverid].hung_until = until
349+
350+
351 class GridTestMixin:
352     def setUp(self):
353         self.s = service.MultiService()
354addfile ./src/allmydata/test/test_hung_server.py
355hunk ./src/allmydata/test/test_hung_server.py 1
356+
357+import os, shutil
358+from twisted.trial import unittest
359+from twisted.internet import defer, reactor
360+from twisted.python import failure
361+from allmydata import uri
362+from allmydata.util.consumer import download_to_data
363+from allmydata.immutable import upload
364+from allmydata.storage.common import storage_index_to_dir
365+from allmydata.test.no_network import GridTestMixin
366+from allmydata.test.common import ShouldFailMixin
367+from allmydata.interfaces import NotEnoughSharesError
368+
369+immutable_plaintext = "data" * 10000
370+mutable_plaintext = "muta" * 10000
371+
372+class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, unittest.TestCase):
373+    timeout = 30
374+
375+    def _break(self, servers):
376+        for (id, ss) in servers:
377+            self.g.break_server(id)
378+
379+    def _hang(self, servers, **kwargs):
380+        for (id, ss) in servers:
381+            self.g.hang_server(id, **kwargs)
382+
383+    def _delete_all_shares_from(self, servers):
384+        serverids = [id for (id, ss) in servers]
385+        for (i_shnum, i_serverid, i_sharefile) in self.shares:
386+            if i_serverid in serverids:
387+                os.unlink(i_sharefile)
388+
389+    # untested
390+    def _pick_a_share_from(self, server):
391+        (id, ss) = server
392+        for (i_shnum, i_serverid, i_sharefile) in self.shares:
393+            if i_serverid == id:
394+                return (i_shnum, i_sharefile)
395+        raise AssertionError("server %r had no shares" % server)
396+
397+    # untested
398+    def _copy_all_shares_from(self, from_servers, to_server):
399+        serverids = [id for (id, ss) in from_servers]
400+        for (i_shnum, i_serverid, i_sharefile) in self.shares:
401+            if i_serverid in serverids:
402+                self._copy_share((i_shnum, i_sharefile), to_server)
403+
404+    # untested
405+    def _copy_share(self, share, to_server):
406+         (sharenum, sharefile) = share
407+         (id, ss) = to_server
408+         # FIXME: this doesn't work because we only have a LocalWrapper
409+         shares_dir = os.path.join(ss.storedir, "shares")
410+         si = uri.from_string(self.uri).get_storage_index()
411+         si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
412+         if not os.path.exists(si_dir):
413+             os.makedirs(si_dir)
414+         new_sharefile = os.path.join(si_dir, str(sharenum))
415+         shutil.copy(sharefile, new_sharefile)
416+         self.shares = self.find_shares(self.uri)
417+         # Make sure that the storage server has the share.
418+         self.failUnless((sharenum, ss.my_nodeid, new_sharefile)
419+                         in self.shares)
420+
421+    # untested
422+    def _add_server(self, server_number, readonly=False):
423+        ss = self.g.make_server(server_number, readonly)
424+        self.g.add_server(server_number, ss)
425+        self.shares = self.find_shares(self.uri)
426+
427+    def _set_up(self, testdir, num_clients=1, num_servers=10):
428+        self.basedir = "download/" + testdir
429+        self.set_up_grid(num_clients=num_clients, num_servers=num_servers)
430+
431+        self.c0 = self.g.clients[0]
432+        sb = self.c0.nodemaker.storage_broker
433+        self.servers = [(id, ss) for (id, ss) in sb.get_all_servers()]
434+
435+        data = upload.Data(immutable_plaintext, convergence="")
436+        d = self.c0.upload(data)
437+        def _uploaded(ur):
438+            self.uri = ur.uri
439+            self.shares = self.find_shares(self.uri)
440+        d.addCallback(_uploaded)
441+        return d
442+
443+    def test_10_good_sanity_check(self):
444+        d = self._set_up("test_10_good_sanity_check")
445+        d.addCallback(lambda ign: self.download_immutable())
446+        return d
447+
448+    def test_3_good_7_hung(self):
449+        d = self._set_up("test_3_good_7_hung")
450+        d.addCallback(lambda ign: self._hang(self.servers[3:]))
451+        d.addCallback(lambda ign: self.download_immutable())
452+        return d
453+
454+    def test_3_good_7_noshares(self):
455+        d = self._set_up("test_3_good_7_noshares")
456+        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[3:]))
457+        d.addCallback(lambda ign: self.download_immutable())
458+        return d
459+
460+    def test_2_good_8_broken_fail(self):
461+        d = self._set_up("test_2_good_8_broken_fail")
462+        d.addCallback(lambda ign: self._break(self.servers[2:]))
463+        d.addCallback(lambda ign:
464+                      self.shouldFail(NotEnoughSharesError, "test_2_good_8_broken_fail",
465+                                      "Failed to get enough shareholders: have 2, need 3",
466+                                      self.download_immutable))
467+        return d
468+
469+    def test_2_good_8_noshares_fail(self):
470+        d = self._set_up("test_2_good_8_noshares_fail")
471+        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[2:]))
472+        d.addCallback(lambda ign:
473+                      self.shouldFail(NotEnoughSharesError, "test_2_good_8_noshares_fail",
474+                                      "Failed to get enough shareholders: have 2, need 3",
475+                                      self.download_immutable))
476+        return d
477+
478+    def test_2_good_8_hung_then_1_recovers(self):
479+        recovered = defer.Deferred()
480+        d = self._set_up("test_2_good_8_hung_then_1_recovers")
481+        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
482+        d.addCallback(lambda ign: self._hang(self.servers[3:]))
483+        d.addCallback(lambda ign: self.download_immutable())
484+        reactor.callLater(5, recovered.callback, None)
485+        return d
486+
487+    def test_2_good_8_hung_then_1_recovers_with_2_shares(self):
488+        recovered = defer.Deferred()
489+        d = self._set_up("test_2_good_8_hung_then_1_recovers_with_2_shares")
490+        d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[0:1], self.servers[2]))
491+        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
492+        d.addCallback(lambda ign: self._hang(self.servers[3:]))
493+        d.addCallback(lambda ign: self.download_immutable())
494+        reactor.callLater(5, recovered.callback, None)
495+        return d
496+
497+    def download_immutable(self):
498+        n = self.c0.create_node_from_uri(self.uri)
499+        d = download_to_data(n)
500+        def _got_data(data):
501+            self.failUnlessEqual(data, immutable_plaintext)
502+        d.addCallback(_got_data)
503+        return d
504+
505+    # unused
506+    def download_mutable(self):
507+        n = self.c0.create_node_from_uri(self.uri)
508+        d = n.download_best_version()
509+        def _got_data(data):
510+            self.failUnlessEqual(data, mutable_plaintext)
511+        d.addCallback(_got_data)
512+        return d
513}
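
The tests above simulate a hung storage server by making the no_network wrapper delay the real remote call until an 'until' Deferred fires (grid.hang_server(serverid, until=...)).  A minimal standalone sketch of that idea, assuming Twisted Deferreds (HungCallable is a hypothetical name; the real logic is the hung_until handling added to LocalWrapper above):

    from twisted.internet import defer

    class HungCallable:
        """Wrap a callable; while hung_until is set, delay every call until
        that Deferred fires, then relay the real result to the caller."""

        def __init__(self, func):
            self.func = func
            self.hung_until = None   # a Deferred, or None when not hung

        def __call__(self, *args, **kwargs):
            if self.hung_until is not None:
                d = defer.Deferred()
                def _release(ign):
                    # Make the real call now that the hang is over, and chain
                    # its result (or failure) to the caller's Deferred.
                    defer.maybeDeferred(self.func, *args, **kwargs).chainDeferred(d)
                    return ign   # keep hung_until's chain usable by other callers
                self.hung_until.addCallback(_release)
                return d
            return defer.maybeDeferred(self.func, *args, **kwargs)

In the tests, firing the 'until' Deferred (for example via reactor.callLater(5, recovered.callback, None)) releases every call that was queued while the server was hung.
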
514[immutable: fix bug in tests, change line-endings to unix style, add comment
515zooko@zooko.com**20100129184237
516 Ignore-this: f6bd875fe974c55c881e05eddf8d3436
517] {
518hunk ./src/allmydata/immutable/download.py 807
519         # self._current_segnum = 0
520         # self._vup # ValidatedExtendedURIProxy
521 
522+        # _get_all_shareholders() will create the following:
523+        # self._total_queries
524+        # self._responses_received = 0
525+        # self._queries_failed = 0
526+
527     def pauseProducing(self):
528         if self._paused:
529             return
530hunk ./src/allmydata/test/test_hung_server.py 48
531         for (i_shnum, i_serverid, i_sharefile) in self.shares:
532             if i_serverid in serverids:
533                 self._copy_share((i_shnum, i_sharefile), to_server)
534-
535+
536     # untested
537hunk ./src/allmydata/test/test_hung_server.py 50
538-    def _copy_share(self, share, to_server):
539-         (sharenum, sharefile) = share
540-         (id, ss) = to_server
541-         # FIXME: this doesn't work because we only have a LocalWrapper
542-         shares_dir = os.path.join(ss.storedir, "shares")
543-         si = uri.from_string(self.uri).get_storage_index()
544-         si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
545-         if not os.path.exists(si_dir):
546-             os.makedirs(si_dir)
547-         new_sharefile = os.path.join(si_dir, str(sharenum))
548-         shutil.copy(sharefile, new_sharefile)
549-         self.shares = self.find_shares(self.uri)
550-         # Make sure that the storage server has the share.
551-         self.failUnless((sharenum, ss.my_nodeid, new_sharefile)
552-                         in self.shares)
553-
554+    def _copy_share(self, share, to_server):
555+         (sharenum, sharefile) = share
556+         (id, ss) = to_server
557+         # FIXME: this doesn't work because we only have a LocalWrapper
558+         shares_dir = os.path.join(ss.original.storedir, "shares")
559+         si = uri.from_string(self.uri).get_storage_index()
560+         si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
561+         if not os.path.exists(si_dir):
562+             os.makedirs(si_dir)
563+         new_sharefile = os.path.join(si_dir, str(sharenum))
564+         shutil.copy(sharefile, new_sharefile)
565+         self.shares = self.find_shares(self.uri)
566+         # Make sure that the storage server has the share.
567+         self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
568+                         in self.shares)
569+
570     # untested
571hunk ./src/allmydata/test/test_hung_server.py 67
572-    def _add_server(self, server_number, readonly=False):
573-        ss = self.g.make_server(server_number, readonly)
574-        self.g.add_server(server_number, ss)
575-        self.shares = self.find_shares(self.uri)
576+    def _add_server(self, server_number, readonly=False):
577+        ss = self.g.make_server(server_number, readonly)
578+        self.g.add_server(server_number, ss)
579+        self.shares = self.find_shares(self.uri)
580 
581     def _set_up(self, testdir, num_clients=1, num_servers=10):
582         self.basedir = "download/" + testdir
583hunk ./src/allmydata/test/test_hung_server.py 75
584         self.set_up_grid(num_clients=num_clients, num_servers=num_servers)
585-
586-        self.c0 = self.g.clients[0]
587+
588+        self.c0 = self.g.clients[0]
589         sb = self.c0.nodemaker.storage_broker
590         self.servers = [(id, ss) for (id, ss) in sb.get_all_servers()]
591hunk ./src/allmydata/test/test_hung_server.py 79
592-
593-        data = upload.Data(immutable_plaintext, convergence="")
594-        d = self.c0.upload(data)
595-        def _uploaded(ur):
596-            self.uri = ur.uri
597-            self.shares = self.find_shares(self.uri)
598-        d.addCallback(_uploaded)
599-        return d
600+
601+        data = upload.Data(immutable_plaintext, convergence="")
602+        d = self.c0.upload(data)
603+        def _uploaded(ur):
604+            self.uri = ur.uri
605+            self.shares = self.find_shares(self.uri)
606+        d.addCallback(_uploaded)
607+        return d
608 
609     def test_10_good_sanity_check(self):
610         d = self._set_up("test_10_good_sanity_check")
611hunk ./src/allmydata/test/test_hung_server.py 95
612 
613     def test_3_good_7_hung(self):
614         d = self._set_up("test_3_good_7_hung")
615-        d.addCallback(lambda ign: self._hang(self.servers[3:]))
616+        d.addCallback(lambda ign: self._hang(self.servers[3:]))
617         d.addCallback(lambda ign: self.download_immutable())
618         return d
619 
620hunk ./src/allmydata/test/test_hung_server.py 101
621     def test_3_good_7_noshares(self):
622         d = self._set_up("test_3_good_7_noshares")
623-        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[3:]))
624+        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[3:]))
625         d.addCallback(lambda ign: self.download_immutable())
626         return d
627 
628hunk ./src/allmydata/test/test_hung_server.py 107
629     def test_2_good_8_broken_fail(self):
630         d = self._set_up("test_2_good_8_broken_fail")
631-        d.addCallback(lambda ign: self._break(self.servers[2:]))
632+        d.addCallback(lambda ign: self._break(self.servers[2:]))
633         d.addCallback(lambda ign:
634                       self.shouldFail(NotEnoughSharesError, "test_2_good_8_broken_fail",
635                                       "Failed to get enough shareholders: have 2, need 3",
636hunk ./src/allmydata/test/test_hung_server.py 116
637 
638     def test_2_good_8_noshares_fail(self):
639         d = self._set_up("test_2_good_8_noshares_fail")
640-        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[2:]))
641+        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[2:]))
642         d.addCallback(lambda ign:
643                       self.shouldFail(NotEnoughSharesError, "test_2_good_8_noshares_fail",
644                                       "Failed to get enough shareholders: have 2, need 3",
645hunk ./src/allmydata/test/test_hung_server.py 126
646     def test_2_good_8_hung_then_1_recovers(self):
647         recovered = defer.Deferred()
648         d = self._set_up("test_2_good_8_hung_then_1_recovers")
649-        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
650-        d.addCallback(lambda ign: self._hang(self.servers[3:]))
651+        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
652+        d.addCallback(lambda ign: self._hang(self.servers[3:]))
653         d.addCallback(lambda ign: self.download_immutable())
654         reactor.callLater(5, recovered.callback, None)
655         return d
656hunk ./src/allmydata/test/test_hung_server.py 135
657     def test_2_good_8_hung_then_1_recovers_with_2_shares(self):
658         recovered = defer.Deferred()
659         d = self._set_up("test_2_good_8_hung_then_1_recovers_with_2_shares")
660-        d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[0:1], self.servers[2]))
661-        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
662-        d.addCallback(lambda ign: self._hang(self.servers[3:]))
663+        d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[0:1], self.servers[2]))
664+        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
665+        d.addCallback(lambda ign: self._hang(self.servers[3:]))
666         d.addCallback(lambda ign: self.download_immutable())
667         reactor.callLater(5, recovered.callback, None)
668         return d
669}
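
The test fix in the patch above is the switch from ss.storedir to ss.original.storedir: the (id, ss) pairs in these tests hold no_network wrapper objects, so filesystem details such as storedir and my_nodeid live on the wrapped server.  A small illustrative helper showing that access pattern (share_path is a hypothetical name; the directory layout follows the test code above):

    import os
    from allmydata.storage.common import storage_index_to_dir

    def share_path(wrapped_server, storage_index, sharenum):
        # The wrapper proxies remote_* calls; filesystem attributes live on
        # the real server object underneath it.
        server = wrapped_server.original
        return os.path.join(server.storedir, "shares",
                            storage_index_to_dir(storage_index), str(sharenum))
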
670[Improvements to test_hung_server, and fix for status updates in download.py
671david-sarah@jacaranda.org**20100130064303
672 Ignore-this: dd889c643afdcf0f86d55855aafda6ad
673] {
674hunk ./src/allmydata/immutable/download.py 906
675 
676         if self._status:
677             self._status.set_status("Locating Shares (%d/%d)" %
678-                                    (len(self._share_buckets),
679-                                     self._verifycap.needed_shares))
680+                                    (self._responses_received,
681+                                     self._total_queries))
682         return wait_for_enough_buckets_d
683 
684     def _check_got_all_responses(self, ignored=None):
685hunk ./src/allmydata/immutable/download.py 917
686             self._wait_for_enough_buckets_d = None
687 
688     def _got_response(self, buckets, peerid):
689+        # Note that this can continue to receive responses after _wait_for_enough_buckets_d
690+        # has fired.
691         self._responses_received += 1
692         self.log(format="got results from [%(peerid)s]: shnums %(shnums)s",
693                  peerid=idlib.shortnodeid_b2a(peerid),
694hunk ./src/allmydata/test/test_hung_server.py 5
695 import os, shutil
696 from twisted.trial import unittest
697 from twisted.internet import defer, reactor
698-from twisted.python import failure
699 from allmydata import uri
700 from allmydata.util.consumer import download_to_data
701 from allmydata.immutable import upload
702hunk ./src/allmydata/test/test_hung_server.py 8
703+from allmydata.mutable.common import UnrecoverableFileError
704 from allmydata.storage.common import storage_index_to_dir
705 from allmydata.test.no_network import GridTestMixin
706 from allmydata.test.common import ShouldFailMixin
707hunk ./src/allmydata/test/test_hung_server.py 34
708             if i_serverid in serverids:
709                 os.unlink(i_sharefile)
710 
711-    # untested
712-    def _pick_a_share_from(self, server):
713-        (id, ss) = server
714-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
715-            if i_serverid == id:
716-                return (i_shnum, i_sharefile)
717-        raise AssertionError("server %r had no shares" % server)
718-
719-    # untested
720     def _copy_all_shares_from(self, from_servers, to_server):
721         serverids = [id for (id, ss) in from_servers]
722         for (i_shnum, i_serverid, i_sharefile) in self.shares:
723hunk ./src/allmydata/test/test_hung_server.py 40
724             if i_serverid in serverids:
725                 self._copy_share((i_shnum, i_sharefile), to_server)
726 
727-    # untested
728     def _copy_share(self, share, to_server):
729          (sharenum, sharefile) = share
730          (id, ss) = to_server
731hunk ./src/allmydata/test/test_hung_server.py 43
732-         # FIXME: this doesn't work because we only have a LocalWrapper
733          shares_dir = os.path.join(ss.original.storedir, "shares")
734          si = uri.from_string(self.uri).get_storage_index()
735          si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
736hunk ./src/allmydata/test/test_hung_server.py 55
737          self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
738                          in self.shares)
739 
740-    # untested
741-    def _add_server(self, server_number, readonly=False):
742-        ss = self.g.make_server(server_number, readonly)
743-        self.g.add_server(server_number, ss)
744-        self.shares = self.find_shares(self.uri)
745+    def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
746+        self.mutable = mutable
747+        if mutable:
748+            self.basedir = "hung_server/mutable_" + testdir
749+        else:
750+            self.basedir = "hung_server/immutable_" + testdir
751 
752hunk ./src/allmydata/test/test_hung_server.py 62
753-    def _set_up(self, testdir, num_clients=1, num_servers=10):
754-        self.basedir = "download/" + testdir
755         self.set_up_grid(num_clients=num_clients, num_servers=num_servers)
756 
757         self.c0 = self.g.clients[0]
758hunk ./src/allmydata/test/test_hung_server.py 65
759-        sb = self.c0.nodemaker.storage_broker
760-        self.servers = [(id, ss) for (id, ss) in sb.get_all_servers()]
761+        nm = self.c0.nodemaker
762+        self.servers = [(id, ss) for (id, ss) in nm.storage_broker.get_all_servers()]
763 
764hunk ./src/allmydata/test/test_hung_server.py 68
765-        data = upload.Data(immutable_plaintext, convergence="")
766-        d = self.c0.upload(data)
767-        def _uploaded(ur):
768-            self.uri = ur.uri
769-            self.shares = self.find_shares(self.uri)
770-        d.addCallback(_uploaded)
771+        if mutable:
772+            d = nm.create_mutable_file(mutable_plaintext)
773+            def _uploaded_mutable(node):
774+                self.uri = node.get_uri()
775+                self.shares = self.find_shares(self.uri)
776+            d.addCallback(_uploaded_mutable)
777+        else:
778+            data = upload.Data(immutable_plaintext, convergence="")
779+            d = self.c0.upload(data)
780+            def _uploaded_immutable(upload_res):
781+                self.uri = upload_res.uri
782+                self.shares = self.find_shares(self.uri)
783+            d.addCallback(_uploaded_immutable)
784         return d
785 
786hunk ./src/allmydata/test/test_hung_server.py 83
787-    def test_10_good_sanity_check(self):
788-        d = self._set_up("test_10_good_sanity_check")
789-        d.addCallback(lambda ign: self.download_immutable())
790+    def _check_download(self):
791+        n = self.c0.create_node_from_uri(self.uri)
792+        if self.mutable:
793+            d = n.download_best_version()
794+            expected_plaintext = mutable_plaintext
795+        else:
796+            d = download_to_data(n)
797+            expected_plaintext = immutable_plaintext
798+        def _got_data(data):
799+            self.failUnlessEqual(data, expected_plaintext)
800+        d.addCallback(_got_data)
801         return d
802 
803hunk ./src/allmydata/test/test_hung_server.py 96
804-    def test_3_good_7_hung(self):
805-        d = self._set_up("test_3_good_7_hung")
806-        d.addCallback(lambda ign: self._hang(self.servers[3:]))
807-        d.addCallback(lambda ign: self.download_immutable())
808+    def _should_fail_download(self):
809+        if self.mutable:
810+            return self.shouldFail(UnrecoverableFileError, self.basedir,
811+                                   "no recoverable versions",
812+                                   self._check_download)
813+        else:
814+            return self.shouldFail(NotEnoughSharesError, self.basedir,
815+                                   "Failed to get enough shareholders",
816+                                   self._check_download)
817+
818+
819+    def test_10_good_sanity_check(self):
820+        d = defer.succeed(None)
821+        for mutable in [False, True]:
822+            d.addCallback(lambda ign: self._set_up(mutable, "test_10_good_sanity_check"))
823+            d.addCallback(lambda ign: self._check_download())
824         return d
825 
826hunk ./src/allmydata/test/test_hung_server.py 114
827+    def test_10_good_copied_share(self):
828+        d = defer.succeed(None)
829+        for mutable in [False, True]:
830+            d.addCallback(lambda ign: self._set_up(mutable, "test_10_good_copied_share"))
831+            d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[2:3], self.servers[0]))
832+            d.addCallback(lambda ign: self._check_download())
833+            return d
834+
835     def test_3_good_7_noshares(self):
836hunk ./src/allmydata/test/test_hung_server.py 123
837-        d = self._set_up("test_3_good_7_noshares")
838-        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[3:]))
839-        d.addCallback(lambda ign: self.download_immutable())
840+        d = defer.succeed(None)
841+        for mutable in [False, True]:
842+            d.addCallback(lambda ign: self._set_up(mutable, "test_3_good_7_noshares"))
843+            d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[3:]))
844+            d.addCallback(lambda ign: self._check_download())
845         return d
846 
847     def test_2_good_8_broken_fail(self):
848hunk ./src/allmydata/test/test_hung_server.py 131
849-        d = self._set_up("test_2_good_8_broken_fail")
850-        d.addCallback(lambda ign: self._break(self.servers[2:]))
851-        d.addCallback(lambda ign:
852-                      self.shouldFail(NotEnoughSharesError, "test_2_good_8_broken_fail",
853-                                      "Failed to get enough shareholders: have 2, need 3",
854-                                      self.download_immutable))
855+        d = defer.succeed(None)
856+        for mutable in [False, True]:
857+            d.addCallback(lambda ign: self._set_up(mutable, "test_2_good_8_broken_fail"))
858+            d.addCallback(lambda ign: self._break(self.servers[2:]))
859+            d.addCallback(lambda ign: self._should_fail_download())
860         return d
861 
862     def test_2_good_8_noshares_fail(self):
863hunk ./src/allmydata/test/test_hung_server.py 139
864-        d = self._set_up("test_2_good_8_noshares_fail")
865-        d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[2:]))
866-        d.addCallback(lambda ign:
867-                      self.shouldFail(NotEnoughSharesError, "test_2_good_8_noshares_fail",
868-                                      "Failed to get enough shareholders: have 2, need 3",
869-                                      self.download_immutable))
870+        d = defer.succeed(None)
871+        for mutable in [False, True]:
872+            d.addCallback(lambda ign: self._set_up(mutable, "test_2_good_8_noshares_fail"))
873+            d.addCallback(lambda ign: self._delete_all_shares_from(self.servers[2:]))
874+            d.addCallback(lambda ign: self._should_fail_download())
875         return d
876 
877hunk ./src/allmydata/test/test_hung_server.py 146
878-    def test_2_good_8_hung_then_1_recovers(self):
879-        recovered = defer.Deferred()
880-        d = self._set_up("test_2_good_8_hung_then_1_recovers")
881-        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
882-        d.addCallback(lambda ign: self._hang(self.servers[3:]))
883-        d.addCallback(lambda ign: self.download_immutable())
884-        reactor.callLater(5, recovered.callback, None)
885+    def test_2_good_8_broken_copied_share(self):
886+        d = defer.succeed(None)
887+        for mutable in [False, True]:
888+            d.addCallback(lambda ign: self._set_up(mutable, "test_2_good_8_broken_copied_share"))
889+            d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[2:3], self.servers[0]))
890+            d.addCallback(lambda ign: self._break(self.servers[2:]))
891+            d.addCallback(lambda ign: self._check_download())
892         return d
893 
894hunk ./src/allmydata/test/test_hung_server.py 155
895-    def test_2_good_8_hung_then_1_recovers_with_2_shares(self):
896-        recovered = defer.Deferred()
897-        d = self._set_up("test_2_good_8_hung_then_1_recovers_with_2_shares")
898-        d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[0:1], self.servers[2]))
899-        d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
900-        d.addCallback(lambda ign: self._hang(self.servers[3:]))
901-        d.addCallback(lambda ign: self.download_immutable())
902-        reactor.callLater(5, recovered.callback, None)
903+    def test_2_good_8_broken_duplicate_share_fail(self):
904+        d = defer.succeed(None)
905+        for mutable in [False, True]:
906+            d.addCallback(lambda ign: self._set_up(mutable, "test_2_good_8_broken_duplicate_share_fail"))
907+            d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[1:2], self.servers[0]))
908+            d.addCallback(lambda ign: self._break(self.servers[2:]))
909+            d.addCallback(lambda ign: self._should_fail_download())
910         return d
911 
912hunk ./src/allmydata/test/test_hung_server.py 164
913-    def download_immutable(self):
914-        n = self.c0.create_node_from_uri(self.uri)
915-        d = download_to_data(n)
916-        def _got_data(data):
917-            self.failUnlessEqual(data, immutable_plaintext)
918-        d.addCallback(_got_data)
919+    # The tests below do not currently pass for mutable files.
920+
921+    def test_3_good_7_hung(self):
922+        d = defer.succeed(None)
923+        for mutable in [False]:
924+            d.addCallback(lambda ign: self._set_up(mutable, "test_3_good_7_hung"))
925+            d.addCallback(lambda ign: self._hang(self.servers[3:]))
926+            d.addCallback(lambda ign: self._check_download())
927         return d
928 
929hunk ./src/allmydata/test/test_hung_server.py 174
930-    # unused
931-    def download_mutable(self):
932-        n = self.c0.create_node_from_uri(self.uri)
933-        d = n.download_best_version()
934-        def _got_data(data):
935-            self.failUnlessEqual(data, mutable_plaintext)
936-        d.addCallback(_got_data)
937+    def test_2_good_8_hung_then_1_recovers(self):
938+        d = defer.succeed(None)
939+        for mutable in [False]:
940+            recovered = defer.Deferred()
941+            d.addCallback(lambda ign: self._set_up(mutable, "test_2_good_8_hung_then_1_recovers"))
942+            d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
943+            d.addCallback(lambda ign: self._hang(self.servers[3:]))
944+            d.addCallback(lambda ign: reactor.callLater(5, recovered.callback, None))
945+            d.addCallback(lambda ign: self._check_download())
946+        return d
947+
948+    def test_2_good_8_hung_then_1_recovers_with_2_shares(self):
949+        d = defer.succeed(None)
950+        for mutable in [False]:
951+            recovered = defer.Deferred()
952+            d.addCallback(lambda ign: self._set_up(mutable, "test_2_good_8_hung_then_1_recovers_with_2_shares"))
953+            d.addCallback(lambda ign: self._copy_all_shares_from(self.servers[0:1], self.servers[2]))
954+            d.addCallback(lambda ign: self._hang(self.servers[2:3], until=recovered))
955+            d.addCallback(lambda ign: self._hang(self.servers[3:]))
956+            d.addCallback(lambda ign: reactor.callLater(5, recovered.callback, None))
957+            d.addCallback(lambda ign: self._check_download())
958         return d
959}
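
After the patch above, each test chains its immutable (and, where supported, mutable) variant onto a single Deferred, and expected failures go through _should_fail_download, which looks for NotEnoughSharesError on immutable downloads and UnrecoverableFileError on mutable ones.  A condensed illustration of that chaining (run_case and the mutable=mutable default-argument binding are choices made for this sketch, not code from the patch):

    from twisted.internet import defer

    def run_case(testcase, variants=(False,)):
        """Hypothetical driver: 'testcase' is assumed to provide the _set_up,
        _hang and _check_download helpers from test_hung_server.py.  Only the
        immutable variant is enabled by default, matching the hung-server
        tests above."""
        d = defer.succeed(None)
        for mutable in variants:
            # Bind the loop variable now; these callbacks run later.
            d.addCallback(lambda ign, mutable=mutable:
                          testcase._set_up(mutable, "example_case"))
            d.addCallback(lambda ign: testcase._hang(testcase.servers[3:]))
            d.addCallback(lambda ign: testcase._check_download())
        return d
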
960
961Context:
962
963[docs: update relnotes.txt for Tahoe-LAFS v1.6
964zooko@zooko.com**20100128171257
965 Ignore-this: 920df92152aead69ef861b9b2e8ff218
966]
967[Address comments by Kevan on 833 and add test for stripping spaces
968david-sarah@jacaranda.org**20100127230642
969 Ignore-this: de36aeaf4afb3ba05dbeb49a5e9a6b26
970]
971[Eliminate 'foo if test else bar' syntax that isn't supported by Python 2.4
972david-sarah@jacaranda.org**20100129035210
973 Ignore-this: 70eafd487b4b6299beedd63b4a54a0c
974]
975[Fix example JSON in webapi.txt that cannot occur in practice
976david-sarah@jacaranda.org**20100129032742
977 Ignore-this: 361a1ba663d77169aeef93caef870097
978]
979[Add mutable field to t=json output for unknown nodes, when mutability is known
980david-sarah@jacaranda.org**20100129031424
981 Ignore-this: 1516d63559bdfeb6355485dff0f5c04e
982]
983[Show -IMM and -RO suffixes for types of immutable and read-only unknown nodes in directory listings
984david-sarah@jacaranda.org**20100128220800
985 Ignore-this: dc5c17c0a566398f88e4303c41321e66
986]
987[Fix inaccurate comment in test_mutant_dirnodes_are_omitted
988david-sarah@jacaranda.org**20100128202456
989 Ignore-this: 9fa17ed7feac9e4d084f1b2338c76fca
990]
991[test_runner: cleanup, refactor common code into a non-executable method
992Brian Warner <warner@lothar.com>**20100127224040
993 Ignore-this: 4cb4aada87777771f688edfd8129ffca
994 
995 Having both test_node() and test_client() (one of which calls the other) felt
996 confusing to me, so I changed it to have test_node(), test_client(), and a
997 common do_create() helper method.
998]
999[scripts/runner.py: simplify David-Sarah's clever grouped-commands usage trick
1000Brian Warner <warner@lothar.com>**20100127223758
1001 Ignore-this: 70877ebf06ae59f32960b0aa4ce1d1ae
1002]
1003[tahoe backup: skip all symlinks, with warning. Fixes #850, addresses #641.
1004Brian Warner <warner@lothar.com>**20100127223517
1005 Ignore-this: ab5cf05158d32a575ca8efc0f650033f
1006]
1007[NEWS: update with all recent user-visible changes
1008Brian Warner <warner@lothar.com>**20100127222209
1009 Ignore-this: 277d24568018bf4f3fb7736fda64eceb
1010]
1011["tahoe backup": fix --exclude-vcs docs to include Git
1012Brian Warner <warner@lothar.com>**20100127201044
1013 Ignore-this: 756a58dde21bdc65aa62b81803605b5
1014]
1015[docs: fix references to --no-storage, explanation of [storage] section
1016Brian Warner <warner@lothar.com>**20100127200956
1017 Ignore-this: f4be1763a585e1ac6299a4f1b94a59e0
1018]
1019[cli: merge the better version of David-Sarah's split-usage-and-help patch with the earlier version that I mistakenly committed
1020zooko@zooko.com**20100126044559
1021 Ignore-this: 284d188e13b7901013cbb650168e6447
1022]
1023[Split tahoe --help options into groups.
1024david-sarah@jacaranda.org**20100112043935
1025 Ignore-this: 610f9c41b00e6863e3cd047379733e3a
1026]
1027[Miscellaneous documentation, test, and code formatting tweaks.
1028david-sarah@jacaranda.org**20100127070309
1029 Ignore-this: 84ca7e4bb7c64221ae2c61144ef5edef
1030]
1031[Prevent mutable objects from being retrieved from an immutable directory, and associated forward-compatibility improvements.
1032david-sarah@jacaranda.org**20100127064430
1033 Ignore-this: 5ef6a3554cf6bef0bf0712cc7d6c0252
1034]
1035[docs: further CREDITS level-ups for Nils, Kevan, David-Sarah
1036zooko@zooko.com**20100126170021
1037 Ignore-this: 1e513e85cf7b7abf57f056e6d7544b38
1038]
1039[ftpd: clearer error message if Twisted needs a patch (by Nils Durner)
1040zooko@zooko.com**20100126143411
1041 Ignore-this: 440e6831ae6da5135c1edd081c93871f
1042]
1043[Add 'docs/performance.txt', which (for the moment) describes mutable file performance issues
1044Kevan Carstensen <kevan@isnotajoke.com>**20100115204500
1045 Ignore-this: ade4e500217db2509aee35aacc8c5dbf
1046]
1047[docs: more CREDITS for François, Kevan, and David-Sarah
1048zooko@zooko.com**20100126132133
1049 Ignore-this: f37d4977c13066fcac088ba98a31b02e
1050]
1051[tahoe_backup.py: display warnings on errors instead of stopping the whole backup. Fix #729.
1052francois@ctrlaltdel.ch**20100120094249
1053 Ignore-this: 7006ea4b0910b6d29af6ab4a3997a8f9
1054 
1055 This patch displays a warning to the user in two cases:
1056   
1057   1. When special files like symlinks, fifos, devices, etc. are found in the
1058      local source.
1059   
1060   2. If files or directories are not readables by the user running the 'tahoe
1061      backup' command.
1062 
1063 In verbose mode, the number of skipped files and directories is printed at the
1064 end of the backup.
1065 
1066 Exit status returned by 'tahoe backup':
1067 
1068   - 0 everything went fine
1069   - 1 the backup failed
1070   - 2 files were skipped during the backup
1071 
1072]
1073[Message saying that we couldn't find bin/tahoe should say where we looked
1074david-sarah@jacaranda.org**20100116204556
1075 Ignore-this: 1068576fd59ea470f1e19196315d1bb
1076]
1077[Change running.html to describe 'tahoe run'
1078david-sarah@jacaranda.org**20100112044409
1079 Ignore-this: 23ad0114643ce31b56e19bb14e011e4f
1080]
1081[cli: split usage strings into groups (patch by David-Sarah Hopwood)
1082zooko@zooko.com**20100126043921
1083 Ignore-this: 51928d266a7292b873f87f7d53c9a01e
1084]
1085[Add create-node CLI command, and make create-client equivalent to create-node --no-storage (fixes #760)
1086david-sarah@jacaranda.org**20100116052055
1087 Ignore-this: 47d08b18c69738685e13ff365738d5a
1088]
1089[contrib/fuse/runtests.py: Fix #888, configure settings in tahoe.cfg and don't treat warnings as failure
1090francois@ctrlaltdel.ch**20100109123010
1091 Ignore-this: 2590d44044acd7dfa3690c416cae945c
1092 
1093 Fix a few bitrotten pieces in the FUSE test script.  It now configures tahoe
1094 node settings by editing tahoe.cfg which is the new supported method.
1095 
1096 It also tolerates warnings issued by the mount command; the cause of these
1097 warnings is the same as in #876 (contrib/fuse/runtests.py doesn't tolerate
1098 deprecation warnings).
1099 
1100]
1101[Fix webapi t=mkdir with multpart/form-data, as on the Welcome page. Closes #919.
1102Brian Warner <warner@lothar.com>**20100121065052
1103 Ignore-this: 1f20ea0a0f1f6d6c1e8e14f193a92c87
1104]
1105[Fix boodlegrid use of set_children
1106david-sarah@jacaranda.org**20100126063414
1107 Ignore-this: 3aa2d4836f76303b2bacecd09611f999
1108]
1109[Remove replace= parameter to mkdir-immutable and mkdir-with-children
1110david-sarah@jacaranda.org**20100124224325
1111 Ignore-this: 25207bcc946c0c43d9528718e76ba7b
1112]
1113[Warn about test failures due to setting FLOG* env vars
1114david-sarah@jacaranda.org**20100124220629
1115 Ignore-this: 1c25247ca0f0840390a1b7259a9f4a3c
1116]
1117[Patch to accept t=set-children as well as t=set_children
1118david-sarah@jacaranda.org**20100124030020
1119 Ignore-this: 2c061f12af817cdf77feeeb64098ec3a
1120]
1121[tahoe_add_alias.py: minor refactoring
1122Brian Warner <warner@lothar.com>**20100115064220
1123 Ignore-this: 29910e81ad11209c9e493d65fd2dab9b
1124]
1125[test_dirnode.py: reduce scope of a Client instance, suggested by Kevan.
1126Brian Warner <warner@lothar.com>**20100115062713
1127 Ignore-this: b35efd9e6027e43de6c6f509bfb4ccaa
1128]
1129[test_provisioning: STAN is not always a list. Fix by David-Sarah Hopwood.
1130Brian Warner <warner@lothar.com>**20100115014632
1131 Ignore-this: 9989de7f1e00907706d2b63153138219
1132]
1133[web/directory.py mkdir-immutable: hush pyflakes, add TODO for #903 behavior
1134Brian Warner <warner@lothar.com>**20100114222804
1135 Ignore-this: 717cd3b9a1c8aeee76938c9641db7356
1136]
1137[hush pyflakes-0.4.0 warnings: slightly less-trivial fixes. Closes #900.
1138Brian Warner <warner@lothar.com>**20100114221719
1139 Ignore-this: f774f4637e256ad55502659413a811a8
1140 
1141 This includes one fix (in test_web) which was testing the wrong thing.
1142]
1143[hush pyflakes-0.4.0 warnings: remove trivial unused variables. For #900.
1144Brian Warner <warner@lothar.com>**20100114221529
1145 Ignore-this: e96106c8f1a99fbf93306fbfe9a294cf
1146]
1147[tahoe add-alias/create-alias: don't corrupt non-newline-terminated alias
1148Brian Warner <warner@lothar.com>**20100114210246
1149 Ignore-this: 9c994792e53a85159d708760a9b1b000
1150 file. Closes #741.
1151]
1152[change docs and --help to use "grid" instead of "virtual drive": closes #892.
1153Brian Warner <warner@lothar.com>**20100114201119
1154 Ignore-this: a20d4a4dcc4de4e3b404ff72d40fc29b
1155 
1156 Thanks to David-Sarah Hopwood for the patch.
1157]
1158[backupdb.txt: fix ST_CTIME reference
1159Brian Warner <warner@lothar.com>**20100114194052
1160 Ignore-this: 5a189c7a1181b07dd87f0a08ea31b6d3
1161]
1162[client.py: fix/update comments on KeyGenerator
1163Brian Warner <warner@lothar.com>**20100113004226
1164 Ignore-this: 2208adbb3fd6a911c9f44e814583cabd
1165]
1166[Clean up log.err calls, for one of the issues in #889.
1167Brian Warner <warner@lothar.com>**20100112013343
1168 Ignore-this: f58455ce15f1fda647c5fb25d234d2db
1169 
1170 allmydata.util.log.err() either takes a Failure as the first positional
1171 argument, or takes no positional arguments and must be invoked in an
1172 exception handler. Fixed its signature to match both foolscap.logging.log.err
1173 and twisted.python.log.err . Included a brief unit test.
1174]
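
The entry above describes two calling styles for log.err. A minimal sketch of both, written against twisted.python.log.err (which, per the entry, allmydata.util.log.err's signature now matches):

    from twisted.python import log
    from twisted.python.failure import Failure

    def report_failure(f):
        # Style 1: an explicit Failure as the first positional argument,
        # e.g. from a Deferred errback chain.
        assert isinstance(f, Failure)
        log.err(f, "operation failed")

    def risky():
        try:
            1 / 0
        except ZeroDivisionError:
            # Style 2: no positional arguments; must be invoked inside an
            # exception handler, which supplies the current exception.
            log.err()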
1175[tidy up DeadReferenceError handling, ignore them in add_lease calls
1176Brian Warner <warner@lothar.com>**20100112000723
1177 Ignore-this: 72f1444e826fd0b9db6d318f89603c38
1178 
1179 Stop checking separately for ConnectionDone/ConnectionLost, since those have
1180 been folded into DeadReferenceError since foolscap-0.3.1 . Write
1181 rrefutil.trap_deadref() in terms of rrefutil.trap_and_discard() to improve
1182 code coverage.
1183]
1184[NEWS: improve "tahoe backup" notes, mention first-backup-after-upgrade duration
1185Brian Warner <warner@lothar.com>**20100111190132
1186 Ignore-this: 10347c590b3375964579ba6c2b0edb4f
1187 
1188 Thanks to Francois Deppierraz for the suggestion.
1189]
1190[test_repairer: add (commented-out) test_each_byte, to see exactly what the
1191Brian Warner <warner@lothar.com>**20100110203552
1192 Ignore-this: 8e84277d5304752edeff052b97821815
1193 Verifier misses
1194 
1195 The results (described in #819) match our expectations: it misses corruption
1196 in unused share fields and in most container fields (which are only visible
1197 to the storage server, not the client). 1265 bytes of a 2753 byte
1198 share (hosting a 56-byte file with an artificially small segment size) are
1199 unused, mostly in the unused tail of the overallocated UEB space (765 bytes),
1200 and the allocated-but-unwritten plaintext_hash_tree (480 bytes).
1201]
1202[repairer: fix some wrong offsets in the randomized verifier tests, debugged by Brian
1203zooko@zooko.com**20100110203721
1204 Ignore-this: 20604a609db8706555578612c1c12feb
1205 fixes #819
1206]
1207[test_repairer: fix colliding basedir names, which caused test inconsistencies
1208Brian Warner <warner@lothar.com>**20100110084619
1209 Ignore-this: b1d56dd27e6ab99a7730f74ba10abd23
1210]
1211[repairer: add deterministic test for #819, mark as TODO
1212zooko@zooko.com**20100110013619
1213 Ignore-this: 4cb8bb30b25246de58ed2b96fa447d68
1214]
1215[contrib/fuse/runtests.py: Tolerate the tahoe CLI returning deprecation warnings
1216francois@ctrlaltdel.ch**20100109175946
1217 Ignore-this: 419c354d9f2f6eaec03deb9b83752aee
1218 
1219 Depending on the versions of external libraries such as Twisted or Foolscap,
1220 the tahoe CLI can display deprecation warnings on stdout.  The tests should
1221 not interpret those warnings as a failure if the node is in fact correctly
1222 started.
1223   
1224 See http://allmydata.org/trac/tahoe/ticket/859 for an example of deprecation
1225 warnings.
1226 
1227 fixes #876
1228]
1229[contrib: fix fuse_impl_c to use new Python API
1230zooko@zooko.com**20100109174956
1231 Ignore-this: 51ca1ec7c2a92a0862e9b99e52542179
1232 original patch by Thomas Delaet, fixed by François, reviewed by Brian, committed by me
1233]
1234[docs: CREDITS: add David-Sarah to the CREDITS file
1235zooko@zooko.com**20100109060435
1236 Ignore-this: 896062396ad85f9d2d4806762632f25a
1237]
1238[mutable/publish: don't loop() right away upon DeadReferenceError. Closes #877
1239Brian Warner <warner@lothar.com>**20100102220841
1240 Ignore-this: b200e707b3f13aa8251981362b8a3e61
1241 
1242 The bug was that a disconnected server could cause us to re-enter the initial
1243 loop() call, sending multiple queries to a single server, provoking an
1244 incorrect UCWE. To fix it, stall the loop() with an eventual.fireEventually()
1245]
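
A rough sketch of the stall-the-loop pattern described above, using foolscap's fireEventually; the function name and the publisher argument are hypothetical stand-ins for the real Publish code:

    from foolscap.eventual import fireEventually

    def restart_loop_later(publisher):
        # Rather than calling publisher.loop() synchronously from the
        # errback (which could re-enter the initial loop() and send a
        # duplicate query to one server), stall until a later reactor turn.
        d = fireEventually()
        d.addCallback(lambda ign: publisher.loop())
        return d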
1246[immutable/checker.py: oops, forgot some imports. Also hush pyflakes.
1247Brian Warner <warner@lothar.com>**20091229233909
1248 Ignore-this: 4d61bd3f8113015a4773fd4768176e51
1249]
1250[mutable repair: return successful=False when numshares<k (thus repair fails),
1251Brian Warner <warner@lothar.com>**20091229233746
1252 Ignore-this: d881c3275ff8c8bee42f6a80ca48441e
1253 instead of weird errors. Closes #874 and #786.
1254 
1255 Previously, if the file had 0 shares, this would raise TypeError as it tried
1256 to call download_version(None). If the file had some shares but fewer than
1257 'k', it would incorrectly raise MustForceRepairError.
1258 
1259 Added get_successful() to the IRepairResults API, to give repair() a place to
1260 report non-code-bug problems like this.
1261]
1262[node.py/interfaces.py: minor docs fixes
1263Brian Warner <warner@lothar.com>**20091229230409
1264 Ignore-this: c86ad6342ef0f95d50639b4f99cd4ddf
1265]
1266[NEWS: fix 1.4.1 announcement w.r.t. add-lease behavior in older releases
1267Brian Warner <warner@lothar.com>**20091229230310
1268 Ignore-this: bbbbb9c961f3bbcc6e5dbe0b1594822
1269]
1270[checker: don't let failures in add-lease affect checker results. Closes #875.
1271Brian Warner <warner@lothar.com>**20091229230108
1272 Ignore-this: ef1a367b93e4d01298c2b1e6ca59c492
1273 
1274 Mutable servermap updates and the immutable checker, when run with
1275 add_lease=True, send both the do-you-have-block and add-lease commands in
1276 parallel, to avoid an extra round trip time. Many older servers have problems
1277 with add-lease and raise various exceptions, which don't generally matter.
1278 The client-side code was catching+ignoring some of them, but unrecognized
1279 exceptions were passed through to the DYHB code, concealing the DYHB results
1280 from the checker, making it think the server had no shares.
1281 
1282 The fix is to separate the code paths. Both commands are sent at the same
1283 time, but the errback path from add-lease is handled separately. Known
1284 exceptions are ignored, the others (both unknown-remote and all-local) are
1285 logged (log.WEIRD, which will trigger an Incident), but neither will affect
1286 the DYHB results.
1287 
1288 The add-lease message is sent first, and we know that the server handles them
1289 synchronously. So when the checker is done, we can be sure that all the
1290 add-lease messages have been retired. This makes life easier for unit tests.
1291]
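
A simplified sketch of the separated code paths described above, assuming RIStorageServer-style get_buckets/add_lease remote methods; the exception classification and log.WEIRD logging are elided:

    def query_one_server(rref, storage_index, renew_secret, cancel_secret):
        # The do-you-have-block query: its result is the only thing the
        # checker looks at when counting shares.
        d = rref.callRemote("get_buckets", storage_index)

        # add-lease is sent in parallel, but its errback chain is kept
        # separate, so a failure here can never look like "no shares".
        d2 = rref.callRemote("add_lease", storage_index,
                             renew_secret, cancel_secret)
        def _swallow_add_lease_failure(f):
            # Known-harmless exceptions from older servers are ignored;
            # anything else would be logged at log.WEIRD (elided here).
            return None
        d2.addErrback(_swallow_add_lease_failure)

        return d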
1292[test_cli: verify fix for "tahoe get" not creating empty file on error (#121)
1293Brian Warner <warner@lothar.com>**20091227235444
1294 Ignore-this: 6444d52413b68eb7c11bc3dfdc69c55f
1295]
1296[addendum to "Fix 'tahoe ls' on files (#771)"
1297Brian Warner <warner@lothar.com>**20091227232149
1298 Ignore-this: 6dd5e25f8072a3153ba200b7fdd49491
1299 
1300 tahoe_ls.py: tolerate missing metadata
1301 web/filenode.py: minor cleanups
1302 test_cli.py: test 'tahoe ls FILECAP'
1303]
1304[Fix 'tahoe ls' on files (#771). Patch adapted from Kevan Carstensen.
1305Brian Warner <warner@lothar.com>**20091227225443
1306 Ignore-this: 8bf8c7b1cd14ea4b0ebd453434f4fe07
1307 
1308 web/filenode.py: also serve edge metadata when using t=json on a
1309                  DIRCAP/childname object.
1310 tahoe_ls.py: list file objects as if we were listing one-entry directories.
1311              Show edge metadata if we have it, which will be true when doing
1312              'tahoe ls DIRCAP/filename' and false when doing 'tahoe ls
1313              FILECAP'
1314]
1315[tahoe_get: don't create the output file on error. Closes #121.
1316Brian Warner <warner@lothar.com>**20091227220404
1317 Ignore-this: 58d5e793a77ec6e87d9394ade074b926
1318]
1319[webapi: don't accept zero-length childnames during traversal. Closes #358, #676.
1320Brian Warner <warner@lothar.com>**20091227201043
1321 Ignore-this: a9119dec89e1c7741f2289b0cad6497b
1322 
1323 This forbids operations that would implicitly create a directory with a
1324 zero-length (empty string) name, like what you'd get if you did "tahoe put
1325 local /oops/blah" (#358) or "POST /uri/CAP//?t=mkdir" (#676). The error
1326 message is fairly friendly too.
1327 
1328 Also added code to "tahoe put" to catch this error beforehand and suggest the
1329 correct syntax (i.e. without the leading slash).
1330]
1331[CLI: send 'Accept:' header to ask for text/plain tracebacks. Closes #646.
1332Brian Warner <warner@lothar.com>**20091227195828
1333 Ignore-this: 44c258d4d4c7dac0ed58adb22f73331
1334 
1335 The webapi has been looking for an Accept header since 1.4.0, but it treats a
1336 missing header as equal to */* (to honor RFC2616). This change finally
1337 modifies our CLI tools to ask for "text/plain, application/octet-stream",
1338 which seems roughly correct (we either want a plain-text traceback or error
1339 message, or an uninterpreted chunk of binary data to save to disk). Some day
1340 we'll figure out how JSON fits into this scheme.
1341]
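
To illustrate, roughly the kind of request the CLI tools now make; the URL and cap are placeholders, and urllib2 is the Python 2 library of the era:

    import urllib2

    # The point here is only the Accept header being sent with the request.
    req = urllib2.Request("http://127.0.0.1:3456/uri/URI:CHK:...",
                          headers={"Accept": "text/plain, application/octet-stream"})
    # urllib2.urlopen(req) would then fetch it, receiving a plain-text
    # traceback (rather than an HTML error page) if something goes wrong.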
1342[Makefile: upload-tarballs: switch from xfer-client to flappclient, closes #350
1343Brian Warner <warner@lothar.com>**20091227163703
1344 Ignore-this: 3beeecdf2ad9c2438ab57f0e33dcb357
1345 
1346 I've also set up a new flappserver on source@allmydata.org to receive the
1347 tarballs. We still need to replace the gutsy buildslave (which is where the
1348 tarballs used to be generated+uploaded) and give it the new FURL.
1349]
1350[misc/ringsim.py: make it deterministic, more detail about grid-is-full behavior
1351Brian Warner <warner@lothar.com>**20091227024832
1352 Ignore-this: a691cc763fb2e98a4ce1767c36e8e73f
1353]
1354[misc/ringsim.py: tool to discuss #302
1355Brian Warner <warner@lothar.com>**20091226060339
1356 Ignore-this: fc171369b8f0d97afeeb8213e29d10ed
1357]
1358[docs: fix helper.txt to describe new config style
1359zooko@zooko.com**20091224223522
1360 Ignore-this: 102e7692dc414a4b466307f7d78601fe
1361]
1362[docs/stats.txt: add TOC, notes about controlling gatherer's listening port
1363Brian Warner <warner@lothar.com>**20091224202133
1364 Ignore-this: 8eef63b0e18db5aa8249c2eafde02c05
1365 
1366 Thanks to Jody Harris for the suggestions.
1367]
1368[Add docs/stats.py, explaining Tahoe stats, the gatherer, and the munin plugins.
1369Brian Warner <warner@lothar.com>**20091223052400
1370 Ignore-this: 7c9eeb6e5644eceda98b59a67730ccd5
1371]
1372[more #859: avoid deprecation warning for unit tests too, hush pyflakes
1373Brian Warner <warner@lothar.com>**20091215000147
1374 Ignore-this: 193622e24d31077da825a11ed2325fd3
1375 
1376 * factor maybe-import-sha logic into util.hashutil
1377]
1378[use hashlib module if available, thus avoiding a DeprecationWarning for importing the old sha module; fixes #859
1379zooko@zooko.com**20091214212703
1380 Ignore-this: 8d0f230a4bf8581dbc1b07389d76029c
1381]
1382[docs: reflow architecture.txt to 78-char lines
1383zooko@zooko.com**20091208232943
1384 Ignore-this: 88f55166415f15192e39407815141f77
1385]
1386[docs: update the about.html a little
1387zooko@zooko.com**20091208212737
1388 Ignore-this: 3fe2d9653c6de0727d3e82bd70f2a8ed
1389]
1390[docs: remove obsolete doc file "codemap.txt"
1391zooko@zooko.com**20091113163033
1392 Ignore-this: 16bc21a1835546e71d1b344c06c61ebb
1393 I started to update this to reflect the current codebase, but then I thought (a) nobody seemed to notice that it hasn't been updated since December 2007, and (b) it will just bit-rot again, so I'm removing it.
1394]
1395[mutable/retrieve.py: stop reaching into private MutableFileNode attributes
1396Brian Warner <warner@lothar.com>**20091208172921
1397 Ignore-this: 61e548798c1105aed66a792bf26ceef7
1398]
1399[mutable/servermap.py: stop reaching into private MutableFileNode attributes
1400Brian Warner <warner@lothar.com>**20091208172608
1401 Ignore-this: b40a6b62f623f9285ad96fda139c2ef2
1402]
1403[mutable/servermap.py: oops, query N+e servers in MODE_WRITE, not k+e
1404Brian Warner <warner@lothar.com>**20091208171156
1405 Ignore-this: 3497f4ab70dae906759007c3cfa43bc
1406 
1407 under normal conditions, this wouldn't cause any problems, but if the shares
1408 are really sparse (perhaps because new servers were added), then
1409 file-modifying operations might stop looking too early and leave old shares in place
1410]
1411[control.py: fix speedtest: use download_best_version (not read) on mutable nodes
1412Brian Warner <warner@lothar.com>**20091207060512
1413 Ignore-this: 7125eabfe74837e05f9291dd6414f917
1414]
1415[FTP-and-SFTP.txt: fix ssh-keygen pointer
1416Brian Warner <warner@lothar.com>**20091207052803
1417 Ignore-this: bc2a70ee8c58ec314e79c1262ccb22f7
1418]
1419[setup: ignore _darcs in the "test-clean" test and make the "clean" step remove all .eggs in the root dir
1420zooko@zooko.com**20091206184835
1421 Ignore-this: 6066bd160f0db36d7bf60aba405558d2
1422]
1423[remove MutableFileNode.download(), prefer download_best_version() instead
1424Brian Warner <warner@lothar.com>**20091201225438
1425 Ignore-this: 5733eb373a902063e09fd52cc858dec0
1426]
1427[Simplify immutable download API: use just filenode.read(consumer, offset, size)
1428Brian Warner <warner@lothar.com>**20091201225330
1429 Ignore-this: bdedfb488ac23738bf52ae6d4ab3a3fb
1430 
1431 * remove Downloader.download_to_data/download_to_filename/download_to_filehandle
1432 * remove download.Data/FileName/FileHandle targets
1433 * remove filenode.download/download_to_data/download_to_filename methods
1434 * leave Downloader.download (the whole Downloader will go away eventually)
1435 * add util.consumer.MemoryConsumer/download_to_data, for convenience
1436   (this is mostly used by unit tests, but it gets used by enough non-test
1437    code to warrant putting it in allmydata.util)
1438 * update tests
1439 * removes about 180 lines of code. Yay negative code days!
1440 
1441 Overall plan is to rewrite immutable/download.py and leave filenode.read() as
1442 the sole read-side API.
1443]
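
A minimal usage sketch of the read-side API described above, assuming a filenode already obtained from the client and the new allmydata.util.consumer helpers:

    from allmydata.util.consumer import MemoryConsumer, download_to_data

    def fetch_whole_file(filenode):
        # Convenience wrapper: drives filenode.read() with a MemoryConsumer
        # and fires with the file's bytes.
        return download_to_data(filenode)

    def fetch_span(filenode, offset, size):
        # The underlying primitive: stream the requested span into any
        # IConsumer; the Deferred fires when the read is complete.
        return filenode.read(MemoryConsumer(), offset, size)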
1444[server.py: undo my bogus 'correction' of David-Sarah's comment fix
1445Brian Warner <warner@lothar.com>**20091201024607
1446 Ignore-this: ff4bb58f6a9e045b900ac3a89d6f506a
1447 
1448 and move it to a better line
1449]
1450[Implement more coherent behavior when copying with dircaps/filecaps (closes #761). Patch by Kevan Carstensen.
1451"Brian Warner <warner@lothar.com>"**20091130211009]
1452[storage.py: update comment
1453"Brian Warner <warner@lothar.com>"**20091130195913]
1454[storage server: detect disk space usage on Windows too (fixes #637)
1455david-sarah@jacaranda.org**20091121055644
1456 Ignore-this: 20fb30498174ce997befac7701fab056
1457]
1458[make status of finished operations consistently "Finished"
1459david-sarah@jacaranda.org**20091121061543
1460 Ignore-this: 97d483e8536ccfc2934549ceff7055a3
1461]
1462[NEWS: update with all user-visible changes since the last release
1463Brian Warner <warner@lothar.com>**20091127224217
1464 Ignore-this: 741da6cd928e939fb6d21a61ea3daf0b
1465]
1466[update "tahoe backup" docs, and webapi.txt's mkdir-with-children
1467Brian Warner <warner@lothar.com>**20091127055900
1468 Ignore-this: defac1fb9a2335b0af3ef9dbbcc67b7e
1469]
1470[Add dirnodes to backupdb and "tahoe backup", closes #606.
1471Brian Warner <warner@lothar.com>**20091126234257
1472 Ignore-this: fa88796fcad1763c6a2bf81f56103223
1473 
1474 * backups now share dirnodes with any previous backup, in any location,
1475   so renames and moves are handled very efficiently
1476 * "tahoe backup" no longer bothers reading the previous snapshot
1477 * if you switch grids, you should delete ~/.tahoe/private/backupdb.sqlite,
1478   to force new uploads of all files and directories
1479]
1480[webapi: fix t=check for DIR2-LIT (i.e. empty immutable directories)
1481Brian Warner <warner@lothar.com>**20091126232731
1482 Ignore-this: 8513c890525c69c1eca0e80d53a231f8
1483]
1484[PipelineError: fix str() on python2.4 . Closes #842.
1485Brian Warner <warner@lothar.com>**20091124212512
1486 Ignore-this: e62c92ea9ede2ab7d11fe63f43b9c942
1487]
1488[test_uri.py: s/NewDirnode/Dirnode/ , now that they aren't "new" anymore
1489Brian Warner <warner@lothar.com>**20091120075553
1490 Ignore-this: 61c8ef5e45a9d966873a610d8349b830
1491]
1492[interface name cleanups: IFileNode, IImmutableFileNode, IMutableFileNode
1493Brian Warner <warner@lothar.com>**20091120075255
1494 Ignore-this: e3d193c229e2463e1d0b0c92306de27f
1495 
1496 The proper hierarchy is:
1497  IFilesystemNode
1498  +IFileNode
1499  ++IMutableFileNode
1500  ++IImmutableFileNode
1501  +IDirectoryNode
1502 
1503 Also expand test_client.py (NodeMaker) to hit all IFilesystemNode types.
1504]
1505[class name cleanups: s/FileNode/ImmutableFileNode/
1506Brian Warner <warner@lothar.com>**20091120072239
1507 Ignore-this: 4b3218f2d0e585c62827e14ad8ed8ac1
1508 
1509 also fix test/bench_dirnode.py for recent dirnode changes
1510]
1511[Use DIR-IMM and t=mkdir-immutable for "tahoe backup", for #828
1512Brian Warner <warner@lothar.com>**20091118192813
1513 Ignore-this: a4720529c9bc6bc8b22a3d3265925491
1514]
1515[web/directory.py: use "DIR-IMM" to describe immutable directories, not DIR-RO
1516Brian Warner <warner@lothar.com>**20091118191832
1517 Ignore-this: aceafd6ab4bf1cc0c2a719ef7319ac03
1518]
1519[web/info.py: hush pyflakes
1520Brian Warner <warner@lothar.com>**20091118191736
1521 Ignore-this: edc5f128a2b8095fb20686a75747c8
1522]
1523[make get_size/get_current_size consistent for all IFilesystemNode classes
1524Brian Warner <warner@lothar.com>**20091118191624
1525 Ignore-this: bd3449cf96e4827abaaf962672c1665a
1526 
1527 * stop caching most_recent_size in dirnode, rely upon backing filenode for it
1528 * start caching most_recent_size in MutableFileNode
1529 * return None when you don't know, not "?"
1530 * only render None as "?" in the web "more info" page
1531 * add get_size/get_current_size to UnknownNode
1532]
1533[ImmutableDirectoryURIVerifier: fix verifycap handling
1534Brian Warner <warner@lothar.com>**20091118164238
1535 Ignore-this: 6bba5c717b54352262eabca6e805d590
1536]
1537[Add t=mkdir-immutable to the webapi. Closes #607.
1538Brian Warner <warner@lothar.com>**20091118070900
1539 Ignore-this: 311e5fab9a5f28b9e8a28d3d08f3c0d
1540 
1541 * change t=mkdir-with-children to not use multipart/form encoding. Instead,
1542   the request body is all JSON. t=mkdir-immutable uses this format too.
1543 * make nodemaker.create_immutable_dirnode() get convergence from SecretHolder,
1544   but let callers override it
1545 * raise NotDeepImmutableError instead of using assert()
1546 * add mutable= argument to DirectoryNode.create_subdirectory(), default True
1547]
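
A rough sketch of an all-JSON t=mkdir-immutable request as described above; the children follow the t=json style, the caps are truncated placeholders, and the exact field set may differ:

    import json     # the codebase of this era used simplejson; same idea
    import urllib2  # Python 2

    children = {
        "photo.jpg": ["filenode", {"ro_uri": "URI:CHK:...", "metadata": {}}],
        "subdir":    ["dirnode",  {"ro_uri": "URI:DIR2-CHK:...", "metadata": {}}],
    }
    req = urllib2.Request("http://127.0.0.1:3456/uri?t=mkdir-immutable",
                          data=json.dumps(children))
    # POSTing this (urllib2.urlopen(req)) returns the new directory's cap.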
1548[move convergence secret into SecretHolder, next to lease secret
1549Brian Warner <warner@lothar.com>**20091118015444
1550 Ignore-this: 312f85978a339f2d04deb5bcb8f511bc
1551]
1552[nodemaker: implement immutable directories (internal interface), for #607
1553Brian Warner <warner@lothar.com>**20091112002233
1554 Ignore-this: d09fccf41813fdf7e0db177ed9e5e130
1555 
1556 * nodemaker.create_from_cap() now handles DIR2-CHK and DIR2-LIT
1557 * client.create_immutable_dirnode() is used to create them
1558 * no webapi yet
1559]
1560[stop using IURI()/etc as an adapter
1561Brian Warner <warner@lothar.com>**20091111224542
1562 Ignore-this: 9611da7ea6a4696de2a3b8c08776e6e0
1563]
1564[clean up uri-vs-cap terminology, emphasize cap instances instead of URI strings
1565Brian Warner <warner@lothar.com>**20091111222619
1566 Ignore-this: 93626385f6e7f039ada71f54feefe267
1567 
1568  * "cap" means a python instance which encapsulates a filecap/dircap (uri.py)
1569  * "uri" means a string with a "URI:" prefix
1570  * FileNode instances are created with (and retain) a cap instance, and
1571    generate uri strings on demand
1572  * .get_cap/get_readcap/get_verifycap/get_repaircap return cap instances
1573  * .get_uri/get_readonly_uri return uri strings
1574 
1575 * add filenode.download_to_filename() for control.py, should find a better way
1576 * use MutableFileNode.init_from_cap, not .init_from_uri
1577 * directory URI instances: use get_filenode_cap, not get_filenode_uri
1578 * update/cleanup bench_dirnode.py to match, add Makefile target to run it
1579]
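
A tiny illustration of the cap-vs-uri convention above; 'n' is assumed to be a filenode, and the method names are the ones listed in the entry:

    def show_cap_conventions(n):
        cap = n.get_cap()           # a uri.py instance (a "cap")
        vcap = n.get_verifycap()    # another cap instance
        s = n.get_uri()             # a "URI:..." string (a "uri"), made on demand
        ro = n.get_readonly_uri()   # its read-only counterpart, also a string
        return cap, vcap, s, ro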
1580[add parser for immutable directory caps: DIR2-CHK, DIR2-LIT, DIR2-CHK-Verifier
1581Brian Warner <warner@lothar.com>**20091104181351
1582 Ignore-this: 854398cc7a75bada57fa97c367b67518
1583]
1584[wui: s/TahoeLAFS/Tahoe-LAFS/
1585zooko@zooko.com**20091029035050
1586 Ignore-this: 901e64cd862e492ed3132bd298583c26
1587]
1588[tests: bump up the timeout on test_repairer to see if 120 seconds was too short for François's ARM box to do the test even when it was doing it right.
1589zooko@zooko.com**20091027224800
1590 Ignore-this: 95e93dc2e018b9948253c2045d506f56
1591]
1592[dirnode.pack_children(): add deep_immutable= argument
1593Brian Warner <warner@lothar.com>**20091026162809
1594 Ignore-this: d5a2371e47662c4bc6eff273e8181b00
1595 
1596 This will be used by DIR2:CHK to enforce the deep-immutability requirement.
1597]
1598[webapi: use t=mkdir-with-children instead of a children= arg to t=mkdir .
1599Brian Warner <warner@lothar.com>**20091026011321
1600 Ignore-this: 769cab30b6ab50db95000b6c5a524916
1601 
1602 This is safer: in the earlier API, an old webapi server would silently ignore
1603 the initial children, and clients trying to set them would have to fetch the
1604 newly-created directory to discover the incompatibility. In the new API,
1605 clients using t=mkdir-with-children against an old webapi server will get a
1606 clear error.
1607]
1608[nodemaker.create_new_mutable_directory: pack_children() in initial_contents=
1609Brian Warner <warner@lothar.com>**20091020005118
1610 Ignore-this: bd43c4eefe06fd32b7492bcb0a55d07e
1611 instead of creating an empty file and then adding the children later.
1612 
1613 This should speed up mkdir(initial_children) considerably, removing two
1614 roundtrips and an entire read-modify-write cycle, probably bringing it down
1615 to a single roundtrip. A quick test (against the volunteergrid) suggests a
1616 30% speedup.
1617 
1618 test_dirnode: add new tests to enforce the restrictions that interfaces.py
1619 claims for create_new_mutable_directory(): no UnknownNodes, metadata dicts
1620]
1621[test_dirnode.py: add tests of initial_children= args to client.create_dirnode
1622Brian Warner <warner@lothar.com>**20091017194159
1623 Ignore-this: 2e2da28323a4d5d815466387914abc1b
1624 and nodemaker.create_new_mutable_directory
1625]
1626[update many dirnode interfaces to accept dict-of-nodes instead of dict-of-caps
1627Brian Warner <warner@lothar.com>**20091017192829
1628 Ignore-this: b35472285143862a856bf4b361d692f0
1629 
1630 interfaces.py: define INodeMaker, document argument values, change
1631                create_new_mutable_directory() to take dict-of-nodes. Change
1632                dirnode.set_nodes() and dirnode.create_subdirectory() too.
1633 nodemaker.py: use INodeMaker, update create_new_mutable_directory()
1634 client.py: have create_dirnode() delegate initial_children= to nodemaker
1635 dirnode.py (Adder): take dict-of-nodes instead of list-of-nodes, which
1636                     updates set_nodes() and create_subdirectory()
1637 web/common.py (convert_initial_children_json): create dict-of-nodes
1638 web/directory.py: same
1639 web/unlinked.py: same
1640 test_dirnode.py: update tests to match
1641]
1642[dirnode.py: move pack_children() out to a function, for eventual use by others
1643Brian Warner <warner@lothar.com>**20091017180707
1644 Ignore-this: 6a823fb61f2c180fd38d6742d3196a7a
1645]
1646[move dirnode.CachingDict to dictutil.AuxValueDict, generalize method names,
1647Brian Warner <warner@lothar.com>**20091017180005
1648 Ignore-this: b086933cf429df0fcea16a308d2640dd
1649 improve tests. Let dirnode _pack_children accept either dict or AuxValueDict.
1650]
1651[test/common.py: update FakeMutableFileNode to new contents= callable scheme
1652Brian Warner <warner@lothar.com>**20091013052154
1653 Ignore-this: 62f00a76454a2190d1c8641c5993632f
1654]
1655[The initial_children= argument to nodemaker.create_new_mutable_directory is
1656Brian Warner <warner@lothar.com>**20091013031922
1657 Ignore-this: 72e45317c21f9eb9ec3bd79bd4311f48
1658 now enabled.
1659]
1660[client.create_mutable_file(contents=) now accepts a callable, which is
1661Brian Warner <warner@lothar.com>**20091013031232
1662 Ignore-this: 3c89d2f50c1e652b83f20bd3f4f27c4b
1663 invoked with the new MutableFileNode and is supposed to return the initial
1664 contents. This can be used by e.g. a new dirnode which needs the filenode's
1665 writekey to encrypt its initial children.
1666 
1667 create_mutable_file() still accepts a bytestring too, or None for an empty
1668 file.
1669]
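
A minimal sketch of the callable contents= pattern described above; the packing helper is a hypothetical stand-in for dirnode's real code:

    def make_dirnode_backing_file(client):
        def _pack_initial_children(node):
            # Hypothetical helper: a real dirnode uses node.get_writekey()
            # to encrypt its initial children, then returns the packed
            # table as a bytestring.
            return ""
        # contents= may be a bytestring, None (empty file), or now a
        # callable that receives the freshly-created MutableFileNode.
        return client.create_mutable_file(_pack_initial_children)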
1670[webapi: t=mkdir now accepts initial children, using the same JSON that t=json
1671Brian Warner <warner@lothar.com>**20091013023444
1672 Ignore-this: 574a46ed46af4251abf8c9580fd31ef7
1673 emits.
1674 
1675 client.create_dirnode(initial_children=) now works.
1676]
1677[replace dirnode.create_empty_directory() with create_subdirectory(), which
1678Brian Warner <warner@lothar.com>**20091013021520
1679 Ignore-this: 6b57cb51bcfcc6058d0df569fdc8a9cf
1680 takes an initial_children= argument
1681]
1682[dirnode.set_children: change return value: fire with self instead of None
1683Brian Warner <warner@lothar.com>**20091013015026
1684 Ignore-this: f1d14e67e084e4b2a4e25fa849b0e753
1685]
1686[dirnode.set_nodes: change return value: fire with self instead of None
1687Brian Warner <warner@lothar.com>**20091013014546
1688 Ignore-this: b75b3829fb53f7399693f1c1a39aacae
1689]
1690[dirnode.set_children: take a dict, not a list
1691Brian Warner <warner@lothar.com>**20091013002440
1692 Ignore-this: 540ce72ce2727ee053afaae1ff124e21
1693]
1694[dirnode.set_uri/set_children: change signature to take writecap+readcap
1695Brian Warner <warner@lothar.com>**20091012235126
1696 Ignore-this: 5df617b2d379a51c79148a857e6026b1
1697 instead of a single cap. The webapi t=set_children call benefits too.
1698]
1699[replace Client.create_empty_dirnode() with create_dirnode(), in anticipation
1700Brian Warner <warner@lothar.com>**20091012224506
1701 Ignore-this: cbdaa4266ecb3c6496ffceab4f95709d
1702 of adding initial_children= argument.
1703 
1704 Includes stubbed-out initial_children= support.
1705]
1706[test_web.py: use a less-fake client, making test harness smaller
1707Brian Warner <warner@lothar.com>**20091012222808
1708 Ignore-this: 29e95147f8c94282885c65b411d100bb
1709]
1710[webapi.txt: document t=set_children, other small edits
1711Brian Warner <warner@lothar.com>**20091009200446
1712 Ignore-this: 4d7e76b04a7b8eaa0a981879f778ea5d
1713]
1714[Verifier: check the full cryptext-hash tree on each share. Removed .todos
1715Brian Warner <warner@lothar.com>**20091005221849
1716 Ignore-this: 6fb039c5584812017d91725e687323a5
1717 from the last few test_repairer tests that were waiting on this.
1718]
1719[Verifier: check the full block-hash-tree on each share
1720Brian Warner <warner@lothar.com>**20091005214844
1721 Ignore-this: 3f7ccf6d253f32340f1bf1da27803eee
1722 
1723 Removed the .todo from two test_repairer tests that check this. The only
1724 remaining .todos are on the three crypttext-hash-tree tests.
1725]
1726[Verifier: check the full share-hash chain on each share
1727Brian Warner <warner@lothar.com>**20091005213443
1728 Ignore-this: 3d30111904158bec06a4eac22fd39d17
1729 
1730 Removed the .todo from two test_repairer tests that check this.
1731]
1732[test_repairer: rename Verifier test cases to be more precise and less verbose
1733Brian Warner <warner@lothar.com>**20091005201115
1734 Ignore-this: 64be7094e33338c7c2aea9387e138771
1735]
1736[immutable/checker.py: rearrange code a little bit, make it easier to follow
1737Brian Warner <warner@lothar.com>**20091005200252
1738 Ignore-this: 91cc303fab66faf717433a709f785fb5
1739]
1740[test/common.py: wrap docstrings to 80cols so I can read them more easily
1741Brian Warner <warner@lothar.com>**20091005200143
1742 Ignore-this: b180a3a0235cbe309c87bd5e873cbbb3
1743]
1744[immutable/download.py: wrap to 80cols, no functional changes
1745Brian Warner <warner@lothar.com>**20091005192542
1746 Ignore-this: 6b05fe3dc6d78832323e708b9e6a1fe
1747]
1748[CHK-hashes.svg: cross out plaintext hashes, since we don't include
1749Brian Warner <warner@lothar.com>**20091005010803
1750 Ignore-this: bea2e953b65ec7359363aa20de8cb603
1751 them (until we finish #453)
1752]
1753[docs: a few licensing clarifications requested by Ubuntu
1754zooko@zooko.com**20090927033226
1755 Ignore-this: 749fc8c9aeb6dc643669854a3e81baa7
1756]
1757[setup: remove binary WinFUSE modules
1758zooko@zooko.com**20090924211436
1759 Ignore-this: 8aefc571d2ae22b9405fc650f2c2062
1760 I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process
1761 generate or acquire the binaries as needed.  Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu
1762 Karmic.  (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations
1763 under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.)
1764 In this case, the binaries are from the tahoe-w32-client project: http://allmydata.org/trac/tahoe-w32-client , from which you can also get the source.
1765]
1766[setup: remove binary _fusemodule.so 's
1767zooko@zooko.com**20090924211130
1768 Ignore-this: 74487bbe27d280762ac5dd5f51e24186
1769 I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process generate or acquire the binaries as needed.  Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu Karmic.  (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.)
1770 In this case, these modules come from the MacFUSE project: http://code.google.com/p/macfuse/
1771]
1772[doc: add a copy of LGPL2 for documentation purposes for ubuntu
1773zooko@zooko.com**20090924054218
1774 Ignore-this: 6a073b48678a7c84dc4fbcef9292ab5b
1775]
1776[setup: remove a convenience copy of figleaf, to ease inclusion into Ubuntu Karmic Koala
1777zooko@zooko.com**20090924053215
1778 Ignore-this: a0b0c990d6e2ee65c53a24391365ac8d
1779 We need to carefully document the licence of figleaf in order to get Tahoe-LAFS into Ubuntu Karmic Koala.  However, figleaf isn't really a part of Tahoe-LAFS per se -- this is just a "convenience copy" of a development tool.  The quickest way to make Tahoe-LAFS acceptable for Karmic then, is to remove figleaf from the Tahoe-LAFS tarball itself.  People who want to run figleaf on Tahoe-LAFS (as everyone should want) can install figleaf themselves.  I haven't tested this -- there may be incompatibilities between upstream figleaf and the copy that we had here...
1780]
1781[setup: shebang for misc/build-deb.py to fail quickly
1782zooko@zooko.com**20090819135626
1783 Ignore-this: 5a1b893234d2d0bb7b7346e84b0a6b4d
1784 Without this patch, when I ran "chmod +x ./misc/build-deb.py && ./misc/build-deb.py" then it hung indefinitely.  (I wonder what it was doing.)
1785]
1786[docs: Shawn Willden grants permission for his contributions under GPL2+|TGPPL1+
1787zooko@zooko.com**20090921164651
1788 Ignore-this: ef1912010d07ff2ffd9678e7abfd0d57
1789]
1790[docs: Csaba Henk granted permission to license fuse.py under the same terms as Tahoe-LAFS itself
1791zooko@zooko.com**20090921154659
1792 Ignore-this: c61ba48dcb7206a89a57ca18a0450c53
1793]
1794[setup: mark setup.py as having utf-8 encoding in it
1795zooko@zooko.com**20090920180343
1796 Ignore-this: 9d3850733700a44ba7291e9c5e36bb91
1797]
1798[doc: licensing cleanups
1799zooko@zooko.com**20090920171631
1800 Ignore-this: 7654f2854bf3c13e6f4d4597633a6630
1801 Use nice utf-8 © instead of "(c)". Remove licensing statements on utility modules that have been assigned to allmydata.com by their original authors. (Nattraverso was not assigned to allmydata.com -- it was LGPL'ed -- but I checked and src/allmydata/util/iputil.py was completely rewritten and doesn't contain any line of code from nattraverso.)  Add notes to misc/debian/copyright about licensing on files that aren't just allmydata.com-licensed.
1802]
1803[build-deb.py: run darcsver early, otherwise we get the wrong version later on
1804Brian Warner <warner@lothar.com>**20090918033620
1805 Ignore-this: 6635c5b85e84f8aed0d8390490c5392a
1806]
1807[new approach for debian packaging, sharing pieces across distributions. Still experimental, still only works for sid.
1808warner@lothar.com**20090818190527
1809 Ignore-this: a75eb63db9106b3269badbfcdd7f5ce1
1810]
1811[new experimental deb-packaging rules. Only works for sid so far.
1812Brian Warner <warner@lothar.com>**20090818014052
1813 Ignore-this: 3a26ad188668098f8f3cc10a7c0c2f27
1814]
1815[setup.py: read _version.py and pass to setup(version=), so more commands work
1816Brian Warner <warner@lothar.com>**20090818010057
1817 Ignore-this: b290eb50216938e19f72db211f82147e
1818 like "setup.py --version" and "setup.py --fullname"
1819]
1820[test/check_speed.py: fix shebang line
1821Brian Warner <warner@lothar.com>**20090818005948
1822 Ignore-this: 7f3a37caf349c4c4de704d0feb561f8d
1823]
1824[setup: remove bundled version of darcsver-1.2.1
1825zooko@zooko.com**20090816233432
1826 Ignore-this: 5357f26d2803db2d39159125dddb963a
1827 That version of darcsver emits a scary error message when the darcs executable or the _darcs subdirectory is not found.
1828 This error is hidden (unless the --loud option is passed) in darcsver >= 1.3.1.
1829 Fixes #788.
1830]
1831[de-Service-ify Helper, pass in storage_broker and secret_holder directly.
1832Brian Warner <warner@lothar.com>**20090815201737
1833 Ignore-this: 86b8ac0f90f77a1036cd604dd1304d8b
1834 This makes it more obvious that the Helper currently generates leases with
1835 the Helper's own secrets, rather than getting values from the client, which
1836 is arguably a bug that will likely be resolved with the Accounting project.
1837]
1838[immutable.Downloader: pass StorageBroker to constructor, stop being a Service
1839Brian Warner <warner@lothar.com>**20090815192543
1840 Ignore-this: af5ab12dbf75377640a670c689838479
1841 child of the client, access with client.downloader instead of
1842 client.getServiceNamed("downloader"). The single "Downloader" instance is
1843 scheduled for demolition anyways, to be replaced by individual
1844 filenode.download calls.
1845]
1846[tests: double the timeout on test_runner.RunNode.test_introducer since feisty hit a timeout
1847zooko@zooko.com**20090815160512
1848 Ignore-this: ca7358bce4bdabe8eea75dedc39c0e67
1849 I'm not sure if this is an actual timing issue (feisty is running on an overloaded VM if I recall correctly), or if there is a deeper bug.
1850]
1851[stop making History be a Service, it wasn't necessary
1852Brian Warner <warner@lothar.com>**20090815114415
1853 Ignore-this: b60449231557f1934a751c7effa93cfe
1854]
1855[Overhaul IFilesystemNode handling, to simplify tests and use POLA internally.
1856Brian Warner <warner@lothar.com>**20090815112846
1857 Ignore-this: 1db1b9c149a60a310228aba04c5c8e5f
1858 
1859 * stop using IURI as an adapter
1860 * pass cap strings around instead of URI instances
1861 * move filenode/dirnode creation duties from Client to new NodeMaker class
1862 * move other Client duties to KeyGenerator, SecretHolder, History classes
1863 * stop passing Client reference to dirnode/filenode constructors
1864   - pass less-powerful references instead, like StorageBroker or Uploader
1865 * always create DirectoryNodes by wrapping a filenode (mutable for now)
1866 * remove some specialized mock classes from unit tests
1867 
1868 Detailed list of changes (done one at a time, then merged together)
1869 
1870 always pass a string to create_node_from_uri(), not an IURI instance
1871 always pass a string to IFilesystemNode constructors, not an IURI instance
1872 stop using IURI() as an adapter, switch on cap prefix in create_node_from_uri()
1873 client.py: move SecretHolder code out to a separate class
1874 test_web.py: hush pyflakes
1875 client.py: move NodeMaker functionality out into a separate object
1876 LiteralFileNode: stop storing a Client reference
1877 immutable Checker: remove Client reference, it only needs a SecretHolder
1878 immutable Upload: remove Client reference, leave SecretHolder and StorageBroker
1879 immutable Repairer: replace Client reference with StorageBroker and SecretHolder
1880 immutable FileNode: remove Client reference
1881 mutable.Publish: stop passing Client
1882 mutable.ServermapUpdater: get StorageBroker in constructor, not by peeking into Client reference
1883 MutableChecker: reference StorageBroker and History directly, not through Client
1884 mutable.FileNode: removed unused indirection to checker classes
1885 mutable.FileNode: remove Client reference
1886 client.py: move RSA key generation into a separate class, so it can be passed to the nodemaker
1887 move create_mutable_file() into NodeMaker
1888 test_dirnode.py: stop using FakeClient mockups, use NoNetworkGrid instead. This simplifies the code, but takes longer to run (17s instead of 6s). This should come down later when other cleanups make it possible to use simpler (non-RSA) fake mutable files for dirnode tests.
1889 test_mutable.py: clean up basedir names
1890 client.py: move create_empty_dirnode() into NodeMaker
1891 dirnode.py: get rid of DirectoryNode.create
1892 remove DirectoryNode.init_from_uri, refactor NodeMaker for customization, simplify test_web's mock Client to match
1893 stop passing Client to DirectoryNode, make DirectoryNode.create_with_mutablefile the normal DirectoryNode constructor, start removing client from NodeMaker
1894 remove Client from NodeMaker
1895 move helper status into History, pass History to web.Status instead of Client
1896 test_mutable.py: fix minor typo
1897]
1898[docs: edits for docs/running.html from Sam Mason
1899zooko@zooko.com**20090809201416
1900 Ignore-this: 2207e80449943ebd4ed50cea57c43143
1901]
1902[docs: install.html: instruct Debian users to use this document and not to go find the DownloadDebianPackages page, ignore the warning at the top of it, and try it
1903zooko@zooko.com**20090804123840
1904 Ignore-this: 49da654f19d377ffc5a1eff0c820e026
1905 http://allmydata.org/pipermail/tahoe-dev/2009-August/002507.html
1906]
1907[docs: relnotes.txt: reflow to 63 chars wide because google groups and some web forms seem to wrap to that
1908zooko@zooko.com**20090802135016
1909 Ignore-this: 53b1493a0491bc30fb2935fad283caeb
1910]
1911[docs: about.html: fix English usage noticed by Amber
1912zooko@zooko.com**20090802050533
1913 Ignore-this: 89965c4650f9bd100a615c401181a956
1914]
1915[docs: fix mis-spelled word in about.html
1916zooko@zooko.com**20090802050320
1917 Ignore-this: fdfd0397bc7cef9edfde425dddeb67e5
1918]
1919[TAG allmydata-tahoe-1.5.0
1920zooko@zooko.com**20090802031303
1921 Ignore-this: 94e5558e7225c39a86aae666ea00f166
1922]
1923Patch bundle hash:
19245b7a2c4d4e16e8207fef071529b80b2ec68b0f15