Here's a basic plan for how to configure "managed introducers". The basic
idea is that we have two types of grids: managed and unmanaged. The current
code implements "unmanaged" grids: a complete free-for-all, in which anyone
who can get to the Introducer can get to all the servers, and anyone who
can get to a server can use as much space as they want. In this mode, each
client uses its 'introducer.furl' to connect to the Introducer, which
serves two purposes: telling the client about all the servers it can use,
and telling all other clients about the server being offered by the new
node.

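As a rough sketch of those two purposes (plain Python standing in for the
Foolscap remote interface; the class and method names are illustrative,
not the real API):

    class Introducer:
        def __init__(self):
            self.announcements = {}  # nodeid -> storage server FURL
            self.subscribers = []    # connected clients' callbacks

        def publish(self, nodeid, server_furl):
            # second purpose: tell all other clients about the new server
            self.announcements[nodeid] = server_furl
            for notify in self.subscribers:
                notify(nodeid, server_furl)

        def subscribe(self, notify):
            # first purpose: tell the new client about every known server
            self.subscribers.append(notify)
            for nodeid, furl in self.announcements.items():
                notify(nodeid, furl)
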
The "managed introducer" approach is for an environment where you want to
be able to keep track of who is using what, and to prevent unmanaged
clients from using any storage space.

In this mode, we have an Account Manager instead of an Introducer. Each
client gets a special, distinct facet on this account manager: this gives
them control over their account, and allows them to access the storage
space enabled by virtue of having that account. The facet's FURL is stored
in "my-account.furl", which replaces "introducer.furl" for this purpose.

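A hedged sketch of what such a per-client Account facet might look like
(get_storage_furls and introduce are hypothetical names, not settled API):

    class Account:
        # Per-client facet on the Account Manager, reached through the
        # FURL stored in "my-account.furl".
        def __init__(self, account_number, manager):
            self.account_number = account_number  # system-wide small int
            self.manager = manager

        def get_storage_furls(self, known_nodeids):
            # ask for introductions to servers this client doesn't
            # already know about (the flow is sketched further below)
            return self.manager.introduce(self.account_number,
                                          known_nodeids)
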
In addition, the servers get an "account-manager.furl" instead of an
"introducer.furl". The servers connect to this object and offer themselves
as storage servers. The Account Manager remembers a list of all the
currently-available storage servers.

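The registration side could be as simple as this (again an illustrative
sketch, with plain Python standing in for remote references):

    class AccountManager:
        def __init__(self):
            self.servers = {}  # nodeid -> remote reference to the server

        def register_server(self, nodeid, server):
            # servers connect via "account-manager.furl" and offer
            # themselves; the manager just remembers who is available
            self.servers[nodeid] = server
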
When a client wants more storage servers (perhaps polling periodically,
and perhaps using some sort of minimal update protocol, like Bloom
filters), it contacts its Account object and asks for introductions to
storage servers. This causes the Account Manager to go to all servers that
the client doesn't already know about and tell them "generate a FURL to a
facet for the benefit of client 123. Give me that FURL." The Account
Manager then sends the list of new FURLs to the client, which adds them to
its peerlist. This peerlist contains tuples of (nodeid, FURL).

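Continuing the AccountManager sketch from above, the introduction flow
might look like this (create_facet_furl is a hypothetical remote call on
the storage server):

        # a method on the AccountManager sketched earlier
        def introduce(self, account_number, known_nodeids):
            new_furls = []
            for nodeid, server in self.servers.items():
                if nodeid in known_nodeids:
                    continue
                # "generate a FURL to a facet for the benefit of
                # client 123. Give me that FURL."
                furl = server.create_facet_furl(account_number)
                new_furls.append((nodeid, furl))
            return new_furls  # the client merges these into its peerlist
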
The Storage Server will grant a facet to anyone the Account Manager tells
it to. The Storage Server is really just updating a table that maps from a
random number (the FURL's swissnum) to the system-wide small-integer
account number. The FURL will dereference to an object that adds
accountNumber=123 to all write() calls, so that the account can be
recorded in leases.

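A minimal sketch of that table and the facet object (the FURL shape and
all names are illustrative; a real implementation would register the facet
with Foolscap rather than format a string):

    import secrets

    class StorageServer:
        def __init__(self):
            self.facets = {}  # swissnum -> account number

        def create_facet_furl(self, account_number):
            # granting a facet is just a table update: map a fresh
            # random swissnum to the client's account number
            swissnum = secrets.token_urlsafe(16)
            self.facets[swissnum] = account_number
            return "pb://TUBID@HOST:PORT/%s" % swissnum  # illustrative

        def write(self, storage_index, data, account_number):
            pass  # store the data; record account_number in the lease

    class StorageFacet:
        # what a facet FURL dereferences to: it stamps the account
        # number onto every write() so leases can record ownership
        def __init__(self, server, account_number):
            self.server = server
            self.account_number = account_number

        def write(self, storage_index, data):
            return self.server.write(storage_index, data,
                                     account_number=self.account_number)
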
In this approach, the Account Manager is a bottleneck only for the initial
contact: the clients all remember their list of Storage Server FURLs for a
long time. Clients must contact their Account to take advantage of new
servers, so the update traffic for this needs to be examined. I can
imagine this working reasonably well up to a few hundred servers and say
100k clients if the clients are only asking about new servers once a day
(100,000 queries spread over 86,400 seconds is roughly one query per
second).
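The back-of-envelope arithmetic behind that rate:

    clients = 100000
    seconds_per_day = 24 * 60 * 60    # 86,400
    print(clients / seconds_per_day)  # ~1.16 queries per second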