Sounds like you might be having a lot of distributed filesystem fun! This was something I kept meaning to look into properly at a previous job, but never found the time. I did get as far as working out that the most promising candidates seemed to be:
HDFS
A Java-based distributed filesystem for Hadoop. It communicates over TCP/IP with its own RPC protocol and implements its own replication across machines (so there's no need for RAID), but it isn't fully POSIX-compliant and is heavily optimised for large files (many MB and up) and streaming access rather than small random reads and writes.
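To give a flavour of what talking to HDFS from Python looks like, here's a minimal sketch using the third-party hdfs (HdfsCLI) package over the WebHDFS REST interface - the NameNode address, port, user and paths are placeholders for illustration:

```
# Minimal sketch: HDFS access via WebHDFS using the third-party "hdfs"
# (HdfsCLI) package. Hostname, port, user and paths are placeholders.
from hdfs import InsecureClient

client = InsecureClient('http://namenode:50070', user='hadoop')

# Write a file; HDFS splits it into large blocks and replicates each
# block across several DataNodes, which is why per-machine RAID is
# unnecessary.
client.write('/tmp/example.txt', data=b'hello from HDFS', overwrite=True)

# Read it back.
with client.read('/tmp/example.txt') as reader:
    print(reader.read())
```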
Ceph
Free software storage platform with libraries available for most major languages, including Python. It can also offer an S3-compatible interface via its RADOS gateway. It offers seamless replication and is designed to keep administrative overhead low. A POSIX-compliant filesystem (CephFS) can be layered on top of the underlying object store; the kernel client has been included in Linux since 2.6.34, and a FUSE-based client also exists. Looks like a clever system, but it might require a fair number of servers - I'm not sure whether the metadata and storage functions can be colocated on the same machines, for example.
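As a rough illustration of those Python bindings, here's a minimal sketch that uses librados to store and fetch an object directly in a pool - the pool name and config file path are assumptions for the example:

```
# Minimal sketch: talking to the RADOS object store via Ceph's librados
# Python binding. The pool name and conf path are placeholders.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # An I/O context is bound to a pool; the replication policy is a
    # property of the pool, so the client just reads and writes objects.
    ioctx = cluster.open_ioctx('mypool')
    try:
        ioctx.write_full('greeting', b'hello from RADOS')
        print(ioctx.read('greeting'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```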
GlusterFS
Now owned by Red Hat, and formed of storage servers and clients which communicate over a custom TCP/IP-based protocol. Data can be accessed through a client library, or via a FUSE-based filesystem interface. I'm not too sure on the technical details, but this system looks like one of the simplest - I believe it relies on configuration shared between the servers rather than a centralised metadata cluster.
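One nice consequence of the FUSE interface is that, once a volume is mounted, applications need no Gluster-specific code at all - replication happens behind the scenes. A sketch, assuming a volume already mounted at the hypothetical path /mnt/gluster:

```
# Once a GlusterFS volume is mounted via FUSE (the mount point below is
# hypothetical), applications just use ordinary POSIX file operations and
# the replication across storage servers is handled for them.
from pathlib import Path

mount = Path('/mnt/gluster')  # hypothetical FUSE mount point

path = mount / 'example.txt'
path.write_text('hello from GlusterFS\n')
print(path.read_text())
```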
Lustre
Frankly I don't know a lot about this one, but it seems to be quite popular in supercomputing, so its performance must be reasonable. It uses separate storage and metadata nodes, as many of these systems do, and client support appears to rely on third-party kernel modules, which might be a pain to keep working across kernel upgrades.
As well as these solutions, you could also consider whether a standard network filesystem (e.g. NFS) combined with some low-latency synchronisation (e.g. BitTorrent Sync) might be good enough for practical purposes and less effort to maintain. I seem to remember that autofs can handle map entries listing multiple hosts, presumably failing over to another host if the connection to the first one fails. This isn't something I've ever had to try, however - the only time I worked with failing over between NFS hosts, we were dealing with them at the raw NFS protocol level.
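From memory, the autofs multiple-host support uses replicated map entries that look something like the sketch below - the map, mount point and server names are all hypothetical, so check the auto.master(5) man page for the exact syntax:

```
# Hypothetical indirect map, e.g. /etc/auto.data, referenced from
# auto.master with a line like:
#   /data  /etc/auto.data
#
# A replicated entry listing two NFS servers: autofs probes the hosts and
# mounts from one that responds, so a dead first server shouldn't block
# the mount.
shared  -fstype=nfs  nfs1,nfs2:/export/shared
```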