
View Post

Poster: dunno Date: Jun 16, 2005 1:55pm
Forum: petabox Subject: Re: Filesystem

I assume the UDP broadcast system for finding which nodes have what is easier to implement than, say, a couple of small dedicated boxes with a database of all the file locations... but it seems that unless you have a small number of large files, the UDP system... well, I'll just say that it looks like a timebomb.
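To make the trade-off concrete, here is a minimal sketch of the kind of UDP query-and-reply the post is describing. This is not the archive's actual protocol; the message format, port, and filename are made up, and it targets localhost so it stays runnable (a real deployment would enable SO_BROADCAST and send to the subnet's broadcast address).

```python
# Hypothetical "who has this file?" discovery over UDP: a storage node
# answers WHO_HAS queries for files it holds; a client asks and collects
# replies. Every query hits every node -- which is why this scales badly
# once there are many small files and many nodes.
import json
import socket
import threading
import time

PORT = 41414  # arbitrary port chosen for this sketch

def node(files, stop):
    """A storage node: answers WHO_HAS queries for files it holds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", PORT))
    s.settimeout(0.2)
    while not stop.is_set():
        try:
            data, addr = s.recvfrom(1024)
        except socket.timeout:
            continue
        query = json.loads(data)
        if query["op"] == "WHO_HAS" and query["file"] in files:
            s.sendto(json.dumps({"have": query["file"]}).encode(), addr)
    s.close()

def who_has(filename, timeout=0.5):
    """Ask which nodes hold a file; return the addresses that answered."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(json.dumps({"op": "WHO_HAS", "file": filename}).encode(),
             ("127.0.0.1", PORT))
    holders = []
    try:
        while True:
            data, addr = s.recvfrom(1024)
            holders.append(addr)
    except socket.timeout:
        pass
    s.close()
    return holders

stop = threading.Event()
t = threading.Thread(target=node, args=({"foo.warc.gz"}, stop))
t.start()
time.sleep(0.2)  # let the node bind before we query
found = who_has("foo.warc.gz")
missing = who_has("no-such-file.txt")
stop.set()
t.join()
```

The dedicated-database alternative the poster mentions replaces this per-query flood with one indexed lookup, at the cost of keeping that index consistent.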

just a spur-of-the-moment thought, but you could have a 2-tier data system, where the first tier is JBOD and is generally the front end, and a second tier that has the same dataset as the first, except that it uses RAID 5 at some level... maybe it could also be staggered time-wise: the backup tier could be 2 days behind the first tier, with a fairly reliable pool keeping the changelog between the first tier and the 2-day-old backup tier... that way you'd have all your information in two places, and you'd have some measure of protection against virus-type corruption that bypasses safeguards like redundancy... ah well.

Reply

Poster: foundation Date: Jul 15, 2005 5:08am
Forum: petabox Subject: Re: Filesystem

At my company (not the archive) we're implementing a large storage system for images (almost entirely write once, read many for some, and read almost never for the rest),
and we are looking at MogileFS. MogileFS uses MySQL to track file locations, and automatically maintains the number of copies required. So you can say: I want there to be 2 copies of this data at all times, and three copies of this other data. And when you lose a server, it detects that a copy is inaccessible and starts replicating a new one. It does the transfers over HTTP or NFS. Because we have written the front end, we don't need a POSIX-compliant file system; we can use the client libraries. Something to consider for people implementing large systems, and a way to avoid RAID (really it's RAID-ish over the network).
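The bookkeeping described above can be sketched in plain Python. This is not MogileFS's actual schema or client API (which live in MySQL and the MogileFS client libraries); it just shows the idea: each file has a desired replica count, and losing a host immediately reveals which files need re-replication.

```python
# Illustrative replica tracking (hypothetical data, not MogileFS's schema):
# how many copies each file should have, and where the copies live.
desired = {"thumb.jpg": 2, "master.tiff": 3}
locations = {
    "thumb.jpg": {"host1", "host2"},
    "master.tiff": {"host1", "host2", "host3"},
}

def under_replicated(dead_host):
    """Drop a failed host and list files now below their target count."""
    for locs in locations.values():
        locs.discard(dead_host)
    return [f for f, want in desired.items() if len(locations[f]) < want]

todo = under_replicated("host3")
print(todo)  # ['master.tiff'] -- this file must be copied to a new host
```

The "RAID-ish over the network" comment follows from this: redundancy comes from whole-file copies spread across machines and tracked centrally, rather than from parity stripes inside one box.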