Storage@home

Storage@home
Developer: Stanford University / Adam Beberg
Released: 2009-09-15
Latest Release Version: 1.05
Latest Release Date: 2009-12-02
Operating System: Microsoft Windows, Mac OS X, Linux[1]
Platform: x86
Language: English
Genre: Distributed Storage
License: Proprietary

Storage@home was a distributed data store project designed to store massive amounts of scientific data across a large number of volunteer machines.[2] The project was developed by some of the Folding@home team at Stanford University, from about 2007 through 2011.[3]

Function

Scientists such as those running Folding@home generate massive amounts of data that must be stored and backed up, which is very expensive.[2] Traditionally such data is kept on RAID servers, but at this scale those methods become impractical for research budgets.[3] Professor Vijay Pande's research group was already storing hundreds of terabytes of scientific data,[2] and Pande and student Adam Beberg drew on their experience with Folding@home to begin work on Storage@home.[3] The design was based on the distributed file system known as Cosm and on the workload and analysis needs of Folding@home.[3]

Although Folding@home volunteers could easily participate in Storage@home, building a robust network required much more disk space per user than Folding@home. Each volunteer donated 10 GB of storage space, which held encrypted files, and earned points as a reward for reliable storage.[3] Every file saved on the system was replicated four times, with its pieces spread across ten geographically distant hosts.[3] [4] Replicas were also distributed across different operating systems and time zones. If the servers detected that a contributor had disappeared, the data blocks held by that user were automatically duplicated onto other hosts. Ideally, users would participate for at least six months and would alert the Storage@home servers before changes on their end, such as a planned move of a machine or a bandwidth downgrade.

Data stored on Storage@home was maintained through redundancy and monitoring, with repairs performed as needed.[3] Through careful application of redundancy, encryption, digital signatures, and automated monitoring and correction, large quantities of data could be stored reliably and retrieved easily.[2] [3] This ensured a robust network that would lose as little data as possible.[4]
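The replica-placement and repair behavior described above can be sketched roughly as follows. This is an illustrative model only, not the project's actual code: the region labels, host names, and both functions are assumptions standing in for the geographic and time-zone diversity the system aimed for.

```python
import random

REPLICA_COUNT = 4  # each file was stored four times [3]

def place_block(hosts, rng=random):
    """Pick REPLICA_COUNT hosts, each in a different region.

    `hosts` maps a host id to a region label; the labels stand in
    for the geographic / time-zone diversity of the real network.
    """
    by_region = {}
    for host, region in hosts.items():
        by_region.setdefault(region, []).append(host)
    if len(by_region) < REPLICA_COUNT:
        raise ValueError("need at least %d distinct regions" % REPLICA_COUNT)
    # choose distinct regions, then one host from each
    chosen = rng.sample(sorted(by_region), REPLICA_COUNT)
    return [rng.choice(sorted(by_region[r])) for r in chosen]

def repair(placement, failed_host, hosts):
    """Replace a vanished host with a live one in a region not already
    represented, restoring the replica count after a failure."""
    survivors = [h for h in placement if h != failed_host]
    used = {hosts[h] for h in survivors}
    candidates = [h for h, region in sorted(hosts.items())
                  if h != failed_host and h not in survivors
                  and region not in used]
    if not candidates:
        raise RuntimeError("no suitably diverse host available for repair")
    return survivors + [candidates[0]]
```

For example, with six volunteer hosts spread over four regions, `place_block` returns four hosts in four distinct regions, and if one of them disappears, `repair` promotes a spare host so the block is again held by four region-diverse replicas.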

The Storage Resource Broker was the storage project most closely related to Storage@home.[3]

Status

Storage@home first became available on September 15, 2009, in a testing phase. This stage monitored availability and other basic statistics on users' machines, data that would be used to build a robust storage system capable of holding massive amounts of scientific data.[5] Despite initial plans for further stages, however, the project became inactive later that year.[6] On April 11, 2011, Pande stated that his group had no active plans for Storage@home.[7]

Notes and References

  1. "Storage@home Installation". Folding@home web site. September 12, 2009. Retrieved October 28, 2016.
  2. "General Information about Storage@home". 2009. Retrieved September 17, 2011.
  3. Adam L. Beberg and Vijay S. Pande (2007). "Storage@home: Petascale Distributed Storage". 2007 IEEE International Parallel and Distributed Processing Symposium. pp. 1–6. doi:10.1109/IPDPS.2007.370672. ISBN 978-1-4244-0909-9. CiteSeerX 10.1.1.421.567. S2CID 12487615. http://cs.stanford.edu/people/beberg/Storage@home2007.pdf
  4. "The plan for splitting up data in Storage@home". 2009. Retrieved September 17, 2011.
  5. Vijay Pande (September 15, 2009). "First stage of Storage@home roll out". Retrieved December 14, 2011.
  6. "Storage@home FAQ". September 12, 2009. Archived from the original on August 15, 2011 (https://web.archive.org/web/20110815163149/http://en.fah-addict.net/articles/articles-6-18+faq.php). Retrieved October 29, 2016.
  7. Vijay Pande (April 11, 2011). "Re: Storage@Home". Retrieved October 29, 2016.