Apache Hama
Apache Hama is a distributed computing framework based on the bulk synchronous parallel (BSP) computing model, intended for massive scientific computations such as matrix, graph and network algorithms. Originally a sub-project of Hadoop, it became an Apache Software Foundation top-level project in 2012. It was created by Edward J. Yoon, who named it; "Hama" means hippopotamus in Yoon's native Korean, following the trend of naming Apache projects after animals. Hama was inspired by Google's Pregel large-scale graph computing framework, described in 2010. When executing graph algorithms, Hama was reported to run up to fifty times faster than Hadoop.
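In the BSP model, each task runs a series of supersteps in which it computes locally, exchanges messages with its peers, and then waits at a barrier before the next superstep begins. The Java sketch below illustrates this shape using Hama's documented BSP API; it is a minimal example, and the exact class and method signatures should be checked against the Hama release in use.

import java.io.IOException;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hama.bsp.BSP;
import org.apache.hama.bsp.BSPPeer;
import org.apache.hama.bsp.sync.SyncException;

// Illustrative only: each task greets every peer in one superstep,
// then reads the greetings it received in the next superstep.
public class HelloBSP extends BSP<NullWritable, NullWritable, Text, Text, Text> {

  @Override
  public void bsp(BSPPeer<NullWritable, NullWritable, Text, Text, Text> peer)
      throws IOException, SyncException, InterruptedException {

    // Local computation: send a message to every peer in the cluster.
    for (String other : peer.getAllPeerNames()) {
      peer.send(other, new Text("Hello from " + peer.getPeerName()));
    }

    // Barrier synchronization: ends the superstep; messages become
    // visible to their recipients only after all peers reach this point.
    peer.sync();

    // Next superstep: drain the received messages and write them out.
    Text msg;
    while ((msg = peer.getCurrentMessage()) != null) {
      peer.write(new Text(peer.getPeerName()), msg);
    }
  }
}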
The project was retired in April 2020, and its resources are now available through the Apache Attic. Yoon cited installation difficulties, scalability issues, and a difficult programming model as reasons for its lack of adoption.
Architecture
Hama consists of three major components: BSPMaster, GroomServers and Zookeeper.
BSPMaster
BSPMaster is responsible for:
- Maintaining groom server status
- Controlling super steps in a cluster
- Maintaining job progress information
- Scheduling jobs and assigning tasks to groom servers
- Disseminating execution class across groom servers
- Controlling faults
- Providing users with the cluster control interface
Each time the BSPMaster receives a heartbeat message, it updates that groom server's status; the master uses groom server status to assign tasks to idle groom servers, and it returns a heartbeat response containing the assigned tasks and any other actions the groom server has to perform. Hama uses a FIFO job scheduler and a simple task-assignment algorithm.
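A minimal sketch of that heartbeat exchange is shown below. The class and method names are hypothetical and only illustrate the control flow described above (status update, task assignment from a FIFO queue, and a response carrying the assigned work); they are not Hama's actual internal interfaces.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch of the BSPMaster's heartbeat handling.
class MasterSketch {
  // Latest reported status per groom server.
  private final Map<String, GroomStatus> groomStatus = new HashMap<>();
  // FIFO queue of pending tasks, mirroring the simple FIFO scheduler.
  private final Queue<Task> pendingTasks = new LinkedList<>();

  // Called whenever a groom server sends a heartbeat.
  HeartbeatResponse heartbeat(String groomName, GroomStatus status) {
    groomStatus.put(groomName, status);      // bring the groom's status up to date

    List<Task> assigned = new ArrayList<>();
    int freeSlots = status.freeTaskSlots();
    while (freeSlots-- > 0 && !pendingTasks.isEmpty()) {
      assigned.add(pendingTasks.poll());     // assign the oldest pending task first
    }
    // The response carries the newly assigned tasks plus any other
    // actions the groom server must perform.
    return new HeartbeatResponse(assigned);
  }

  // Minimal placeholder types for the sketch.
  record GroomStatus(int freeTaskSlots) {}
  record Task(String jobId, int taskId) {}
  record HeartbeatResponse(List<Task> tasks) {}
}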