Quick Start Guide¶
Building¶
Build Servers¶
In CFS, the server consists of the resource manager, metanode, and datanode, which are compiled into a single binary for deployment convenience.
Building the CFS server depends on RocksDB; install RocksDB v5.9.2 or later. It is recommended to build it as a static library with make static_lib .
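As a concrete sketch, the RocksDB dependency can be fetched and built roughly as follows. The release tag below is an assumption for illustration; any v5.9.2 or later tag should work.

```shell
# Fetch and build RocksDB as a static library.
# The tag is an example; any v5.9.2+ release should work.
ROCKSDB_VER=v5.18.3
echo "building RocksDB ${ROCKSDB_VER}"
git clone --depth 1 -b "${ROCKSDB_VER}" https://github.com/facebook/rocksdb.git \
  && make -C rocksdb static_lib \
  || echo "RocksDB build skipped (sources unavailable)"
```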
The CFS server is built with the following command:
cd cmd; sh build.sh
Build Client¶
The CFS client is built with the following command:
cd client; sh build.sh
Deployment¶
Start Resource Manager¶
nohup ./cmd -c master.json &
Sample master.json is shown as follows:
{
  "role": "master",
  "ip": "192.168.31.173",
  "port": "80",
  "prof": "10088",
  "id": "1",
  "peers": "1:192.168.31.173:80,2:192.168.31.141:80,3:192.168.30.200:80",
  "retainLogs": "20000",
  "logDir": "/export/Logs/cfs/master",
  "logLevel": "info",
  "walDir": "/export/Logs/cfs/raft",
  "storeDir": "/export/cfs/rocksdbstore",
  "consulAddr": "http://consul.prometheus-cfs.local",
  "exporterPort": 9510,
  "clusterName": "cfs"
}
For detailed explanations of master.json, please refer to Resource Manager.
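Since cmd reads its configuration as JSON, a quick syntax check before launch can catch malformed files early. A minimal sketch, assuming python3 is installed (any JSON validator works equally well; the file name follows the sample above):

```shell
# Validate that a CFS config file parses as JSON before starting the daemon.
# Assumes python3 is available; any JSON validator works equally well.
check_conf() {
  python3 -m json.tool "$1" > /dev/null 2>&1 \
    && echo "ok: $1 is valid JSON" \
    || echo "error: $1 is not valid JSON"
}
check_conf master.json
```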
Start Metanode¶
nohup ./cmd -c meta.json &
Sample meta.json is shown as follows:
{
  "role": "metanode",
  "listen": "9021",
  "prof": "9092",
  "logLevel": "debug",
  "metaDir": "/export/cfs/metanode_meta",
  "logDir": "/export/Logs/cfs/metanode",
  "raftDir": "/export/cfs/metanode_raft",
  "raftHeartbeatPort": "9093",
  "raftReplicatePort": "9094",
  "consulAddr": "http://consul.prometheus-cfs.local",
  "exporterPort": 9511,
  "masterAddrs": [
    "192.168.31.173:80",
    "192.168.31.141:80",
    "192.168.30.200:80"
  ]
}
For detailed explanations of meta.json, please refer to Meta Subsystem.
Start Datanode¶
Prepare data directories
Recommendation: using independent disks achieves better performance.
Disk preparation
1.1 Check available disks
fdisk -l
1.2 Build local Linux file system on the selected devices
mkfs.xfs -f /dev/sdx
1.3 Make mount point
mkdir /data0
1.4 Mount the device on mount point
mount /dev/sdx /data0
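To have the mount survive reboots, an /etc/fstab entry such as the following can be added. This is a sketch using the device and mount point from the steps above; adjust both to your actual disks.

```
/dev/sdx  /data0  xfs  defaults  0 0
```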
Start datanode
nohup ./cmd -c datanode.json &
Sample datanode.json is shown as follows:
{
  "role": "datanode",
  "port": "6000",
  "prof": "6001",
  "logDir": "/export/Logs/datanode",
  "logLevel": "info",
  "raftHeartbeat": "9095",
  "raftReplica": "9096",
  "consulAddr": "http://consul.prometheus-cfs.local",
  "exporterPort": 9512,
  "masterAddr": [
    "192.168.31.173:80",
    "192.168.31.141:80",
    "192.168.30.200:80"
  ],
  "rack": "",
  "disks": [
    "/data0:107374182400"
  ]
}
For detailed explanations of datanode.json, please refer to Data Subsystem.
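The numeric suffix in each disks entry is a byte count (see Data Subsystem for its exact meaning); the sample value works out to 100 GiB:

```shell
# 100 GiB expressed in bytes, matching the sample disks entry "/data0:107374182400".
echo $((100 * 1024 * 1024 * 1024))   # prints 107374182400
```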
Create Volume¶
By default, only a few data partitions are allocated upon volume creation; they will be expanded dynamically according to actual usage. For performance evaluation, it is better to preallocate enough data partitions.
curl -v "http://127.0.0.1/admin/createVol?name=test&capacity=100&owner=cfs"
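For scripting, the same request can be assembled from shell variables. The values below follow the sample above; the master address is an assumption for a local setup.

```shell
# Assemble the createVol request from parameters (values from the sample).
MASTER=127.0.0.1
NAME=test CAP=100 OWNER=cfs
URL="http://${MASTER}/admin/createVol?name=${NAME}&capacity=${CAP}&owner=${OWNER}"
echo "$URL"
# then: curl -v "$URL"
```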
Mount Client¶
Run
modprobe fuse
to insert the FUSE kernel module. Run
yum install -y fuse
to install libfuse. Run
nohup client -c fuse.json &
to start a client. Sample fuse.json is shown as follows:
{
  "mountPoint": "/mnt/fuse",
  "volName": "test",
  "owner": "cfs",
  "masterAddr": "192.168.31.173:80,192.168.31.141:80,192.168.30.200:80",
  "logDir": "/export/Logs/cfs",
  "profPort": "10094",
  "logLevel": "info"
}
For detailed explanations of fuse.json, please refer to Client.
Note that an end user can start more than one client on a single machine, as long as the mount points are different.
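For example, a second client on the same machine could reuse the sample fuse.json with only the per-client fields changed. The mount point, log directory, and profiling port below are illustrative assumptions:

```
{
  "mountPoint": "/mnt/fuse2",
  "volName": "test",
  "owner": "cfs",
  "masterAddr": "192.168.31.173:80,192.168.31.141:80,192.168.30.200:80",
  "logDir": "/export/Logs/cfs-client2",
  "profPort": "10095",
  "logLevel": "info"
}
```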
Upgrading¶
- freeze the cluster
curl -v "http://127.0.0.1/cluster/freeze?enable=true"
- upgrade each module
- disable the freeze flag
curl -v "http://127.0.0.1/cluster/freeze?enable=false"
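The freeze and unfreeze calls above can be wrapped in small helpers for an upgrade script. The master address is an assumption; the endpoints are the ones shown above.

```shell
# Helpers wrapping the cluster freeze-flag endpoints used during upgrades.
MASTER=${MASTER:-127.0.0.1}
freeze_cluster()   { curl -v "http://${MASTER}/cluster/freeze?enable=true"; }
unfreeze_cluster() { curl -v "http://${MASTER}/cluster/freeze?enable=false"; }
echo "upgrade helpers defined for master ${MASTER}"
```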