Thursday, June 2, 2016

s3fs-fuse

Maximum file size = 64 GB

s3fs is stable and is used in a number of production environments, e.g., rsync backups to S3.
s3fs works with rsync (as of svn r43). As of r152, s3fs uses x-amz-copy-source for efficient updates of mode, mtime, and uid/gid.
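
For example, once the bucket is mounted (as /opt/oss1 later in this post), an rsync backup into it is just a normal rsync run; the source path here is only illustrative:

$ rsync -av /data/to-backup/ /opt/oss1/backup/   # incremental copy into the bucket through the FUSE mount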


enable_content_md5 (default is disabled)

Verify data uploaded without multipart by sending the Content-MD5 header.
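
For example, the kind of mount used later in this post could enable that check as below (bucket and endpoint are the ones from my setup; since enable_content_md5 only covers non-multipart uploads, nomultipart is combined with it here):

$ ossfs anno-sge /opt/oss1 -ourl=http://vpc100-oss-cn-beijing.aliyuncs.com -o enable_content_md5,nomultipart,use_cache=/mnt/xvdb1/tmp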

$ fusermount -uz /opt/oss1



$ ossfs anno-sge /opt/oss1 -ourl=http://vpc100-oss-cn-beijing.aliyuncs.com -o multireq_max=5,use_cache=/mnt/xvdb1/tmp
$ ossfs anno-sge /opt/oss1 -ourl=http://vpc100-oss-cn-beijing.aliyuncs.com -o nomultipart,use_cache=/mnt/xvdb1/tmp


s3fs has a caching mechanism: you can enable local file caching to minimize downloads.
The directory specified by the (optional) use_cache option holds a local file cache that s3fs maintains automatically, e.g., -ouse_cache=/mnt/xvdb1/tmp.
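
To confirm the cache is actually being used, read a file through the mount and then look under the cache directory; s3fs/ossfs mirrors objects under <cache dir>/<bucket>/ (the exact layout may vary by version), and the cache is not pruned automatically, so keep an eye on its size:

$ ls -lh /mnt/xvdb1/tmp/anno-sge/        # cached object bodies
$ du -sh /mnt/xvdb1/tmp/anno-sge/        # total cache size on /mnt/xvdb1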


s3fs supports multipart requests (several requests sent in parallel), so this problem likely depends on how many requests run in parallel at once.
If you can, try setting small values for the multireq_max and parallel_count options (an example follows the option list below).


  • nomultipart
    • disable multipart uploads.
  • multireq_max (default="500")
    • maximum number of parallel requests for listing objects.
  • parallel_count (default="5")
    • number of parallel requests for downloading/uploading large objects. s3fs uploads large objects (over 20 MB) with multipart POST requests sent in parallel; this option limits how many requests s3fs issues at once.
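
Following that advice, a throttled mount might look like the line below; multireq_max=5 matches the earlier example, while parallel_count=3 is just an illustrative small value:

$ ossfs anno-sge /opt/oss1 -ourl=http://vpc100-oss-cn-beijing.aliyuncs.com -o multireq_max=5,parallel_count=3,use_cache=/mnt/xvdb1/tmp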


https://github.com/s3fs-fuse/s3fs-fuse/issues/94

https://github.com/s3fs-fuse/s3fs-fuse/issues/152




$ cat s3fs-watchdog.sh

#!/bin/bash
#
# s3fs-watchdog.sh
#
# Run from the root user's crontab to keep an eye on s3fs which should always
# be mounted.
#
# Note:  If getting the amazon S3 credentials from environment variables
#   these must be entered in the actual crontab file (otherwise use one
#   of the s3fs other ways of getting credentials).
#
# Example:  To run it once every minute, getting credentials from environment
# variables enter this via "sudo crontab -e":
#
#   AWSACCESSKEYID=XXXXXXXXXXXXXX
#   AWSSECRETACCESSKEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
#   * * * * * /root/s3fs-watchdog.sh
#

NAME=ossfs
BUCKET=anno-sge
MOUNTPATH=/opt/oss1
MOUNT=/bin/mount
UMOUNT=/bin/umount
NOTIFY=whg@anno.com
NOTIFYCC=whg@anno.com
GREP=/bin/grep
PS=/bin/ps
NOP=/bin/true
DATE=/bin/date
MAIL=/usr/bin/mail
RM=/bin/rm

$PS -ef|$GREP -v grep|$GREP $NAME|$GREP $BUCKET >/dev/null 2>&1
case "$?" in
   0)
   # It is running in this case so we do nothing.
   $NOP
   ;;
   1)
   echo "$NAME is NOT RUNNING for bucket $BUCKET. Remounting $BUCKET with $NAME and sending notices."
   $UMOUNT $MOUNTPATH >/dev/null 2>&1
   $MOUNT $MOUNTPATH >/tmp/watchdogmount.out 2>&1
   NOTICE=/tmp/watchdog.txt
   echo "$NAME for $BUCKET was not running and was started on `$DATE`" > $NOTICE
   $MAIL -n -s "$BUCKET $NAME mount point lost and remounted" -c $NOTIFYCC $NOTIFY < $NOTICE
   $RM -f $NOTICE
   ;;
esac

exit
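
To put the watchdog in place (assuming the script is saved as /root/s3fs-watchdog.sh, as in its header comment):

$ sudo chmod +x /root/s3fs-watchdog.sh
$ sudo crontab -e    # add the two credential lines and the "* * * * *" entry shown in the header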

$ cat /etc/fstab

ossfs#anno-sge  /opt/oss1 fuse _netdev,url=http://vpc100-oss-cn-beijing.aliyuncs.com,uid=1001,gid=1001,max_stat_cache_size=100000000,nomultipart,use_cache=/mnt/xvdb1/tmp,allow_other,user,exec  0 0

Make sure the file /etc/passwd-ossfs exists and has permissions 640, and that the mounting user is in the same group as the file's owner.
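
A minimal sketch of setting that up (the key values are placeholders; bucket:key-id:key-secret is the format ossfs expects in /etc/passwd-ossfs, and the group name here is just an example for the mounting user's group):

$ echo "anno-sge:YOUR_ACCESS_KEY_ID:YOUR_ACCESS_KEY_SECRET" | sudo tee /etc/passwd-ossfs
$ sudo chown root:users /etc/passwd-ossfs    # the file's group must include the mounting user
$ sudo chmod 640 /etc/passwd-ossfs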

$ mount /opt/oss1
fusermount: failed to open /etc/fuse.conf: Permission denied

fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

This problem can be fixed by adding the user to the fuse group and then logging in again:

sudo addgroup <username> fuse
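
If the second error about allow_other still appears after re-login, the other half of the fix is to enable user_allow_other in /etc/fuse.conf (uncomment it if the line is already there):

$ sudo sh -c 'echo user_allow_other >> /etc/fuse.conf'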
