
Hacker Public Radio

Your ideas, projects, opinions - podcasted.

New episodes every weekday Monday through Friday.


hpr1944 :: sshfs - Secure SHell FileSystem

How to mount remote storage using sshfs


Hosted by FiftyOneFifty (R.I.P.) on Thursday, 2016-01-14. This show is flagged as Explicit and is released under a CC-BY-SA license.
Tags: sshfs, shell commands. Comments: 5.
The show is available on the Internet Archive at: https://archive.org/details/hpr1944

Listen in ogg, spx, or mp3 format.

Duration: 00:31:01

Series: general.

This is a topic Ken Fallon has been wanting someone to do for some time, but I didn't want to talk about sshfs until the groundwork for ssh in general was laid. Fortunately, other hosts have recently covered the basics of ssh, so I don't have to record a series of episodes just to get to sshfs.

From the sshfs man page: SSHFS (Secure SHell FileSystem) is a file system for Linux (and other operating systems with a FUSE implementation, such as Mac OS X or FreeBSD) capable of operating on files on a remote computer using just a secure shell login on the remote computer. On the local computer where the SSHFS is mounted, the implementation makes use of the FUSE (Filesystem in Userspace) kernel module. The practical effect of this is that the end user can seamlessly interact with remote files being securely served over SSH just as if they were local files on his/her computer. On the remote computer the SFTP subsystem of SSH is used.

In short, sshfs offers a dead simple way of mounting remote network volumes from another system at a specified mount point on your local host, with encrypted data communications. It's perfect for ad hoc connections on mobile computers or for more permanent links. This tutorial is going to be about how I use sshfs, rather than covering every conceivable option. I really think my experience will cover the vast majority of use cases without making things complicated; besides, I don't like to discuss options I haven't used personally.

There are other ways to mount remote storage, most notably Samba, but unless you are trying to connect to a Windows share, sshfs is far less trouble to set up, especially since most distros come with an SSH server already installed.

The first thing to do when preparing to use sshfs is to create a mountpoint on your local computer. For most purposes, you should create a folder inside your home folder. You should plan to leave this folder empty, because sshfs won't mount inside a folder that already has files in it. If I were configuring sshfs on a machine that had multiple users, I might set up a mount point under /media, then put symlinks in every user's home folder.
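
For example, assuming you call the folder "storage" (the name itself is arbitrary):

mkdir ~/storage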

The sshfs command syntax reminds me of many of the other extended commands based on ssh, like scp. The basic format is: sshfs username@<remote_host>: <mountpoint>

To put things in a better perspective, I'll use my situation as an example. My home server is on 192.168.2.153. If you have a hostname set up, you can use that instead of an IP. For the sake of argument, my mountpoint for network storage is /home/fifty/storage. So, I can mount the storage folder on my server using:

sshfs fifty@192.168.2.153: /home/fifty/storage

By default, your whole home directory on the remote system will be mounted at your mountpoint. You may have noticed the colon after the IP address; it is a necessary part of the syntax. Let's say you don't wish to mount your whole remote home folder, perhaps just the subdirectory containing shared storage. In my case, my server is a Raspberry Pi 2 with a 5 TB external USB drive which is mounted under /home/fifty/storage. Say I only want to mount my shared storage, not everything in my home folder; then I modify my command to be:

sshfs fifty@192.168.2.153:storage /home/fifty/storage

or

sshfs fifty@192.168.2.153:/home/fifty/storage /home/fifty/storage

Except that generally doesn't work for me, and I'll come to that presently. The 5 TB USB drive on the server isn't actually mounted in my home folder; it automounts under /media. The directory /home/fifty/storage on the server is really a symlink to the mountpoint under /media. To make sshfs follow symlinks, you need to add the option '-o follow_symlinks', so now my sshfs command looks like:

sshfs fifty@192.168.2.153: /home/fifty/storage -o follow_symlinks

You may have noticed the "-o" switch comes at the end of the command. Usually switches come right after the command, and before the arguments.

This will allow sshfs to navigate symlinks, but I've discovered not all distros are comfortable using a symlink as the top-level folder in an sshfs connection. For example, in Debian Wheezy, I could do:

sshfs fifty@192.168.2.153:storage /home/fifty/storage -o follow_symlinks

Other distros (Ubuntu, Mint, and Fedora so far) don't like to connect to a symlink at the top level. For those distros, I need to use:

sshfs fifty@192.168.2.153: /home/fifty/storage -o follow_symlinks

and walk my way down to storage.

Other related options and commands I haven't used, but you may be interested in, include -p, for port. Let's say the remote server you want to mount is not on your local network but a server out on the Internet; it probably won't be on the default ssh port. Syntax in this case might look like:

sshfs -p 1022 fifty@142.168.2.153:storage /home/fifty/storage -o follow_symlinks

Reading the man page, I also find "-o allow_root", which is described as "allow access to root". I would expect, combined with a root login, this would mount all of the storage on the remote system, not just a user's home directory, but without direct experience, I wouldn't care to speculate further.

The mount can be broken with 'fusermount -u <mountpoint>'.
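
With the mountpoint from the examples above, that would be:

fusermount -u /home/fifty/storage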

At this point, I could explain to you how to modify /etc/fstab to automatically mount a sshfs partition. The trouble is, /etc/fstab is processed for local storage before any network connections are made. Unless you want to modify the order in which services are enabled, no remote storage will ever be available when /etc/fstab is processed. It makes far more sense to encapsulate your sshfs command inside a script file and either have it autoloaded with your desktop manager or manually loaded when needed from a terminal.
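
A minimal sketch of such a script, using the same host and mountpoint as the examples above (the file name and the already-mounted check are just illustrations):

#!/bin/bash
# mount-storage.sh - mount the server's shared storage over sshfs
MOUNTPOINT=/home/fifty/storage
# only try to mount if nothing is mounted there already
if ! mountpoint -q "$MOUNTPOINT"; then
    sshfs fifty@192.168.2.153: "$MOUNTPOINT" -o follow_symlinks
fi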

One thing to watch out for is saving files to the mountpoint when the remote storage is not actually mounted, i.e., you save to a default path under a mountpoint you expect to be mounted and is not, so all of a sudden you have files in a folder that is supposed to be empty. To remount the remote storage, you have to delete or move the paths created at your designated mountpoint, to leave a pristine, empty folder again.
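
Before remounting, you can verify the folder really is empty again; assuming the same mountpoint as above:

ls -A /home/fifty/storage    # should print nothing before you run sshfs again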

Weihenstephaner Vitus - The label says it's a Weizenbock, so we know it's a strong, wheat-based beer.

IBU: 17, ABV: 7.7%


Comments


Comment #1 posted on 2016-01-14 09:00:03 by Mike Ray

Using sshfs to mount Pi rootfs on faster machine for cross-compiles

Great show Fifty.

I use sshfs to mount the root file-system of a Pi on my fast quad-core desktop Linux machine for cross-compiling stuff.

I have tool-chains in /opt/toolchains and then I mount the Pi rootfs like this:

sshfs root@raspberrypi:/ /opt/mnt/pi -o follow_symlinks

Then I can specify that as --sysroot when I compile.

Compiling a kernel on a Pi takes about fifteen hours; it takes my desktop machine eight minutes!

Comment #2 posted on 2016-01-15 19:16:39 by Frank

I just tested this out. Thanks, Fifty!

For Slackers, there's a build on slackbuilds.org.

Comment #3 posted on 2016-01-16 12:06:50 by 0xf10e

I'm pretty sure using sshfs for multiple users would map everyone to the user you initiated the connection with.

To prevent yourself from creating files under the mountpoint of your sshfs, just make the dir r-x before mounting.
Should give you enough of a heads-up when you try to store your downloads there.
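
Something like this, assuming a mountpoint of ~/storage:

chmod 555 ~/storage    # read and execute only, so stray saves fail while nothing is mounted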

And btw: Mounting NFS at boot works fine and is just delayed until the network is configured.

Otherwise a nice introduction ;)

Comment #4 posted on 2016-01-18 17:27:58 by Ken Fallon

no multiple users

As far as I know, mapping multiple users to an sshfs connection is not possible.

I created a new user and gave them the same group rights, but after mounting, neither the root nor the test user was allowed to see the mounted connection.

Ken.

Comment #5 posted on 2016-01-21 17:01:44 by Kevin O'Brien

Great show

I'm delighted that my friend FiftyOneFifty was able to build on the earlier shows that klaatu and I did on ssh. That is how I always envisioned this series working.
