If you have systemd, you could do this:
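A sketch of such a unit (host, key path, and forwarded ports below are placeholders; the ExitOnForwardFailure/ServerAlive options make ssh exit on a dead link so systemd's restart policy can bring it back up):

```ini
# ~/.config/systemd/user/ssh-tunnel.service (sketch; adjust host/ports)
[Unit]
Description=Persistent SSH tunnel (example)
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command. ExitOnForwardFailure + ServerAlive* make a dead
# link terminate ssh, so Restart=always can bring the tunnel back up.
ExecStart=/usr/bin/ssh -N \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=10 \
    -o ServerAliveCountMax=3 \
    -L 8080:localhost:80 \
    user@example.com
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
```

This assumes a key without a passphrase (or an agent available to the unit), which the replies below touch on.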
No passphrase for the key? What about a spotty connection? Doesn't WantedBy block startup until this starts properly? (I'm pretty sure I've been soft-locked out of my computer when Comcast decides to do Comcast things.)
This approach works very well. I've had dozens of extremely remote systems hooked up this way for about 8 years. The only problem I've seen is that occasionally the server ssh process will get stuck, so you have to log in to the server and kill it. It seems to happen when a remote goes offline and reconnects without closing the old connection first.
If I were doing it now, I'd probably use WireGuard, which is simpler to set up and works great.
This is quite clean and tidy
Do we really still have to turn every conversation into systemd friction?
No. Some people use ssh while not running Linux, and not by running something exotic; macOS is widely popular.
14 years ago, I was using autossh to keep SSH tunnels up; but at some point (quite far back - perhaps 2016?) ssh gained everything needed to do this internally except the restart.
At this point I configure all of the keepalive and retry options in ssh_config and sshd_config, and use

    while true; do ssh user@host; sleep 10; done
to get the same effect, but with much more flexibility - e.g. alternating connection addresses on a multihomed host, adding logging, running it from a daemontools or systemd unit instead of a loop and letting them track the process and restart it, etc.

Curious what advantages this has over mosh?
https://mosh.org/
mosh is for interactive sessions, to keep them working when the connection is flaky.
autossh is for keeping unattended ssh tunnels alive, if the connection is flaky or one end is only intermittently available. So for using tunnels for the sort of thing you might otherwise use a VPN for.
The last time I used autossh it was on a client site to keep 2 layers of ssh tunnels open to jump through their network isolation hoops.
In general, when flexibility is possible, such a use-case nowadays would often be better served by deploying WireGuard. Grouchy, out-of-touch corporate net admins probably don't even know what it is and insist on their antiquated Cisco VPNs.
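For comparison, a complete WireGuard peer configuration is only a handful of lines (the keys, addresses, and endpoint below are placeholders):

```ini
# /etc/wireguard/wg0.conf (sketch; keys and addresses are placeholders)
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# keeps the NAT mapping open when the client sits behind a home router
PersistentKeepalive = 25
```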
I've used autossh to keep a reverse tunnel open back to my work desktop; IT never found it and I had that in place for years.
Not 100% the same use case as autossh was built for, maybe, but I'm now simply throwing Tailscale on every box I need to interact with. It does away with all the port forwarding stuff; it's absolutely delightful.
How much reliance on a third party am I subjecting myself to by using Tailscale? What happens if I make a local connection to a machine/service running on Tailscale - does it still go out of the local network? If so, is the bulk of the payload transferred locally? Is there any advantage to using it if the machine/service is easily accessible over IPv6?
It will route directly over the local network when possible.
It will be encrypted through the VPN, so there will be some overhead, but it will be as direct as it can be. It only routes through Tailscale's servers as a last resort, when it can't find a direct route at all (usually because NAT holepunching fails somehow). Their "DERP" relay servers just relay the encrypted connection. I think you can use your own relay servers, but I don't know if that feature can be disabled entirely.
Headscale can be entirely self-hosted. It still uses the Tailscale client applications, but is compatible with them.
That's what Tailscale is built for: when it can, it sets up a P2P connection, and it only ever sends data through Tailscale's servers if you are in a restrictive network environment (i.e. a corp network that controls all inbound and outbound traffic).
Good questions, pretty well answered by other commenters. If you are happy with the level of encryption you have on your 'plain' IPv6 connection, sure, use that.
Additionally, the ACL/auth system and their DNS and service discovery thing are nice, though not essential.
Nice tool, but I'm getting tired of using port numbers for everything instead of more descriptive strings. My system has more than 10 tunnels and servers running, and since I only do sysadmin work once every half year or so, the port numbers are very cumbersome to deal with.
I believe these days SSH is willing to forward a UNIX domain socket to a remote TCP port, or a local TCP port to a remote UNIX domain socket, or any combination of the two families really. You could use names locally, if your client tools are willing to do AF_UNIX!
The nice thing about this is that, with filesystem permissions on one end and a check for SCM_CREDENTIALS or SO_PEERCRED on the other, you can effectively get user-based access control working between two machines.
I think this is the one remaining advantage of ssh tunnels over using a VPN.
NB if you're doing this sort of thing, you probably want to add `StreamLocalBindUnlink yes` to the ssh options.
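The socket forwarding described above looks something like this (a sketch; the paths, ports, and host are placeholders):

```shell
# Forward a local UNIX socket to a TCP port on the remote:
ssh -N -o StreamLocalBindUnlink=yes \
    -L /tmp/pg.sock:localhost:5432 user@example.com

# Or the reverse: expose a local TCP port as a UNIX socket on the remote:
ssh -N -o StreamLocalBindUnlink=yes \
    -R /tmp/web.sock:localhost:8080 user@example.com
```

StreamLocalBindUnlink lets ssh remove a stale socket file left by a previous connection before binding the new one.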
How is this different from this
EDIT: Oh, I think I know - autossh must be detecting when the connection is closed, but ssh does not automatically…

Ssh does with the right settings, and has for about a decade - see the systemd example someone posted above.
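Those settings are roughly the following ssh_config options (a sketch; the values are illustrative):

```
# ~/.ssh/config (client side)
Host tunnelhost
    ServerAliveInterval 10    # probe the server every 10 seconds
    ServerAliveCountMax 3     # give up after 3 missed probes
    ExitOnForwardFailure yes  # exit (and get restarted) if a forward fails
    TCPKeepAlive yes
```

With these, a hung connection makes ssh exit instead of sitting there forever, so a loop or service manager can restart it.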
Rather than using AutoSSH for port forwarding and such, I just create a systemd unit with a restart policy. Then you don't need autossh at all, just use ssh.
I used to use autossh to set up a SOCKS proxy to tunnel my web browser traffic via my home network and it worked really well. Also had a ControlMaster on the tunnel which made SSH connections to my server instantaneous.
Nowadays I use WireGuard and a dedicated SOCKS proxy. The upside is that I can access everything on my home network directly without having to tunnel.
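The ControlMaster setup mentioned above is just a few lines of ssh_config (a sketch; the socket path is a common convention, not a requirement):

```
Host myserver
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p   # one socket per user/host/port
    ControlPersist 10m               # keep the master open after the last session
```

Subsequent connections multiplex over the existing master socket, which is why they open near-instantly.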
I used autossh to do terrible things securing redis back in 2013. Fantastic tool.
Ohh tell me more?
Way back, redis didn't have passwords at all. That got added, but there was no secure transport support.
So I ran redis in a higher memory box at rackspace separate from my db and my app server. I used autossh to forward 6379 from localhost on the app server(s) to the redis server. Worked like a charm and never caused any issues.
Other commenters are right in that wireguard is a great modern solution to this!
autossh is nice, but the default options suck. I have to do something like this for it to work well:
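Something along these lines (a sketch; host and ports are placeholders - `-M 0` disables autossh's legacy monitor port in favor of ssh's own keepalives):

```shell
autossh -M 0 -f -N \
    -o "ServerAliveInterval 10" \
    -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -L 6379:localhost:6379 user@example.com
```

`-f` backgrounds autossh, `-N` skips running a remote command, and the `-o` options make ssh itself notice a dead link so autossh can restart it.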