Taking ssh too far

source link: http://rachelbythebay.com/w/2013/06/27/ssh/

ssh and shell scripts together represent one of those methods which seems to draw a lot of people in. It's a way of working which practically begs for you to build systems on top of it. It can do so many things. If you want to run a command on a distant system and you have done some basic setup work (keys and such), it'll do it for you. I think it's actually a little too tempting and it's easy to take it too far.

You don't have to actually log in, since you can tell it whatever needs to run and it will run it directly instead of starting a login session.

ssh host uname -a

So now you know that 'host' is running Linux x.y.z with the foobar patch. Good enough.

Things change. Now you want to run it on 20 systems. This might happen with a simple for loop or something like that. You also adjust the command a bit to keep the output from getting mixed up.

for i in `cat list_of_hosts`
do
  ssh $i 'echo `hostname` - `uptime`'
done

Of course, if you used "" instead of '' to wrap that command which is intended for the remote machines, you now have two problems: you probably just ran "hostname" and "uptime" locally.
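
To make the quoting difference concrete: with single quotes, the backticks survive the trip and "hostname" and "uptime" run on the remote box; with double quotes, your local shell expands them first and ssh just ships the already-expanded string. Roughly:

ssh $i 'echo `hostname` - `uptime`'   # backticks expanded remotely: the remote machine's hostname and uptime
ssh $i "echo `hostname` - `uptime`"   # backticks expanded locally: your own hostname and uptime, echoed by the remote shell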

That's going to run them in series. If any of the machines are particularly slow, then everything after it is going to suffer. ssh will eventually give up if the machine is down. You might still get stuck if the machine is up but isn't particularly responsive to userspace for whatever reason. Machines which are chewing swap will tend to do this.
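
One partial mitigation, if your ssh honors it (OpenSSH does), is the ConnectTimeout option. It caps how long the connection attempt is allowed to take, so a dead machine only costs you a few seconds, but it does nothing once the login succeeds and the remote command hangs, which is exactly the wedged-box case above:

ssh -o ConnectTimeout=5 $i 'echo `hostname` - `uptime`'   # give up if a connection can't be established within 5 seconds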

So okay, you think, let's take these things out of series. Now it's time to run them in parallel. The command is adjusted to stick a "&" on the end and now these ssh commands run in the background. All 20 of them are launched at the same time.
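
The loop ends up looking something like this, with each ssh shoved into the background:

for i in `cat list_of_hosts`
do
  ssh $i 'echo `hostname` - `uptime`' &   # the trailing & backgrounds each ssh immediately
done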

Of course, the first time this is run, the script drops out just as soon as it can start all of the ssh sequences, and the results come back all willy-nilly. Some of them hit while the script is still running, and others return much later as they manage to succeed. Others just fail.

This can't stand, so now it's time to adjust the script to make it wait for the various child ssh processes to finish. This is simple: just a single "wait" at the bottom will make it do that, and as long as your machines are basically responsive, you're probably okay. Of course, if any of them are slow, then your entire script hangs at the bottom. If they are so wedged that running commands is impossible but ssh logins still nominally work (and this happens!), your script will sit forever.

Now what? I guess you have to change the script yet again to add some kind of timeout for each call. Unless your system already has some kind of "alarm" helper which will run a command with a timeout (enforced by SIGALRM, perhaps?) then you either get to write one or start getting really clever with your scripting.
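
If your system ships GNU coreutils, the "timeout" program is one such helper, and it saves you from writing your own wrapper. A rough sketch of the per-host call:

timeout 10 ssh $i 'echo `hostname` - `uptime`' &   # this particular ssh gets killed if it runs longer than 10 seconds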

Maybe you just punt and put something really horrible at the bottom.

( sleep 15; kill $$ ) &
wait 

So now if it takes too long, the script will still manage to exit... by killing itself. Of course, all of those ssh processes are still in the background, and they could still manage to succeed somehow. Or they could stay around for a very long time. This means yet another change to make it kill the process group instead of just the process.

( sleep 15; kill -15 -$$ ) &
wait

This whacks any children which are still hanging around, and so okay, now you're probably not too likely to still have "klingons". In theory the ssh children could go off and become their own session leaders, but in practice you'll probably get away with it.

Once this "system" reaches this stage, it will probably be good enough to keep working for a while. Then, some day, someone will start adding more systems to the list, or will otherwise come up with a workload which is far bigger than 20 systems, and it will get ugly again.

Have you ever looked at what sort of resources an ssh connection consumes? It might be "only" 2 MB resident, but how far can you take that before it starts to be a problem? How much memory does your machine have, anyway? How about CPU time, or network bandwidth? Do you really want all of that stuff running in parallel?
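
If you've never checked, a Linux-flavored ps will show you what every in-flight ssh is costing in resident memory (RSS, in kilobytes); multiply that by however many you intend to run at once:

ps -o pid,rss,args -C ssh   # one line per running ssh process, with its resident set size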

Okay, forget about the steady-state requirements. How computationally intensive is it to bring up all of those connections at the same time? Do you really want to do that?

I imagine at this point, people in this situation frequently find themselves trying to write some kind of batch scheduler thing... in shell scripting languages... for ssh connections. This is so it can kick off a bunch of connections in parallel (so as to avoid the serialization problem from before), but not too many (so as to avoid melting down the controller machine).
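
For what it's worth, one common way to fake that scheduler without writing it yourself is to lean on xargs, which already knows how to keep a bounded number of children running. This is just a sketch of the idea, not what the hypothetical script above does:

xargs -P 5 -I{} ssh -n {} 'echo `hostname` - `uptime`' < list_of_hosts   # at most 5 ssh processes at a time; -n keeps ssh away from stdin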

Of course, all of this assumes the connections will be short-lived, like my example to run 'hostname' and 'uptime' shown above. What happens if these connections have to last a long time? Now you don't have the luxury of letting 20 run, then finish, and then starting 20 more. Now you somehow have to figure out how to run 40 in parallel. If 40 is okay, then you have to figure out 80, and so on.

Does this sound far-fetched? If you try to write a system which does cluster-scale testing of large software systems, you might find yourself trying to get something to bring up hundreds of ssh connections in parallel... just for one test. Meanwhile, another test will be running on another cluster with another few hundred machines, and it will also need a bunch of long-lived ssh connections.

Why long-lived ssh connections? Easy. They're running "tail -f" on a bunch of system files while the test does its thing. You know, "just in case". Maybe they're using ssh as a command channel for the testing framework. Whatever. The point is, you can't tear it down. It has to stay up.
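
Concretely, each machine in the test ends up with something like this hanging off it for the duration, where the log path is just a stand-in:

ssh $host tail -f /var/log/messages &   # stays up until the test ends or the connection dies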

This sort of thing never gets easier. It just gets more and more dark as you try to find new ways to work around the latest problem which has cropped up.

Maybe this is why I tend to think "oh, that's cute" whenever someone releases yet another system management framework which runs over ssh. To me, it's cute because it might work for a handful of well-behaved machines. It just seems likely to start dragging and eventually breaking once it's time to scale up beyond some point.

If you don't know where that point is, and don't know how to find it, that's okay. It'll find you.

