I've been passively looking for a tip like this for a while. Oftentimes I'll need to run a command on a server that takes a good 30 minutes (loading a database dump, for example). Normally I'll run the command in the background, followed by this:
while true; do sleep 60; date; done
This has been my timeout prevention: the minute-by-minute output keeps the idle connection alive, which keeps the command from dying with it. Yes, it's a pretty stinking lame solution. Especially lame when the server drops the connection anyway.
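For what it's worth, SSH can handle the keepalive itself. The ServerAliveInterval option makes the client ping the server on a fixed schedule, no busy loop required (user@host below is whatever you normally connect to):

ssh -o ServerAliveInterval=60 user@host

Or, to make it stick for every connection, in ~/.ssh/config:

Host *
    ServerAliveInterval 60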
nohup saves the day
If you need to run a REALLY long command remotely and don't want it dying with an SSH timeout, then nohup is your friend.
nohup shields the command from the HUP ("hangup") signal, which gets sent to a terminal's child processes when the terminal disconnects and which, by default, terminates them.
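Under the hood, nohup boils down to something like this sketch (some_long_command is a stand-in, not a real program):

# Roughly what nohup does: ignore SIGHUP, append output to nohup.out,
# then exec the real command. Ignored signals stay ignored across exec.
( trap '' HUP; exec some_long_command >> nohup.out 2>&1 ) &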
As a real example, here's that database load from earlier:
nohup sh -c 'bunzip2 -c the_entire_internet.sql.bz2 | mysql the_internet_production' &

(One catch: nohup only shields the first command of a pipeline, so wrap the whole pipeline in sh -c to protect every process in it.)
nohup will probably say something like "nohup: ignoring input and redirecting stderr to stdout", but it's of little consequence; any output just piles up in nohup.out. You're safe to quit your terminal, and your process will be safely fostered by init (or /sbin/launchd, if the box is a Mac) after you take its parent away. Ya big mean lug.
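One more trick: if you've already kicked off a long command and only then remembered nohup, bash's disown builtin can spare it after the fact (load_the_dump.sh is a made-up script name):

load_the_dump.sh > load.log 2>&1 &   # oops, forgot nohup
disown -h %1                         # don't send this job HUP when the shell hangs up
tail -f load.log                     # check on it whenever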
2 comments:
Or just use screen :)
Yes, after looking into said "screen" program, it does look like the better choice. It's funny how you can administer Unix servers for so long and never hear about a tool like that. Or maybe I did hear about it and just assumed people were talking about an actual screen. Not the most unique name for a tool :)
I feel another blog post coming on.
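In the meantime, for anyone else who hasn't tried it, the basic workflow looks like this (dbload is just a session name I picked):

screen -S dbload     # start a named session, then run the long command inside it
                     # detach with Ctrl-a d; everything keeps running
screen -r dbload     # reattach later, even from a brand new SSH connection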