We use rsync extensively throughout our deployment pipeline. Here are a few pointers on how we use it.
Don't rsync directly to the location you are running your application from. Instead, upload to a staging directory and then use a symlink to change from one version of your code to the next. Changing a symlink is an atomic operation.
We have a user called something like ~packages which holds all the static code and assets. This user's data should be read-only to the users that run the actual services. Inside that user's home directory, we have version directories like tags/0.11.1/1, tags/0.11.1/2 and tags/0.11.2/1. These directories correspond to tags from our version control system.
Switching over to a new build just means stop service, change symlink, start. Some services don't need the stop and start part.
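The upload-then-swap flow above can be sketched like this. The paths follow the tags layout described earlier; the `current` link name and the temp-link trick are assumptions about how you might wire it up, not the only way:

```shell
# Each build is uploaded into its own versioned directory, e.g.:
#   rsync -a build/ packages@host:tags/0.11.2/1/
# Locally on the server, cutting over is a symlink swap.
mkdir -p tags/0.11.1/2 tags/0.11.2/1
ln -s tags/0.11.1/2 current           # the currently running deploy

# To switch: point a temporary link at the new build, then rename it
# over the old one. rename(2) is atomic, so the service never sees a
# half-updated tree.
ln -sfn tags/0.11.2/1 current.tmp
mv -T current.tmp current             # atomic swap (GNU mv)
```

Reverting a bad deploy is the same operation with an older tag as the target.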
You can use hard links to make this process even better. Our build system uses the "--link-dest" option to specify the last build's directory when uploading a new build. This means that files that have not changed from the last build don't consume any extra space on the disk. Since the inodes are the same, they even stay in the file system cache after the deploy.
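Here's a minimal local sketch of `--link-dest` (in practice the destination would be the remote versioned directory, and the directory names here are hypothetical):

```shell
# First deploy: a normal copy.
mkdir -p build old new
echo "unchanged" > build/app.js
rsync -a build/ old/

# Second deploy: point --link-dest at the previous build. Files that
# are identical to the copy in old/ are hard-linked instead of being
# re-stored, so they cost no extra disk space. The path must be
# absolute (or relative to the destination directory).
rsync -a --link-dest="$PWD/old" build/ new/
```

After this, `old/app.js` and `new/app.js` share one inode, which is also why unchanged files stay warm in the file system cache across deploys.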
You can keep lots of past versions sitting on the server without using much extra disk space, since unchanged files are shared via hard links. If you have a bad deploy and need to revert to a past version, just change the symlink again.
Rsync is just a file transfer tool with extra options. Deployment involves a lot more pieces. The file transfer component of your deployment could certainly use Rsync, assuming you aren't limited to a particular transport protocol (though rsync does support HTTP proxies!)
Here are some of the neat features of Rsync you can take advantage of for deployments:
* Fault tolerance: when an error happens at any layer (network, local i/o, remote i/o, etc), Rsync will report it to you. Trapping these errors will give you better insight into the status of your deployments.
* Authentication: the Rsync daemon supports its own authentication schemes.
* Logging: the daemon can report various details about the transfer process to syslog; collect these logs to learn about the deployment status.
* Fine-grained file access: use a 'filter', 'exclude' or 'include' to specify what files a user can read or write, so complex sets of access can be granted for multiple accounts to use the same set of files (you can also specify specific operations that will always be blocked by the daemon)
* Proper permissions: force the permissions of files being transferred, so your clients don't fuck up and transfer them with mode 0000 perms ("My deploy succeeded, but the files won't load on the server! Wtf?")
* Pre/post hooks: you can specify a command to run before the transfer, and after, making deployment set-up and clean-up a breeze.
* Checksums on file transfers for integrity
* Preserves all kinds of file types, ownership and modes, with tons of options to deal with different kinds of local/remote/relative paths, even if you aren't the super-user (including acls/xattrs)
* Tons of options for when to delete files and when to apply the files on the remote side (before, during or after transfer, depending on your needs)
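Several of the daemon-side features above live in rsyncd.conf. A sketch of what that might look like (module name, paths, user and hook scripts are all hypothetical):

```conf
[deploys]
    path = /home/packages/tags
    read only = false
    # Authentication: daemon-level auth, separate from system accounts.
    auth users = ci
    secrets file = /etc/rsyncd.secrets      # user:password pairs, mode 0600
    # Proper permissions: force sane modes on incoming files.
    incoming chmod = D755,F644
    # Fine-grained file access and always-blocked operations.
    exclude = *.tmp
    refuse options = delete
    # Pre/post hooks for deployment set-up and clean-up.
    pre-xfer exec = /usr/local/bin/deploy-prep
    post-xfer exec = /usr/local/bin/deploy-finish
    # Logging to syslog.
    transfer logging = true
    syslog facility = daemon
```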
If your current deployment procedure is "I just scp this directory or zip file up", then yes, rsync may be slightly better. It ultimately depends on how much changed between your new build artifact and what's actually on your server. If you're deploying using just scp though, I'd strongly suggest looking at a deploy tool (e.g., Capistrano).
Where I've found rsync really valuable is for good ol' regular file copying ("I just need to stick this one file or directory on a server"). I've pretty much stopped using scp and replaced it with rsync. rsync is awesome because:
1) you can resume interrupted transfers
2) it's much faster than scp when sending lots of small files
3) it's actually, you know, a sync tool, as opposed to just a copy tool
If you miss the little progress bar that scp gives you, you can also use --progress with rsync and then it's basically a drop-in replacement.