
How we do things now

We have a file server (NFS) that multiple web servers mount and use as their web root. When we deploy our codebase, we SCP an archive (tar.gz) to the NFS server and unarchive it directly into the "web directory" on the file server.
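Roughly, the deploy looks like this today (hostname and paths changed for the example):

scp release.tar.gz nfs-server:/tmp/
# extract straight into the live web directory on the file server
ssh nfs-server 'tar -xzf /tmp/release.tar.gz -C /export/webroot/'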

The issue

During the deploy process we see some I/O errors, mostly when a requested file cannot be read: Smarty error: unable to read resource: "header.tpl". These errors go away once the deploy is finished, so we assume it's because unarchiving the data directly into the web directory isn't the safest of things. I'm guessing we need something atomic.

My Question

How can we atomically copy new files into an existing directory (the web server's root directory)?

EDIT

The files that we are uncompressing into the web directory are not the only files in that directory. We are adding files to a directory that already has files, so copying the whole directory or swapping a symlink is not an option (that I know of).

mmattax
  • rename is atomic (mv); also maybe it's best to use soft links, so the actual web directory is just a link to /storage/www.revision.3282378 for example – jackdoe Dec 16 '11 at 19:18

5 Answers


I think rsync is a better choice than scp, since only the changed files get synced. But deploying code with a hand-rolled script is not convenient for a development team, and the errors during deployment are not very human-friendly.

You can look at Capistrano, Magallanes, or Deployer, but they are scripts too. I'd recommend trying walle-web, a deployment tool written in PHP with yii2 out of the box. I have hosted it at our company for months, and it works smoothly for deploying to test, staging, and production environments.

It depends on a handful of standard tools (rsync, git, ln), but the web UI is generally nicer for day-to-day operations. Have a try :)

  • Welcome to Stack Overflow! I've noticed that all five of your answers so far on this site (including the two that were recently removed) promote the same tool, walle-web. Please take a moment to read our [guidelines on self-promotion](http://meta.stackexchange.com/a/59302/253560). Importantly, if you are affiliated with this tool, you need to disclose that in each answer. Further you really shouldn't be promoting your tool in all your answers on this site. – josliber Oct 08 '15 at 17:37

Here's what I do.

DocumentRoot is, for example, /var/www/sites/www.example.com/public_html/:

cd /var/www/sites/www.example.com/
svn export http://svn/path/to/tags/1.2.3 1.2.3
ln -snf 1.2.3 public_html

You could easily modify this to expand your .tar.gz before changing the symlink instead of exporting from svn. The important part is that the change is the atomic application of the symlink.
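For example, a tarball-based variant might be (the release name and archive path here are placeholders):

cd /var/www/sites/www.example.com/
mkdir 1.2.4
# unpack outside the live docroot
tar -xzf /path/to/release-1.2.4.tar.gz -C 1.2.4
# swap the symlink to the new release
ln -snf 1.2.4 public_html

If you want the swap to be strictly atomic everywhere, create the link under a temporary name and rename it over the old one (ln -s 1.2.4 public_html.new && mv -T public_html.new public_html), since rename(2) is atomic while some ln implementations unlink and recreate the target.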

ghoti

I like the NFS idea. We deploy our code to an NFS server that is mounted on our frontends. We run a shell script when we want to release a new version. What we do is keep a current symlink pointing at the latest release dir, like this:

/fasmounts/website/current -> /fasmounts/website/releases/2013120301/

And apache document root is:

/fasmounts/website/current/public 

(in fact apache document root is /var/www which is a symlink to /fasmounts/website/current/public)

The shell script updates the current symlink to the new release AFTER everything has been uploaded correctly.
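A minimal sketch of that last step, using the paths above (the release id is only an example):

# releases/2013120301 has already been fully uploaded and verified
cd /fasmounts/website
ln -snf releases/2013120301 current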


Why not just have 2 dirs with 2 different versions of the site? When you have finished deploying into site_2, you switch the site dir in your webserver config (Apache, for example) and copy all the files over to site_1. Then you can deploy into site_1 and switch to it from site_2 with the same method.
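Sketching that flow with Apache (the vhost file name and paths are only examples):

# push the new version into the idle copy
rsync -a build/ /var/www/site_2/
# point the vhost at site_2 and reload
sed -i 's|/var/www/site_1|/var/www/site_2|' /etc/apache2/sites-available/example.conf
apachectl graceful
# sync the now-live files back into site_1 so the next deploy can target it
rsync -a --delete /var/www/site_2/ /var/www/site_1/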

Ximik
  • I guess something like mentioned here is the fastest way. Copy your new stuff to a temp folder, then rename the original and the new/temp folder afterwards (which will not take much time). – djot Dec 16 '11 at 19:10
  • See my Edit - We have other files in the web root that must remain (marketing site) and lots (> 40GB of user generated content), I don't want to duplicate that all the time... – mmattax Dec 17 '11 at 02:56

RSync was born to run... er... I mean to do this very thing

RSync works over local file systems and ssh - it's very robust and fast - sending/copying only changed files.

It can be configured to delete any files that have been deleted (or are simply just missing from the source), or it can be configured to leave them alone. You can set up exclude lists to exclude certain files/directories when syncing.
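It also doesn't need to run as a daemon for this; a one-off local (or over-ssh) sync looks something like the following, where the paths and the exclude pattern are just examples:

# trailing slash on the source means "copy the contents of this directory"
rsync -av --exclude 'uploads/' /tmp/release/ /var/www/site/
# without --delete, files already in the web root that aren't in the release are left untouched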

Here's a link to a tutorial.

Re: atomic - link to another question on SO

Tim G
  • for what it's worth, I've used this on multiple web sites to deploy code for the last 2 years without any errors (I log and email errors) - no missing file notices/errors - just easy, single command, command-line scripted deployments. – Tim G Dec 16 '11 at 19:33
  • This sounds like a good option, any examples of running it as a "one off" on a local directory, or must rsync always run as a daemon? – mmattax Dec 17 '11 at 02:58
  • I run rsync locally all the time as part of a script that packages up google code projects. I'm pretty sure I never configured rsyncd - I'm on a mac. – Tim G Dec 17 '11 at 05:29
  • All network admins I've spoken with say they've seen firsthand rsync destroy entire file systems (with deletions turned on); and no matter how fast rsync is, if you ever operate on more than a single file, it is not an atomic deployment. – ThorSummoner Nov 18 '14 at 18:44
  • ^ rsync with the delete flag operating on (through) symlinks specifically is not safe. – ThorSummoner Nov 18 '14 at 19:02