9

This question may not relate specifically to Azure Virtual Machines, but I'm hoping maybe Azure provides an easier way of doing this than Amazon EC2.

I have long-running apps running on multiple Azure Virtual Machines (i.e. not Azure Web Sites or [PaaS] Roles). They are simple console apps/Windows Services. Occasionally, I will do a code refresh and need to stop these processes, update the code/binaries, then restart them.

In the past, I have attempted to use PSTools (psexec) to remotely do this, but it seems like such a hack. Is there a better way to remotely kill the app, refresh the deployment, and restart the app?

Ideally, there would be a "Publish Console App" equivalent from within Visual Studio that would allow me to deploy the code as if it were an Azure Web Site, but I'm guessing that's not possible.

Many thanks for any suggestions!

Ruben Bartelink
Hairgami_Master

3 Answers

4

There are a number of "correct" ways to perform your task.

If you are running a Windows Azure application, there is a simple guide on MSDN. But if you have to do this with a regular console app, you have a problem.

The Microsoft way is to use WMI - a good technology for any kind of management of remote Windows servers. I suppose WMI should be fine for your purposes.
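As a rough sketch of the WMI route, assuming the app runs as `MyApp.exe` and the VM is reachable for WMI/DCOM (the hostname and paths here are placeholders):

```powershell
# Terminate the remote process via WMI (Win32_Process), then restart it.
# 'myazurevm.cloudapp.net', 'MyApp.exe' and the paths are placeholder names.
$cred  = Get-Credential
$procs = Get-WmiObject Win32_Process -ComputerName 'myazurevm.cloudapp.net' `
         -Credential $cred -Filter "Name = 'MyApp.exe'"
foreach ($p in $procs) { $p.Terminate() | Out-Null }

# ...copy the refreshed binaries across (e.g. over an admin share)...

# Start the new binary remotely via Win32_Process.Create
Invoke-WmiMethod -Class Win32_Process -Name Create `
  -ComputerName 'myazurevm.cloudapp.net' -Credential $cred `
  -ArgumentList 'C:\apps\MyApp\MyApp.exe'
```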

And the last way: install Git on every Azure VM and write a simple server-side script, scheduled to run every 5 minutes, that updates the code from the repository, builds it, kills the old process and starts the new one. Push your update to the repository, and that's all. It's definitely a hack, but it works even for non-Windows machines.
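Such a script might look something like this (repo path, build command and process name are all placeholders; schedule it with Task Scheduler or cron):

```powershell
# Pull-build-restart loop, run every 5 minutes by the scheduler.
Set-Location 'C:\apps\MyApp'
git fetch origin
$local  = git rev-parse HEAD
$remote = git rev-parse origin/master
if ($local -ne $remote) {
    git pull origin master
    msbuild MyApp.sln /p:Configuration=Release   # or your build tool of choice
    Stop-Process -Name 'MyApp' -ErrorAction SilentlyContinue
    Start-Process 'C:\apps\MyApp\bin\Release\MyApp.exe'
}
```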

Christo
Evgeny Gavrin
3

One common pattern is to store items, such as command-line apps, in Windows Azure Blob storage. I do this frequently (for instance: I store all MongoDB binaries in a blob, zip'd, with one zip per version #). Upon VM startup, I have a task that downloads the zip from blob to local disk, unzips to a local folder, and starts the mongod.exe process (this applies equally well to other console apps). If you have a more complex install, you'd need to grab an MSI or other type of automated installer. Two nice things about storing these apps in blob storage:
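The startup task can be a short script along these lines (the blob URL and local paths are placeholders; `Expand-Archive` needs PowerShell 5+, so on older hosts you'd shell out to an unzip tool or use `System.IO.Compression` instead):

```powershell
# Startup-task sketch: fetch zip'd binaries from blob storage, unpack, launch.
$blobUrl  = 'https://myaccount.blob.core.windows.net/apps/mongodb-2.0.6.zip'
$localZip = 'C:\apps\mongodb.zip'
Invoke-WebRequest -Uri $blobUrl -OutFile $localZip
Expand-Archive -Path $localZip -DestinationPath 'C:\apps\mongodb' -Force
Start-Process 'C:\apps\mongodb\bin\mongod.exe' -ArgumentList '--dbpath C:\data\db'
```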

  • Reduced deployment package size
  • No more need to redeploy entire cloud app just to change one component of it

When updating the console app: You can upload a new version to blob storage. Now you have a few ways to signal my VMs to update. For example:

  • Modify my configuration file (maybe I have a key/value pair referring to my app name + version number). When this changes, I can handle the event in my web/worker role, allowing my code to take appropriate action. This action could be to stop the exe, grab the new one from blob, and restart. Or... if it's more complex than that, I could even let the VM instance simply restart itself, clearing memory/temp files/etc. and starting everything cleanly.
  • Send myself some type of command to update the app. I'd likely use a Service Bus queue to do this, since I can have multiple subscribers on my "software update" topic. Each instance could subscribe to the queue and, when an update message shows up, handle it accordingly (maybe the message contains app name and version number, like our key/value pair in the config). I could also use a Windows Azure Storage queue for this, but then I'd probably need one queue per instance (I'm not a fan of this).
  • Create some type of WCF service that my role instances listen to, for a command to update. Same problem as Windows Azure queues: Requires me to find a way to push the same message to every instance of my web/worker role.
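The first option above can be reduced to a small polling sketch: compare a deployed version marker against the version currently running, and swap binaries when it changes. Everything here - the version URL, paths and app name - is hypothetical:

```powershell
# Poll a version marker in blob storage; on change, stop, refresh, restart.
$versionUrl = 'https://myaccount.blob.core.windows.net/apps/MyApp.version'
$current    = Get-Content 'C:\apps\MyApp\version.txt' -ErrorAction SilentlyContinue
$latest     = (Invoke-WebRequest -Uri $versionUrl).Content.Trim()
if ($latest -ne $current) {
    Stop-Process -Name 'MyApp' -ErrorAction SilentlyContinue
    Invoke-WebRequest -Uri "https://myaccount.blob.core.windows.net/apps/MyApp-$latest.zip" `
                      -OutFile 'C:\apps\MyApp.zip'
    Expand-Archive 'C:\apps\MyApp.zip' 'C:\apps\MyApp' -Force
    Set-Content 'C:\apps\MyApp\version.txt' $latest
    Start-Process 'C:\apps\MyApp\MyApp.exe'
}
```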

These all apply well to standalone exe's (or xcopy-deployable exe's). MSIs that require admin-level permissions need to run via a startup script. In this case, you could have a configuration change event, which would be handled by your role instances (as described above), but you'd have the instances simply restart, allowing them to run the MSI via the startup script.

David Makogon
  • thanks for your well-thought answer. There are a couple of good concepts in there... never thought of rebooting as a component of code deployments but if we have enough machines running, I suppose it could work. I like it! – Hairgami_Master Jul 13 '12 at 13:58
1

You could

  1. build your sources and stash the package contents in a packaging folder
  2. generate a package from the binaries in the packaging folder and upload into Blob storage
  3. use PowerShell Remoting to the host to pull down (and unpack) the package into a remote folder
  4. use PowerShell Remoting to the host to run an install.ps1 from the package contents (i.e. download and configure) as desired.
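Steps 3 and 4 can be sketched roughly as below. As a simplifying assumption, the package is pushed over the session from the build machine rather than pulled from Blob storage; `$vm`, credentials and paths are placeholders, and `Copy-Item -ToSession` needs PowerShell 5+:

```powershell
# Copy the package to the VM and run its install.ps1 remotely.
$vm      = 'myazurevm.cloudapp.net'
$session = New-PSSession -ComputerName $vm -Credential (Get-Credential)

Copy-Item 'C:\build\MyApp.zip' -Destination 'C:\deploy\MyApp.zip' -ToSession $session
Invoke-Command -Session $session -ScriptBlock {
    Expand-Archive 'C:\deploy\MyApp.zip' 'C:\deploy\MyApp' -Force
    & 'C:\deploy\MyApp\install.ps1'
}
Remove-PSSession $session
```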

This same approach can be used locally via Enter-PSSession -ComputerName $env:COMPUTERNAME, giving you a quick deploy-local-build strategy, which means you're using an identical strategy for dev, production and test a la Continuous Delivery.

A potential optimization you can do later (if necessary) is (for a local build) to cut out steps 2 and 3, i.e. pretend you've packed, uploaded, downloaded and unpacked and just supply the packaging folder to your install.ps1 as the remote folder and run your install.ps1 interactively in a non-remoted session.

A common variation on the above theme is to use an efficient file transfer and versioning mechanism such as git (or (shudder) TFS!) to achieve the 'push somewhere at end of build' and 'pull at start of deploy' portions of the exercise (Azure Web Sites offers a built in TFS or git endpoint which makes each 'push' implicitly include a 'pull' on the far end).

If your code is xcopy deployable (and shadow copied), you could even have a full app image in git and simply do a git pull to update your site (with or without a step 4 comprised of a PowerShell Remoting execute of an install.ps1).

Ruben Bartelink