5

How is performance in the current version (4.7) of AccuRev?

  • Time to check out, per 100 MB or per GB?
  • Time to commit, per number of files or per MB?
  • Responsiveness of the GUI with 100+ streams?

I just had a demo of AccuRev, and the streams look like a lightweight way to model workflow around code and projects. I've heard people praising AccuRev for the streams back end and complaining about performance. AccuRev appears to have worked on the performance, but I'd like to get some real-world data to make sure it isn't a case of demos-well-runs-less-well.

Does anyone have AccuRev performance anecdotes or (even better) data from testing?

Benjol
Peter Kahn
  • Anecdotal answer: Slow. Do not use unless you have to. The GUI is almost unusable. This is anecdotal only; I don't have any empirical numbers to back up what I am saying. – Martin Feb 05 '15 at 14:54
  • I've just migrated a Perforce project to the latest AccuRev (6.2.3?), and a lot of the horror stories I'd heard have yet to manifest in any way that I could tell, BUT the GUI is incredibly slow and I don't recommend doing any moves while offline. I like that I don't have to check things out, but there is a performance cost for that. The biggest issue is that any operation in the GUI, including clicking a file, takes a really long time compared to P4. I suggest learning the CLI. – Novaterata Nov 17 '16 at 21:11

2 Answers

8

I don't have any numbers but I can tell you where we have noticed performance issues.

Our builds typically use 30-40K files from source control. In my workspace there are currently over 66K files, including build intermediates and output files, over 15 GB in size. To keep AccuRev responsive we aggressively use ignore elements so that AccuRev skips intermediate files such as *.obj, and we also use the timestamp optimization.

In general, running an update is quick. Project teams are typically 5-10 people, so normally only a couple of dozen files come down if you update daily, and even when someone makes changes that touch lots of files, speed is not an issue. On the other hand, a full populate of all 30K+ files is slow. I don't have a time, since I seldom do this; on the rare occasion I do, I run the populate when I'm going to lunch or a meeting, and I expect it could take as much as 10 minutes. In general, source files come down very quickly, but we have some large binary files, 10-20 MB each, that take a couple of seconds apiece.

If the exclude rules and ignore elements are not correctly configured, AccuRev can take a couple of minutes to run an update for workspaces of this size. When I hear other developers complaining about the speed, I know something is misconfigured, and we get it straightened out.
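For what it's worth, here is a rough sketch of the kind of setup described above. The ignore patterns are just examples, and the exact mechanism depends on your AccuRev version (a global environment variable in the 4.x line, per-directory .acignore files in later releases; see the comments below this answer), so verify the commands and flags against your version's CLI help before relying on them:

    # Drop a .acignore file into a build directory so intermediates are
    # skipped (per-directory ignore files; the patterns are examples only).
    printf '%s\n' '*.obj' '*.pdb' '*.o' > .acignore

    # With ignores and the timestamp optimization in place, a routine
    # workspace update is the fast path:
    accurev update

    # A full populate of every element is the slow path; run it when you
    # can afford to wait:
    accurev pop -O -R .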

A year or so ago, one of the projects updated Boost, with 25K+ files, and also added Firefox to the repository (I forget the size, but it made Boost look small). They also added ICU, wrote a lot of software, and modified countless files; in all, I recall there were approximately 250K+ files sitting in a stream. I unfortunately decided that all their good code should be promoted to the root so all projects could share it. This turned out to be a little beyond what AccuRev could handle well: it was a multi-hour process getting all the changes promoted. As I recall, once Firefox was promoted the rest went smoothly - perhaps a single transaction with over 100K files was the issue?

I recently updated Boost and so had to keep and promote 25K+ files. It took a minute or two, which is not unreasonable considering the number of files and the size of the binaries.
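For readers who only know the GUI, that keep-and-promote flow looks roughly like the following from the command line. The comment text and file path are made up, and exact flags can differ between versions, so check your CLI help first:

    # See which elements have been modified in the workspace:
    accurev stat -m

    # Keep the changed element(s), recording a new version in the workspace:
    accurev keep -c "Update Boost headers" third_party/boost/version.hpp

    # Promote the kept version(s) up to the backing stream:
    accurev promote -c "Update Boost headers" third_party/boost/version.hpp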

As for the number of streams, we have over 800 streams and workspaces, and performance here is not an issue. In general I find the large number of streams hard to navigate, so I run a filtered view of just my workspaces and just the streams I'm interested in. However, when I do need to look at the unfiltered list to find something, performance is fine.

As a final note, AccuRev support is terrific - we call them the voice in the sky. Every now and again we shoot ourselves in the foot using AccuRev and wind up clueless about how to fix things. Almost always we did something dumb and then tried something dumber to fix it. Eventually we place a support request, and the next thing we know they are walking us through the steps to righteousness, either on the phone or in a GoToMeeting session. I've even contacted them about trivial things that I just don't have time to figure out on a hectic day, and they kindly walk me through them rather than telling me to RTFM.

Stephen Nutt
  • A bit off topic, but is it possible to exclude specific files and folders directly in the user interface now, so that AccuRev ignores these files (such as obj files, etc.)? The last time I used AccuRev (mid-2008), I had to set up a global environment variable listing all the files I wanted AccuRev to ignore, which was a pain. – Nitramk Nov 24 '09 at 09:54
  • We are using 4.7, and with that you still need to set an environment variable. There is a newer version available that we have yet to upgrade to, so it may have changed, but I suspect not. – Stephen Nutt Nov 24 '09 at 13:24
  • You can use .acignore files per directory; at least that way they travel with the code, but it sucks that they aren't recursive. – Andy Dent May 25 '10 at 06:13
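To make the two approaches in these comments concrete, here is a hedged sketch. The environment variable name is as I remember it from the 4.x documentation and the pattern syntax is illustrative, so double-check both against the docs for your release:

    # AccuRev 4.x: a global environment variable of ignore patterns
    # (name and separator from memory; verify before use).
    export ACCUREV_IGNORE_ELEMS="*.obj *.pdb *.o"

    # Later releases: a .acignore file per directory instead, so the rules
    # travel with the code (though, as noted above, they are not recursive).
    printf '%s\n' '*.obj' '*.pdb' > .acignore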
0

Edit 2014: We can now get acceptable X-Windows performance by using the commercial version of RealVNC.

Original comment: This answer applies to any version of AccuRev, not just 4.7.

Firstly, GUI performance might be OK if you can use the web client. If you can't use the web client and you want GUI performance, then you'd better be using Windows, or have all your developers in one place, i.e. where the AccuRev server is located. Try to run the GUI on X-Windows over a WAN? Forget it: our experience has been dozens of seconds or minutes for basic point-and-click operations. This is over a fairly good WAN about 800 miles distant, with an almost optimal ping time. This is not a failing of AccuRev but of X-Windows, and you'll likely have similar problems with other X applications over a WAN. So avoid basic X if you possibly can. Currently we cannot, and our WAN users are forcibly relegated to command-line only.

The basic problem is that AccuRev is centralized and you can't increase the speed of light. I believe you can get around WAN latency by running AccuRev Replication Servers, but that still does not properly address the problem if you have remote developers at single-person offices over VPN. It is ironic that the replication servers somewhat turn this centralized VCS into a form of DVCS.

If you don't have replication servers, then a horrible but somewhat workable work-around is to use a delta-synchronization tool such as rsync to sync your source tree between your local machine where you can run the GUI (i.e. the GUI running directly on your Windows or Linux laptop) and the machine where you're actually working (e.g. a UNIX machine 1,000 miles away); a rough sketch of this is below. Another option is to use something like VNC, which works better over a WAN than X, connecting to a virtual desktop at the AccuRev server's location, and use X from there.

At my workplace more than one team has resorted to using Mercurial on the side and promoting to AccuRev only when it's strictly necessary. As Stephen Nutt points out above, other necessary work is to use the time-stamp optimization and ignores. We also have our AccuRev admins (yes, it requires you to employ people to babysit it) complain when we need to include large numbers of files, despite the fact that they form a core part of our product and MUST be included and version controlled. Draw your own conclusions.
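The rsync work-around mentioned above, sketched out. The host name, paths, and exclude pattern are made-up examples; adjust them to your own layout and add excludes for whatever intermediates your build produces:

    # Pull the remote working tree down to the laptop where the GUI runs;
    # skip build intermediates so only source crosses the WAN.
    rsync -avz --delete --exclude '*.obj' \
        devbox.example.com:/home/me/workspace/ ~/workspace-mirror/

    # After editing locally, push the changes back before keeping/promoting
    # from the remote machine's AccuRev workspace.
    rsync -avz ~/workspace-mirror/ devbox.example.com:/home/me/workspace/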

Tim