5

I work mostly with embedded applications (FPGAs and microcontrollers) and I'm fairly new to git and version control in general. I've come to appreciate its power and want to set up some of my projects so that co-workers and friends can collaborate with me. These projects are typically developed in an integrated development environment (IDE) such as MPLAB-X, Code Composer Studio, Libero, or Quartus, which generates binaries, provides background debugging, and offers other features.

I've had some trouble setting up a project in a repository so that someone else can clone it and get straight to work on it. Most recommended .gitignore settings have you ignore the main project file as well as all of the extra binary files and by-products such as .tcl scripts and text reports. By ignoring these, I find that I am removing the very information a collaborator would need to set up the development environment with the same configuration. However, if I track them in the repository, then it gets bogged down with extra files (often large ones) that aren't important.

Is there a better solution to this problem?
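For reference, the kind of recommended .gitignore I mean looks something like this (a typical Quartus-style example; exact entries vary by tool):

```
# compiled databases and build outputs
db/
incremental_db/
output_files/
# generated reports and workspace state
*.rpt
*.summary
*.qws
```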

kjgregory
  • 656
  • 2
  • 12
  • 23

2 Answers

2

My rules are:

  • there needs to be a single command that sets up the environment after a checkout
  • files under version control may not change during execution of the setup command or a build.

The weaker rules are:

  • there should be no redundant information in version-controlled files
  • making a change should lead to a minimal diff

I have per-file-type filters configured that normalize the files before check-in. For QuartusII, these are:

  • .qpf and .qsf have the datestamp removed. These load just fine; Quartus simply writes a new datestamp, which is then removed again on the next check-in.
  • .qsf is run through a normalization step (Quartus has an explicit "save normalized project file" command).
  • .vhd files containing IP megafunctions are reduced to the recovery information comments, and regenerated by the setup script.
  • .qip files are ignored (these are regenerated along with the megafunction)

Granted, there is an initial overhead, and this is difficult to set up, but this allows me to review individual commits as diffs easily.
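As a concrete illustration of the mechanism, here is a minimal sketch of a git "clean" filter that strips a volatile datestamp line on check-in. The datestamp format and file names are invented for the example; adapt the sed pattern to whatever your tool actually writes:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
# clean: strip the datestamp line when staging; smudge: pass through on checkout
git config filter.strip-date.clean "sed '/^# Date created = /d'"
git config filter.strip-date.smudge cat
printf '*.qpf filter=strip-date\n' > .gitattributes
printf '# Date created = 12:00:00 April 09, 2014\nPROJECT_REVISION = "top"\n' > demo.qpf
git add .gitattributes demo.qpf
# The staged blob no longer contains the datestamp line:
git show :demo.qpf
```

Because the filter runs on `git add`, the working-tree file keeps its datestamp for the tool, while the repository only ever sees the normalized version.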

Introducing these filters later on is possible with git filter-branch, so I wouldn't let that impede development: just check in everything until the filters are in place.

Simon Richter
  • 28,572
  • 1
  • 42
  • 64
  • Excellent answer. I will have to learn how to use these per-file-type filters. If there is any tutorial you can point me to, I would appreciate it. – kjgregory Apr 11 '14 at 13:55
0

The general solution is to:

  • include whatever is needed to rebuild those extra large (often binary) files
  • include a reference to an artifact repository (like Nexus), or any other store of your choice able to keep multiple versions of those large files

Tracking rebuild scripts or artifact references is how you keep your repo a source-management tool.
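The second option can be sketched as follows: a tiny version-controlled script (plus a version pin) fetches the large binary, while the binary itself is .gitignore'd. The server URL and artifact name below are invented for illustration:

```shell
# Version pin: the only value that changes from release to release
ARTIFACT_VERSION="1.4.2"
URL="https://nexus.example.com/repository/fpga/top-${ARTIFACT_VERSION}.sof"
echo "would fetch: $URL"
# Actual download step (requires a real artifact server):
#   curl -fsSL --create-dirs -o build/top.sof "$URL"
```

A collaborator clones the repo, runs the script, and gets exactly the artifact version the commit references, without the binary ever entering git history.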

VonC
  • 1,262,500
  • 529
  • 4,410
  • 5,250
  • To clarify, are you suggesting that I either A) enter into tracking the minimal set of project files necessary to open the project from a fresh clone or B) archive the entire project (e.g. zip it up) and provide it as a starting point for new designers where they would then sync to the latest commit via git? – kjgregory Apr 09 '14 at 15:50
  • @user2635036 most certainly A: for a source management tool, zip isn't very compatible. – VonC Apr 09 '14 at 19:33
  • I'm having a hard time understanding the reference to an artifact repository then. The link provided is using a lot of terminology that is not familiar to me. – kjgregory Apr 09 '14 at 20:37
  • @user2635036 The key concept is how the referential is structured in Git vs. Nexus (i.e. in source control vs. artifact management): see my older answer http://stackoverflow.com/a/13490800/6309 (and its associated link). – VonC Apr 10 '14 at 05:23