
I'm working on automating deployment for dev and prod, with jobs that have to be scp'd onto specific servers for each environment. The script associated with each sqoop job needs to change based on dev vs. prod. Currently, I have a git repo containing a dev folder and a prod folder: approved dev changes are copied into the prod folder with the variables (references to the dev database vs. the prod database) changed, and two Jenkins pipelines with independent triggers pick up each folder. This is incredibly hacky.

My current plan is to consolidate into a single folder, replace each variable with a placeholder such as %DBPREFIX%, and then have each pipeline do a regex find-and-replace of all matches with its environment's database prefix at build time.
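A minimal sketch of that find-and-replace step (the file name, table name, and prefix value are made-up examples, not from my actual repo):

```shell
DBPREFIX="dev"   # each Jenkins pipeline would set its own value

# Stand-in for one of the hive scripts in the consolidated folder:
cat > load_orders.hql <<'EOF'
LOAD DATA INPATH '/staging/orders' INTO TABLE %DBPREFIX%_db.orders;
EOF

# Substitute every occurrence of the placeholder at build time:
sed "s/%DBPREFIX%/${DBPREFIX}/g" load_orders.hql > load_orders.resolved.hql

cat load_orders.resolved.hql
```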

The files that need to be changed are shell scripts and hive scripts, so I can't just define an environment variable within the Jenkins node shell.

Is there a better way to handle this?

tl;dr: I need to set variables in different files that can be changed automatically through a Jenkins pipeline.

StephenKing
Cyrusc

1 Answer


You can actually reference environment variables in both shell scripts and hive scripts.

In a shell script, to reference $HOT_VAR:

echo "$HOT_VAR"

In a hive script, to reference $HOT_VAR:

select * from foo where day >= '${env:HOT_VAR}'

I'm not sure that's a realistic example of a hive script; you may want to see https://stackoverflow.com/a/12485751/6090676. :)
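To illustrate why exporting once covers both file types (variable name and value are invented for this sketch): any child process the Jenkins shell step launches inherits exported variables, whether that child is your shell script or the hive CLI.

```shell
# Export once in the Jenkins shell step; child processes inherit it.
export HOT_VAR="prod_db"

# A launched shell script sees it:
sh -c 'echo "prefix is $HOT_VAR"'

# hive would see it too via ${env:HOT_VAR} when invoked as, e.g.:
#   hive -f query.hql
```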

If you are really unable to use environment variables for some reason, you could use command-line tools like awk, sed, or perl (why do people always suggest perl instead of ruby?) to search and replace in the files you need to configure, probably driven by environment variables.

burnettk