Scenario: I have a Power BI dataset/report that uses incremental refresh. I am trying to automate updating its schema (the .pbit file) as part of our CI/CD pipeline. I've looked at using the MicrosoftPowerBIMgmt PowerShell module and followed the docs, but I'm stuck.

Ultimate Goal: Whenever someone updates the dev branch for our Power BI report, I want to push schema-only updates to the DEV and TEST Power BI workspaces.

Background: I have read the MS docs on dataset refreshes, incremental refresh, troubleshooting incremental refresh, and advanced incremental refresh. The last of these points to using the ALM Toolkit to perform metadata-only updates rather than the PowerShell module. Reading through the ALM Toolkit documentation (somewhat confusingly, it refers to itself as BISM Normalizer rather than ALM Toolkit; go to either alm-toolkit.com or bism-normalizer.com to see they are the same tool), I found the command-line syntax to perform the task at hand on page 31.

Syntax: bismnormalizer.exe /SKIP:<MissingInTarget|DifferentDefinitions|MissingInSource> /LOG:logfile /SCRIPT:scriptfile
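For reference, an invocation might look something like the sketch below. The saved-comparison (.bsmn) argument, paths, and file names are placeholders I am assuming, not something verified against the ALM Toolkit docs:

```powershell
# Illustrative only: the saved comparison definition (DevToTest.bsmn) and all file
# names are placeholder assumptions; see the ALM Toolkit docs (page 31) for the exact syntax.
& bismnormalizer.exe 'DevToTest.bsmn' /SKIP:MissingInTarget /LOG:compare.log /SCRIPT:deploy.xmla
```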

Problem: We use the ALM Toolkit (or the BISM Normalizer tool in Visual Studio) to perform this operation manually, and during the compare/update we manually select certain items to skip (such as the database name, environment, RangeStart, and RangeEnd parameters). The command line appears to limit me to three values for the /SKIP parameter: MissingInTarget, MissingInSource, and DifferentDefinitions. These apply to all columns/measures, so I cannot skip certain changed fields while including others.

Is there a way to do this? Maybe by generating the script and then modifying it? Does anybody do this in their CI/CD pipeline?
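Something along these lines is what I have in mind for the "generate and modify" route: produce the script with /SCRIPT, then patch the environment-specific expressions before deploying. A rough PowerShell sketch, assuming the generated file is a TMSL (JSON) createOrReplace command and that a parameter named DatabaseName exists in the model; the property path, names, and values are placeholders to adjust to whatever the generated script actually contains:

```powershell
# Rough sketch: patch environment-specific parameter expressions in the generated
# script before deploying it. Assumes a TMSL (JSON) createOrReplace script and a
# hypothetical 'DatabaseName' parameter; adjust names/paths to your own model.
$scriptPath = '.\deploy.xmla'
$tmsl = Get-Content $scriptPath -Raw | ConvertFrom-Json

foreach ($expr in $tmsl.createOrReplace.database.model.expressions) {
    if ($expr.name -eq 'DatabaseName') {
        # Example value only: set the parameter to the target environment's value
        $expr.expression = '"DW_TEST" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]'
    }
}

$tmsl | ConvertTo-Json -Depth 100 | Set-Content $scriptPath
```

RangeStart, RangeEnd, and any other environment parameters could presumably be handled the same way, setting each expression to the value already deployed in the target workspace.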

jalopy67
  • I can use the /SCRIPT parameter to generate an XMLA script, and I see the "expressions" section where I could update those expressions to match the target environment. How do I then apply the XMLA script? This might be my solution...? – jalopy67 Oct 07 '21 at 00:21
  • I do agree that writing a separate process to generate the XMLA script and then modifying it to remove the commands you don't wish to run is likely the best option. XMLA commands are not overly complicated, so it should not be hard to write something that modifies the script. To run it, you would need to use the [XMLA endpoint](https://learn.microsoft.com/en-us/power-bi/admin/service-premium-connect-tools) associated with the Power BI workspace that your dataset is currently deployed to (see the sketch after these comments). – trenton-ftw Oct 07 '21 at 04:04
  • That being said, I would recommend no longer using a .pbit file to maintain your dataset. The .pbit file is much harder to version control (IMO), and has lots of unneeded metadata that has to be ignored constantly once you move to more advanced incremental refresh strategies. I have had a much easier time using Tabular Editor to maintain a .bim file. ALM Toolkit can also use a .bim file to diff to your deployed dataset the same way that it works with .pbit and .pbix files. – trenton-ftw Oct 07 '21 at 04:08
  • The other advantage to this is that you can use Tabular Editor to retrieve a .bim file for a deployed dataset, even when it is stored in a large dataset format. Something that cannot be done with .pbix or .pbit files. – trenton-ftw Oct 07 '21 at 04:08
  • If you are interested in the .bim file approach then I can write this up into an answer and remove my comments. – trenton-ftw Oct 07 '21 at 04:19
  • That is excellent information. Thank You! I'm going to review your comments with my team (I'm more of an automation guy than a Power BI guy) and see what they say. It's great input. I'll return when I know a bit more. – jalopy67 Oct 07 '21 at 17:07
  • Certainly. People tend to think of this process as Power BI specific, and that is how they get lost. Approaching it as maintaining a tabular data model within Azure Analysis Services will give a much clearer insight into what is going on. My org has designers constantly working on large Power BI datasets (tables with 100M rows and rapidly growing) for which I have to maintain the source data warehouses and the refresh strategies of the resulting Power BI datasets. Happy to move this discussion to chat as well. – trenton-ftw Oct 07 '21 at 18:27
  • [This](https://www.dataunlock.com.au/how-to-implement-source-control-and-ci-cd-for-power-bi/) link provides good steps for portions of what I am describing. Even if your designers are working within Power BI for the dataset design (.pbit/.pbix), they can still convert their dataset to a .bim within Power BI for publishing/automation purposes. – trenton-ftw Oct 07 '21 at 18:38
  • Awesome. Thank you, again! It's a bit for me to chew on (pardon the pun) and I'm still talking with the team. – jalopy67 Oct 07 '21 at 20:26
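Following the comments above, a minimal sketch of applying the modified script through the workspace's XMLA endpoint using Invoke-ASCmd from the SqlServer PowerShell module. The workspace name and credential handling are placeholders; the XMLA endpoint requires a Premium or Premium-per-user capacity with the read-write endpoint enabled, and a service principal or MFA-enabled account would need a different authentication setup than the plain credential shown here:

```powershell
# Minimal sketch: execute the (modified) TMSL/XMLA script against the workspace's
# XMLA endpoint. Workspace name and credentials are placeholders.
Install-Module SqlServer -Scope CurrentUser    # provides Invoke-ASCmd

$workspace = 'powerbi://api.powerbi.com/v1.0/myorg/DEV Workspace'   # placeholder workspace
$cred      = Get-Credential                                         # org account (no MFA)

Invoke-ASCmd -Server $workspace -InputFile '.\deploy.xmla' -Credential $cred
```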

0 Answers