
This article describes how to set up Prepare instances with Fabric storage.

Fabric Prepare instance storage is available as part of the Standard, Premium, or Enterprise package.

The following functionality is currently supported for Fabric Prepare instance storage when using the TimeXtender Data Integration 6814.1 release or later:

  • Data extraction from Ingest instances using Fabric storage
  • Simple Mode tables

Prerequisites

  • Your Ingest instance must also use Fabric Lakehouse storage. Currently, Fabric Prepare instances can only use data from Ingest instances with Fabric storage; combining a Fabric Prepare instance with a non-Fabric Ingest instance is not supported.
  • You must set up an Azure App Registration as described here
  • In the Fabric/Power BI Admin Portal, enable “allow service principals to use Power BI APIs” as described here, in order to grant the App Registration access to the Fabric workspace.
  • Create a workspace, or navigate to an existing workspace, in the Fabric portal and select Manage access. Grant the App Registration account Member access, and the non-MFA user Contributor access, to the Fabric workspace (an API-based sketch of this grant follows this list).
  • The Runtime version of the Fabric workspace needs to be set to 1.2. Navigate to your Fabric workspace, click Workspace settings, and under Data Engineering/Science, click Spark settings, select Runtime version 1.2, and click Save.
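As an alternative to the Manage access dialog described above, the Member grant for the App Registration could in principle be scripted against the Power BI REST API. The sketch below is illustrative only: every ID and credential is a placeholder, and the token must come from a principal that is already an Admin of the workspace, so a separate, hypothetical "admin" automation app is assumed here.

```python
# Illustrative sketch (not a required step): adding the App Registration's service
# principal as a Member of the Fabric workspace via the Power BI REST API instead
# of the Manage access dialog. All values are placeholders.
import msal
import requests

TENANT_ID = "<tenant-id>"
ADMIN_CLIENT_ID = "<admin-automation-app-id>"        # hypothetical app that is already a workspace Admin
ADMIN_CLIENT_SECRET = "<admin-automation-app-secret>"
WORKSPACE_ID = "<fabric-workspace-id>"
TX_APP_OBJECT_ID = "<service-principal-object-id>"   # service principal of the TimeXtender App Registration

# Acquire an app-only token for the Power BI / Fabric API.
admin_app = msal.ConfidentialClientApplication(
    ADMIN_CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=ADMIN_CLIENT_SECRET,
)
token = admin_app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)["access_token"]

# Grant the service principal Member access to the workspace.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/users",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "identifier": TX_APP_OBJECT_ID,
        "principalType": "App",
        "groupUserAccessRight": "Member",
    },
)
resp.raise_for_status()
print("App Registration added to the workspace as Member")
```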

Setup Fabric Storage for a Prepare instance

  1. Add a Prepare instance with Fabric storage in the TimeXtender Portal
  2. Enter the workspace name for the existing Fabric workspace
  3. Provide a name for the Lakehouse

    Warning: Make sure to use different Lakehouses for your Ingest and Prepare instances to avoid table name clashes.

    Note: You can connect to an existing Lakehouse that has been created directly in the Fabric Portal, or you can choose to create the Lakehouse within TimeXtender Data Integration (TDI).

  4. Enter the user name and password for the non-MFA user that was set up as an owner in the App Registration
  5. Enter the Tenant ID for the tenant associated with Fabric
  6. Enter the Application ID for the App Registration
  7. Enter the Application Key (i.e. the client secret value) associated with the App Registration
  8. Open TimeXtender Data Integration and open your Fabric Prepare instance
  9. Right-click on your Fabric Prepare instance and select Authenticate, log in with the non-MFA user, and accept the permissions it requests in order to give your App Registration the correct scopes (a sketch of this sign-in flow follows these steps)
  10. If you haven't created the Lakehouse already, you can do it now by right-clicking on the Prepare instance in the Solution Explorer and selecting Edit Instance, and then Create Storage
  11. Keep the Use Lakehouse schemas property checked and click OK. This is recommended because it organizes tables into sub-folders in the Lakehouse, giving each data area its own schema, instead of creating every table in the dbo schema, where tables with the same name in different data areas could overwrite each other.
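The non-MFA user in steps 4 and 9 points to a username/password style sign-in, which does not work for accounts that require MFA. If you want to sanity-check the Tenant ID, Application ID, and user credentials before entering them in the portal, the following is a minimal sketch using MSAL's username/password (ROPC) flow; it assumes the App Registration allows public client flows, and the scope shown is an assumption for the Power BI/Fabric API.

```python
# Hypothetical sketch: verifying the tenant, application, and non-MFA user
# credentials with MSAL's username/password (ROPC) flow. ROPC only works for
# accounts without MFA, which is why a non-MFA user is required. Replace the
# placeholders with real values before running.
import msal

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<application-id>"
USERNAME = "<non-mfa-user@yourtenant.com>"
PASSWORD = "<password>"

app = msal.PublicClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)
result = app.acquire_token_by_username_password(
    USERNAME,
    PASSWORD,
    scopes=["https://analysis.windows.net/powerbi/api/.default"],
)

if "access_token" in result:
    print("Credentials are valid; token acquired for the non-MFA user.")
else:
    print("Token request failed:", result.get("error_description"))
```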

Objects deployed within Fabric Lakehouse

Upon deployment of a table in a Fabric Prepare instance, a Spark-based Fabric notebook is created in the workspace and named using the following format: TimeXtender_<Lakehouse Name>_<TABLE/VIEW>_<Data Area>_<Table Name>. A notebook is created for each table that is deployed. To view the notebooks, navigate to your Fabric Lakehouse, click Open notebook, select Existing notebook, and search for the relevant notebook.
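To locate these notebooks programmatically rather than through the Open notebook dialog, the sketch below lists workspace items with the Fabric REST API and filters on the TimeXtender_ prefix. The endpoint, query parameter, and app-only authentication are based on the public Fabric API and should be treated as assumptions; all IDs are placeholders.

```python
# Hypothetical sketch: listing the TimeXtender-generated notebooks in a workspace
# with the Fabric REST API (list items), filtering on the naming convention
# TimeXtender_<Lakehouse Name>_<TABLE/VIEW>_<Data Area>_<Table Name>.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<application-id>"
CLIENT_SECRET = "<client-secret>"
WORKSPACE_ID = "<fabric-workspace-id>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://api.fabric.microsoft.com/.default"]
)["access_token"]

resp = requests.get(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/items",
    headers={"Authorization": f"Bearer {token}"},
    params={"type": "Notebook"},
)
resp.raise_for_status()

for item in resp.json().get("value", []):
    if item["displayName"].startswith("TimeXtender_"):
        print(item["displayName"], item["id"])
```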

When a table is executed within a Fabric Prepare instance, the notebooks created on deployment are run. Executing a notebook creates the Lakehouse table and populates it with data from the Ingest instance's Fabric storage.

Note: Unlike other storage types, tables are not created in the Fabric Lakehouse until you execute the table.
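To make the execution step concrete, here is an illustrative PySpark fragment of what such a notebook conceptually does: read from the Ingest instance Lakehouse and write a Delta table into the Prepare instance Lakehouse. This is not the code TimeXtender generates; the Lakehouse, data area, and table names are placeholders, and it assumes both Lakehouses are attached to the notebook.

```python
# Illustrative only: NOT the code TimeXtender generates. Lakehouse and table
# names are placeholders; both Lakehouses are assumed to be attached to the
# Fabric notebook so they can be referenced by name. `spark` is the session
# provided by the Fabric notebook runtime.

# Read the source table from the Ingest instance Lakehouse.
source_df = spark.sql("SELECT * FROM IngestLakehouse.Customer")

# Apply hypothetical transformations defined in the Prepare instance.
prepared_df = source_df.filter("IsActive = 1")

# Write the result as a Delta table in the Prepare instance Lakehouse.
(
    prepared_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("PrepareLakehouse.DataArea_Customer")
)
```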

If a table is dragged into the data area without any transformations, selection rules, incremental rules, or other modifications, then a shortcut to the table in the Ingest instance Fabric Lakehouse is created in the Prepare instance Fabric Lakehouse, rather than a Delta Parquet table.
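For context, a shortcut is a OneLake reference to data that lives in another item, so no data is copied. The sketch below shows what creating such a shortcut could look like through the OneLake Shortcuts REST API; TimeXtender does this for you on execution, so this is purely illustrative, the request shape is an assumption based on the public API, and every ID and name is a placeholder.

```python
# Hypothetical sketch: creating a OneLake shortcut from the Prepare Lakehouse to a
# table in the Ingest Lakehouse. `token` is assumed to be a Fabric API bearer
# token acquired as in the earlier sketches; all IDs and names are placeholders.
import requests

PREPARE_WORKSPACE_ID = "<prepare-workspace-id>"
PREPARE_LAKEHOUSE_ID = "<prepare-lakehouse-id>"
INGEST_WORKSPACE_ID = "<ingest-workspace-id>"
INGEST_LAKEHOUSE_ID = "<ingest-lakehouse-id>"
token = "<fabric-api-bearer-token>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{PREPARE_WORKSPACE_ID}"
    f"/items/{PREPARE_LAKEHOUSE_ID}/shortcuts",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "path": "Tables",               # create the shortcut under Tables
        "name": "DataArea_Customer",    # hypothetical shortcut/table name
        "target": {
            "oneLake": {
                "workspaceId": INGEST_WORKSPACE_ID,
                "itemId": INGEST_LAKEHOUSE_ID,
                "path": "Tables/Customer",  # source table in the Ingest Lakehouse
            }
        },
    },
)
resp.raise_for_status()
print("Shortcut created:", resp.json())
```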

Views and stored procedures are deployed to the SQL analytics endpoint for the Lakehouse. To review the views and stored procedures deployed behind the scenes, navigate to your Fabric Lakehouse, click Settings, and select SQL analytics endpoint. Copy the connection string, paste it into SSMS, and connect using the Microsoft Entra MFA authentication option.
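If you would rather inspect these objects from a script than from SSMS, the following is a small sketch using pyodbc with interactive Microsoft Entra authentication against the SQL analytics endpoint; the server and database values are placeholders taken from the connection string and the Lakehouse name.

```python
# Hypothetical sketch: listing the deployed views and stored procedures on the
# SQL analytics endpoint with pyodbc and interactive Microsoft Entra sign-in.
# Server and database values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<sql-analytics-endpoint-connection-string>;"
    "Database=<lakehouse-name>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)
cursor = conn.cursor()

# List views (type 'V') and stored procedures (type 'P') per schema.
cursor.execute(
    "SELECT s.name AS schema_name, o.name AS object_name, o.type_desc "
    "FROM sys.objects o JOIN sys.schemas s ON o.schema_id = s.schema_id "
    "WHERE o.type IN ('V', 'P') ORDER BY s.name, o.type_desc, o.name"
)
for schema_name, object_name, type_desc in cursor.fetchall():
    print(f"{type_desc}: {schema_name}.{object_name}")
```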

 
