
This article describes how to set up Prepare instances with Fabric Lakehouse storage.

Fabric Lakehouse Prepare instance storage is available as part of the Standard, Premium, or Enterprise packages.

A public preview of this feature is currently available. It supports the following functionality:

  • Data extraction from Ingest instances using Fabric Lakehouse storage
  • All standard functionality except related records and hierarchy tables.
  • If you use the data in a Delivery instance, Power BI endpoints can be used.

When a Prepare instance uses Fabric Lakehouse storage, the supported features work the same as with any other storage type, with the following exception:

  • 'nchar' columns always have their predefined length, with spaces added to the value as padding. For example, a 'CustomerName' column with the data type 'nchar(20)' would store TimeXtender as "TimeXtender" followed by nine trailing spaces. When you create selection rules on 'nchar' columns, remember to include the padding spaces in the rule (see the sketch below).
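
Below is a minimal Python sketch of the padding behavior described above. The column name and the length 20 come from the example; the comparison logic is purely illustrative and is not how TimeXtender evaluates selection rules.

    # Illustrative only: how nchar(20) padding affects value comparisons
    value = "TimeXtender"                      # 11 characters
    stored = value.ljust(20)                   # nchar(20) pads to 20 characters with 9 trailing spaces
    print(repr(stored))                        # 'TimeXtender         '
    print(stored == "TimeXtender")             # False - the unpadded value does not match
    print(stored == "TimeXtender".ljust(20))   # True - include the padding in the selection rule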

Prerequisites

  • Your Ingest instance must also use Fabric Lakehouse storage. Fabric Prepare instances can currently only use data from Ingest instances with Fabric storage; combining them with non-Fabric Ingest instances is not supported.
  • Create an App Registration in the Azure Portal. It is recommended to use a dedicated app registration so that this account is the only one with access to the client credentials.
  • In the Fabric/Power BI Admin Portal, enable "Allow service principals to use Power BI APIs", as described here, to grant the app registration access to the Fabric workspace.
  • Create a workspace, or navigate to an existing workspace, in the Fabric portal and select Manage access. Grant the App Registration account Member access to the Fabric workspace (a quick way to verify this access is sketched after this list).
  • The runtime version of the Fabric workspace must be set to 1.2. Navigate to your Fabric workspace, click Workspace settings, and under Data Engineering/Science, click Spark settings, select Runtime version 1.2, and click Save.
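
The sketch below, which assumes the msal and requests Python packages, is one way to verify that the app registration can authenticate and see the Fabric workspace before you continue; it is not part of TimeXtender itself. The tenant ID, application ID, and client secret placeholders are the values from the app registration created above.

    import msal
    import requests

    # Placeholders - substitute the values from your own app registration
    TENANT_ID = "<tenant-id>"
    CLIENT_ID = "<application-id>"
    CLIENT_SECRET = "<application-key>"

    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    # Acquire a client-credentials token for the Fabric REST API
    token = app.acquire_token_for_client(scopes=["https://api.fabric.microsoft.com/.default"])

    # List the workspaces the service principal can access; your Fabric
    # workspace should appear here once Member access has been granted
    response = requests.get(
        "https://api.fabric.microsoft.com/v1/workspaces",
        headers={"Authorization": f"Bearer {token['access_token']}"},
    )
    print(response.status_code)
    print(response.json())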

Add Prepare Instance with Fabric Lakehouse Storage

Note: You can connect to an existing Lakehouse that has been created directly in the Fabric Portal, or you can choose to create the Lakehouse within TimeXtender Data Integration (TDI).

  1. Add a Prepare instance and select the storage type Microsoft Fabric Storage.
  2. Enter the workspace name for the existing Fabric workspace.
  3. Provide a name for the Lakehouse.

    Warning: Make sure to use different Lakehouses for your Ingest and Prepare instances to avoid table name clashes.

  4. Enter the Tenant ID for the tenant associated with Fabric.
  5. Enter the Application ID for the App Registration.
  6. Enter the Application Key (i.e., the client secret value) associated with the App Registration.
  7. Open TimeXtender Data Integration and open the instance you just created.
  8. If you haven't created the Lakehouse already, you can do it now by right-clicking the instance in the Solution Explorer, clicking Edit Instance, and then clicking Create Storage. Keep the Use Lakehouse schemas property checked and click OK. Keeping it checked is recommended, as it organizes tables into sub-folders (schemas) in the Lakehouse instead of creating all tables in the 'dbo' schema, where tables with the same name from multiple data areas risk overwriting each other.

Objects deployed within Fabric Lakehouse

Upon deployment of a table in a Fabric Prepare instance, a Spark-based Fabric notebook is created in the workspace for each deployed table, named using the following format: TimeXtender_<Lakehouse Name>_<TABLE/VIEW>_<Data Area>_<Table Name>. To view the notebooks, navigate to your Fabric Lakehouse, click Open notebook, select Existing notebook, and search for the relevant notebook.
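
To make the naming format concrete, the short Python sketch below builds the expected notebook name for a hypothetical table; the lakehouse, data area, and table names are made up.

    # Hypothetical values - substitute your own lakehouse, data area, and table names
    lakehouse_name = "PrepareLakehouse"
    object_type = "TABLE"          # TABLE or VIEW
    data_area = "Sales"
    table_name = "Customer"

    notebook_name = f"TimeXtender_{lakehouse_name}_{object_type}_{data_area}_{table_name}"
    print(notebook_name)           # TimeXtender_PrepareLakehouse_TABLE_Sales_Customer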

When a table is executed in a Fabric Prepare instance, the notebook created for it on deployment is run. Executing the notebook creates the Lakehouse table and populates it with data from the Ingest instance's Fabric storage.

Note: Unlike other storage types, tables are not created in the Fabric Lakehouse until you execute the table.
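
As a conceptual illustration only, the PySpark sketch below shows the kind of read-and-write pattern described above: reading a table from the Ingest instance's lakehouse and writing it as a Delta table in the Prepare instance's lakehouse. It is not the code TimeXtender generates, and the lakehouse and table names are placeholders.

    # Conceptual sketch only - the generated TimeXtender notebooks contain their own logic
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Read the source table from the Ingest instance's lakehouse (placeholder names)
    source_df = spark.read.table("IngestLakehouse.Sales_Customer")

    # Write the result as a Delta table in the Prepare instance's lakehouse
    source_df.write.format("delta").mode("overwrite").saveAsTable("PrepareLakehouse.Sales_Customer")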

If a table is dragged into the data area without any transformations, selection rules, incremental rules, or other modifications, a shortcut to the table in the Ingest instance's Fabric lakehouse is created in the Prepare instance's Fabric lakehouse, rather than a Delta Parquet table.

Views and stored procedures are deployed to the SQL analytics endpoint for the lakehouse. To review the views and stored procedures deployed behind the scenes, navigate to your Fabric lakehouse, click Settings, and select SQL analytics endpoint. Copy the connection string, paste it into SSMS, and connect using the Microsoft Entra MFA authentication option.
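
If you prefer to inspect the endpoint from code instead of SSMS, the sketch below assumes the pyodbc package and ODBC Driver 18 for SQL Server, connects with interactive Microsoft Entra authentication, and lists the deployed views. The server and database placeholders are the connection string and lakehouse name from the steps above.

    import pyodbc

    # Placeholders - use your own SQL analytics endpoint connection string and lakehouse name
    conn_str = (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=<sql-analytics-endpoint>;"
        "DATABASE=<lakehouse-name>;"
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;"
    )

    with pyodbc.connect(conn_str) as connection:
        cursor = connection.cursor()
        # List the views deployed behind the scenes
        cursor.execute("SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.VIEWS")
        for schema, name in cursor.fetchall():
            print(f"{schema}.{name}")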
