Tutorial

How to copy tables/fields between data areas


Thomas Lind
Community Manager

You can create as many data areas as needed in your Prepare instance. Each data area shares the same physical database but uses a different schema to store its tables. Transfers between data areas are performed by a “Direct Read” stored procedure that copies the data from one area to another.

Copy a table from one data area to another

A table can be copied from one data area to as many other data areas as needed; the only restriction is that the data areas must be in the same Prepare instance.

The steps to copy a table are as follows.

  1. Open the source data area that contains the table to be copied.
  2. Open the destination data area.
  3. There is more than one way to copy a table; see the following for the specifics of each method.

First Method: Left-click on the table you want to transfer and drag it onto the Tables node of the destination data area.

Second Method: Right-click on the table you want to transfer and drag it onto the Tables node of the destination area. This approach differs from left-clicking in that it displays the “Add New Table” options menu, which allows the user to select one of the following options.

  1. Add new table: Shown in bold; this is the option selected by the left-click method above.

  2. Add new table with field selection: Allows the user to select the specific fields to be added along with the table.

  3. Synchronize with other table: Synchronizes this table with a different table that already exists.

Third Method: Click on the table you want to transfer, drag it to the destination data area, and drop it onto an existing table. This maps the new source table to the existing table and presents the following options.

  1. Smart synchronize with table <table name>: Compares the new field names with the source field names already mapped in the destination table and adds the new fields that match fields from the other mapped tables.
  2. Synchronize with table <table name>: Adds any fields from the source table that match existing fields in the destination table, and also adds the remaining fields as new fields.
  3. Synchronize with <table name> (Only existing fields): Adds only those fields from the source table that match existing fields in the destination table.

The Data Movement area in the right-side pane shows which fields have already been added from a previously transferred table by listing them in bold; see the GIF below. Fields that were not previously transferred can be added by dragging them from the Data Movement area and dropping them on the destination table in the data area.

As many tables as needed can be mapped to a destination table. However, the mapped tables must share the same primary key field in order to implement incremental loading and to execute with history enabled.

Each data area includes other options that control how a table appears. Right-click on a data area and select “Edit Data Area” to display these options.

Enabling “Simple mode” ensures that all new tables added to the data area use this setting. Enabling it on the data area up front can save time later, since configuring a table for this setting after the fact requires more effort.

Incremental load

Incremental data loading can be automated by configuring it on the data area as it is listed in the Ingest instance, or it can be set up manually for each table.

The following items must be present in a data table in order to implement incremental data loading.

  1. Primary Key fields need to be set for all tables that are mapped, and the data types of these primary key fields all need to be the same if more than one source table is mapped to the destination table.
  2. A field must be configured that incremental loading can use to track the start and stop points for each load. In general, the DW_TimeStamp field from the source table can serve as this tracking field; it must be included in the transferred destination table.
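Conceptually, an incremental selection rule reduces to a filter on the tracking field. The sketch below is illustrative only (the table and variable names are placeholders, and the procedure the tool actually generates is more involved, as shown later in this article):

```sql
-- Illustrative sketch of an incremental selection rule, not generated code.
DECLARE @lastLoaded datetime;

-- High-water mark: the newest timestamp already present in the destination.
SET @lastLoaded = (SELECT MAX([DW_TimeStamp]) FROM [Present].[ExchangeRates]);

-- Pick up only rows newer than the last load; if the destination is empty,
-- @lastLoaded is NULL and the query falls back to a full load.
SELECT *
FROM [Landing].[ExchangeRates]
WHERE @lastLoaded IS NULL
   OR [DW_TimeStamp] > @lastLoaded;
```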

Automate options

Automated incremental data loading can be implemented using either of the following two methods.

  • Add suggested constraints: Attempts to locate any primary key fields that are not already identified on the source tables; however, this option will not normally find additional key fields.
  • Add incremental load: Automates the setup of incremental loading across all the tables in the data area. This approach can streamline an implementation, since no prior setup is required and the setup dialog is sufficient to configure the process.

Manually Configured Incremental Data Loading

If incremental data loading is not configured as part of the data area, then incremental data loading can be set on the individual tables in the data area. In this scenario, please note that the incremental data loading setting is not automatically carried over when adding a table from one data area to another. If incremental data loading was implemented on the source data table, then that setting will need to be reapplied on the transferred table in the new data area in order for incremental data loading to take effect.

The following steps can be used to implement incremental data loading for a table that does not have automated incremental data loading implemented as described above.

  1. Right-click on the table and select “Add Incremental selection rule” from the context menu.
  2. Reply Yes to the prompt about making changes to the table; this confirms that “Enable incremental load” will be enabled and that the “Truncate valid table before data cleansing” setting will be disabled.
  3. Select an appropriate field for the incremental selection rule from the Data Movement menu in the right-side pane.

SQL Database: What actions are being performed in the database

As previously stated, table and field transfers within a Prepare instance are executed by the Direct Read stored procedure, which is available in the Prepare instance storage.

To locate this stored procedure, open the Prepare instance storage in SSMS, expand the Stored Procedures node, and look under the relevant schema.

There is one procedure generated for each mapped table.
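Rather than browsing the tree, a catalog query can list the generated procedures directly. The usp_CrDM prefix below matches the procedure names shown in this article; adjust the pattern if your naming differs:

```sql
-- List the generated Direct Read procedures together with their schemas.
SELECT SCHEMA_NAME(p.schema_id) AS [schema], p.name AS [procedure]
FROM sys.procedures AS p
WHERE p.name LIKE 'usp_CrDM%'
ORDER BY [schema], [procedure];
```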

Simple mode

This is the generated procedure for the ExchangeRates table from the GIFs above, with two tables mapped and simple mode enabled.

/****** Object:  StoredProcedure [Present].[usp_CrDM_Present_ExchangeRates_Landing_Landing_ExchangeRates]    Script Date: 2023-03-28 11:18:06 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [Present].[usp_CrDM_Present_ExchangeRates_Landing_Landing_ExchangeRates]
@batchSize INT
AS
-- Copyright 2011 timeXtender a/s
-- All rights reserved
--
-- This code is made available exclusively as an integral part of
-- timeXtender. You may not make any other use of it and
-- you may not redistribute it without the written permission of
-- timeXtender a/s.


DECLARE @batchMaxId BIGINT, @batchFirstId BIGINT, @batchNextId BIGINT
SET @batchMaxId = (SELECT MAX([DW_Id]) FROM [Landing].[ExchangeRates])
SET @batchFirstId = 1
SET @batchNextId = @batchSize
WHILE @batchFirstId <= @batchMaxId
BEGIN
INSERT INTO [Present].[ExchangeRates] WITH (TABLOCK)
(
	[CurrencyCode]
	,[ToDate]
	,[ExchangeRate]
	,[DW_Batch]
	,[DW_SourceCode]
)
SELECT
	[CurrencyCode]
	,[ToDate]
	,[ExchangeRate]
	,[DW_Batch]
	,[DW_SourceCode]
FROM [Landing].[DM_Landing_Landing_ExchangeRates_Present_Present_ExchangeRates_T]
WHERE [DW_Id] BETWEEN @batchFirstId AND @batchNextId

SET @batchFirstId = @batchFirstId + @batchSize
SET @batchNextId = @batchNextId + @batchSize
END

The above shows how the stored procedure copies the data from the source's transfer table view into the destination's valid table.

Without Simple mode

This is the same ExchangeRates table as above, except that here the table is not configured to use simple mode.

/****** Object:  StoredProcedure [Present].[usp_CrDM_Present_ExchangeRates_Landing_Landing_ExchangeRates]    Script Date: 2023-03-28 12:50:52 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [Present].[usp_CrDM_Present_ExchangeRates_Landing_Landing_ExchangeRates]
@batchSize INT
AS
-- Copyright 2011 timeXtender a/s
-- All rights reserved
--
-- This code is made available exclusively as an integral part of
-- timeXtender. You may not make any other use of it and
-- you may not redistribute it without the written permission of
-- timeXtender a/s.


DECLARE @batchMaxId BIGINT, @batchFirstId BIGINT, @batchNextId BIGINT
SET @batchMaxId = (SELECT MAX([DW_Id]) FROM [Landing].[ExchangeRates])
SET @batchFirstId = 1
SET @batchNextId = @batchSize
WHILE @batchFirstId <= @batchMaxId
BEGIN
INSERT INTO [Present].[ExchangeRates_R] WITH (TABLOCK)
(
	[CurrencyCode]
	,[ToDate]
	,[ExchangeRate]
	,[IncrementalTimeStamp]
	,[DW_Batch]
	,[DW_SourceCode]
)
SELECT
	[CurrencyCode]
	,[ToDate]
	,[ExchangeRate]
	,[IncrementalTimeStamp]
	,[DW_Batch]
	,[DW_SourceCode]
FROM [Landing].[DM_Landing_Landing_ExchangeRates_Present_Present_ExchangeRates_T]
WHERE [DW_Id] BETWEEN @batchFirstId AND @batchNextId

SET @batchFirstId = @batchFirstId + @batchSize
SET @batchNextId = @batchNextId + @batchSize
END

The change is that the rows are now added to the raw (_R) version of the table instead of directly into the valid one.

Incremental load

Here the same ExchangeRates table has been changed to use incremental load; specifically, the Automate feature was used to set it up.

/****** Object:  StoredProcedure [Present].[usp_CrDM_Present_ExchangeRates_Landing_Landing_ExchangeRates]    Script Date: 2023-03-28 11:28:50 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [Present].[usp_CrDM_Present_ExchangeRates_Landing_Landing_ExchangeRates]
@batchSize INT
AS
-- Copyright 2011 timeXtender a/s
-- All rights reserved
--
-- This code is made available exclusively as an integral part of
-- timeXtender. You may not make any other use of it and
-- you may not redistribute it without the written permission of
-- timeXtender a/s.

-- fullLoadIncrementalRule1 begin
DECLARE @fullLoadIncrementalRule1 bit
SET @fullLoadIncrementalRule1 = 0

DECLARE @IncrementalRule1Field1 datetime
SET @IncrementalRule1Field1 = (SELECT TOP 1 [IncrementalTimeStamp] FROM [Present].[ExchangeRates_I] WHERE [DW_MappingId] = 'ec1a0441-a886-4c5f-9681-fee0734d9ea1' AND [DW_INC_Status] IS NOT NULL
ORDER BY [DW_INC_CreateTime] DESC)

IF (@IncrementalRule1Field1 IS NULL)
BEGIN
SET @fullLoadIncrementalRule1 = 1
END
-- fullLoadIncrementalRule1 END


DECLARE @batchMaxId BIGINT, @batchFirstId BIGINT, @batchNextId BIGINT
SET @batchMaxId = (SELECT MAX([DW_Id]) FROM [Landing].[ExchangeRates])
SET @batchFirstId = 1
SET @batchNextId = @batchSize
WHILE @batchFirstId <= @batchMaxId
BEGIN
INSERT INTO [Present].[ExchangeRates_R] WITH (TABLOCK)
(
	[CurrencyCode]
	,[ToDate]
	,[ExchangeRate]
	,[IncrementalTimeStamp]
	,[DW_Batch]
	,[DW_SourceCode]
)
SELECT
	[CurrencyCode]
	,[ToDate]
	,[ExchangeRate]
	,[IncrementalTimeStamp]
	,[DW_Batch]
	,[DW_SourceCode]
FROM [Landing].[DM_Landing_Landing_ExchangeRates_Present_Present_ExchangeRates_T]
WHERE ((@fullLoadIncrementalRule1 = 1 OR ([IncrementalTimeStamp] > @IncrementalRule1Field1)))
AND [DW_Id] BETWEEN @batchFirstId AND @batchNextId

SET @batchFirstId = @batchFirstId + @batchSize
SET @batchNextId = @batchNextId + @batchSize
END

The procedure is now slightly more complex: the timestamp field is used to track the previous load so that only new data is loaded.
