We are loading data from an on-premises SQL Server into a Data Lake (our ingest instance) using ADF. The data is stored as Parquet, but the TX creates one huge Parquet file per table.

Is there a way to load the data so that, when it lands in the Data Lake, each table's Parquet output is split into multiple parts, e.g. table_xxx_0001.parquet, table_xxx_0002.parquet, etc.? (I know that naming is not the Parquet convention; it is just an example.) Is this possible?

The reason we ask: we are having performance issues reading from Parquet into our Azure SQL DB prepare instance. We use ADO.NET to move the data from ingest to prepare.
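For reference, here is a minimal sketch of the copy activity we are running. The dataset names, the activity name, and the `AzureBlobFSWriteSettings` store type are illustrative placeholders, not our exact pipeline (JSON does not allow comments, so the assumptions are called out here instead):

```json
{
    "name": "CopyTableToLake",
    "type": "Copy",
    "inputs": [
        { "referenceName": "OnPremSqlTableDs", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "LakeParquetDs", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": { "type": "SqlServerSource" },
        "sink": {
            "type": "ParquetSink",
            "storeSettings": { "type": "AzureBlobFSWriteSettings" },
            "formatSettings": { "type": "ParquetWriteSettings" }
        }
    }
}
```

If there is a sink setting (for example somewhere under `formatSettings`) that would make the copy write several smaller files per table instead of one large one, that is exactly what we are looking for.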