Hi,
if your project is large - in the sense that it contains many objects - or your repository is large, this comes with extra memory use both for the TimeXtender.exe processes and on the SQL Server instance hosting the repository databases.
If you are only hosting TimeXtender on the affected virtual machine and your database instance is elsewhere, you would likely need to increase memory to the VM.
What specs does your VM have? Are there many users working at the same time (or leaving their sessions open)?
Hi Rory
The Repository database in Production is just under 9GB. I guess this is considered large. The TimeXtender application and database are on the same server. This setup is the same for their Dev, QA and Production environments.
I will confirm with them the specs for the Production VM as I do not have access. We have up to 5 people developing but maybe on 2-3 active sessions at one time.
Is there a way to manage the Repository database size to make this more efficient? Compression? Shrinking? Or is it just best to increase memory to the VM?
Kerry
Hi @rory.smith
Do you know if there are any Tools or Scripts provided by TimeXtender to cleanup Version and Execution History from the Repository Database older than a year?
Kerry
Hi,
for VMs running SQL Server I aim to keep memory use under 85% to prevent Windows from swapping to virtual memory. Paradoxically, it can be faster to configure SQL Server to use less memory if you are normally at or over 90% RAM consumption. This means capping SQL Server's memory allocation so that a normal load caused by developers does not push total RAM consumption over 80%. If that cap results in SQL Server needing to perform a lot of disk I/O, more RAM is required.
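As a sketch, capping SQL Server's memory is done with `sp_configure 'max server memory (MB)'`; the 12 GB value below is only an example - size it so a normal workload stays under the ~80% target for your VM:

```sql
-- Enable advanced options so 'max server memory' can be set
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: cap SQL Server at 12 GB (12288 MB); adjust to your VM size
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```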
Each open TimeXtender instance consumes RAM, and so do scheduled executions. Certain operations consume substantial memory (environment transfer, lineage, export deployment steps), which could push the system over the edge.
You can clean execution logs from the GUI (expect to be patient); there is also a community post floating around with a query to clean those tables. To remove versions from the repository, you need to export your projects and import them into a new repository. If you have many Global Connections, that can be quite some work.
I would not tinker with the repository database, most likely you just need to scale up your VM.
Hi @rory.smith I have been removing the execution logs in annual batches as we had logs dating back to 2019. I kicked off the final delete for 2023 logs last night at 9am and it is still running. Am I to be that patient or do I need to kill the process?
Kerry
Hi @kerry_bespoke ,
depending on the scaling of the database, that sounds familiar. You can check what is going on in detail in SSMS by running sp_who2, or via the Query Store expensive-query and wait-stats reports. If you kill the process and it is running in a transaction, it may roll back, which can also take a long time.
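A minimal way to inspect the running delete beyond sp_who2 is the sys.dm_exec_requests DMV; the session_id 57 below is a placeholder - take the real one from the sp_who2 output. Note that percent_complete is only populated for certain commands (e.g. ROLLBACK, BACKUP), not plain DELETEs, but the wait info still shows whether the session is progressing:

```sql
-- Replace 57 with the session_id of the delete, as shown by sp_who2
SELECT r.session_id,
       r.command,
       r.status,
       r.wait_type,
       r.wait_time,
       r.percent_complete,   -- populated for ROLLBACK/BACKUP etc., not plain DELETEs
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id = 57;
```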
Additionally, if your repository database is in Full recovery mode but you never run log backups, you might have gigantic log files adding further delay.
@rory.smith The Repository is in Simple mode.
No scheduled jobs are able to run with the Delete task still going.
Is there no way to see how much longer the task will take or relieve the server somehow to let a job run?
Kerry
Hi @kerry_bespoke ,
you could query the row count of the log details table a few times, from a session that does not lock the table, to check how many rows are being deleted per 15 minutes or so. If you are running the cleanup through the GUI, you can kill it and apply this script:
Do note that this is not a supported way of cleaning the logs. If you are already running this script, then you are somewhat stuck I think.
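For reference, the row-count sampling suggested above could look like the query below. The table name dbo.ExecutionPackageLogDetails is an assumption - substitute the actual log details table in your repository database. NOLOCK avoids blocking the running delete, at the cost of a dirty (approximate) count:

```sql
-- Table name is a placeholder; check your repository database for the real one.
SELECT COUNT_BIG(*) AS remaining_rows,
       SYSDATETIME() AS sampled_at
FROM dbo.ExecutionPackageLogDetails WITH (NOLOCK);
-- Run every 15 minutes or so; the difference between samples is the delete rate.
```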