How Partitioning Data Cut DTU Costs in Half


October 23, 2019

Drastically Improve Efficiency with a Partitioning Process

We recently did some work for a customer whose website receives a huge volume of data, roughly 20 to 30 GB every day. They are legally required to keep 21 days of data in the system, and they relied on our SQL team to delete the oldest day of data once it aged past that window. So every day, we were deleting about 20 million records from some very large tables.

However, SQL Server escalates to a table-level lock when you delete millions of records in a single statement, so we had to delete 10,000 records at a time in a loop until all 20 million records were gone (a sketch of that loop follows below). Needless to say, this took hours to complete and ultimately caused DTU utilization issues for the customer.
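
A minimal sketch of that kind of batched delete, assuming a hypothetical dbo.EventLog table with an EventDate column (these names are stand-ins, not the customer's actual schema):

```sql
-- Delete expired rows in small batches so each statement holds
-- locks only briefly instead of escalating to a table lock.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.EventLog
    WHERE EventDate < DATEADD(DAY, -21, CAST(GETDATE() AS date));

    -- @@ROWCOUNT is 0 once no expired rows remain, ending the loop.
    SET @rows = @@ROWCOUNT;
END;
```

Even though each batch finishes quickly, the loop still reads and deletes 20 million rows one batch at a time, which is where the hours of runtime and the DTU pressure came from.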

After examining this cumbersome method, we realized we could introduce a partitioning process and drastically improve the efficiency of this daily task. Because the partitioning column has to be part of every unique index on a partitioned table, we worked with the customer's staff architects to rework their unique indexes, then restructured the tables into partitions spanning seven days each.
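
Here is a rough sketch of that kind of weekly partitioning setup. The table, columns, and boundary dates are hypothetical; note how the date column is folded into the primary key so the unique index aligns with the partitioning column:

```sql
-- Weekly boundaries; RANGE RIGHT puts each boundary date in the
-- partition to its right, so each partition holds seven days.
CREATE PARTITION FUNCTION pfDailyData (date)
AS RANGE RIGHT FOR VALUES ('2019-10-07', '2019-10-14', '2019-10-21', '2019-10-28');

CREATE PARTITION SCHEME psDailyData
AS PARTITION pfDailyData ALL TO ([PRIMARY]);

CREATE TABLE dbo.EventLog (
    EventId   bigint IDENTITY NOT NULL,
    EventDate date          NOT NULL,
    Payload   nvarchar(max) NULL,
    -- The partitioning column (EventDate) must appear in every
    -- unique index, so it leads the clustered primary key here.
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventDate, EventId)
) ON psDailyData (EventDate);
```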

With the indexes redesigned, we can truncate an entire expired partition in milliseconds, because partition truncation is a metadata operation rather than a row-by-row delete. That saves the customer a great deal of money on processing power while improving speed and performance.
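With the table partitioned, the daily purge reduces to something like the sketch below (SQL Server 2016 or later, using the same hypothetical schema as above). Since the expired partition's number is computed at runtime, the sketch builds the TRUNCATE statement dynamically:

```sql
-- $PARTITION maps a date to its partition number under pfDailyData.
DECLARE @p INT =
    $PARTITION.pfDailyData(DATEADD(DAY, -22, CAST(GETDATE() AS date)));

-- Truncate only that partition: a metadata-only operation that
-- completes in milliseconds regardless of row count.
DECLARE @sql nvarchar(200) =
    N'TRUNCATE TABLE dbo.EventLog WITH (PARTITIONS ('
    + CAST(@p AS nvarchar(10)) + N'));';

EXEC sys.sp_executesql @sql;
```

Compare this to the batched-delete loop earlier: instead of touching 20 million rows, the server simply deallocates the partition's pages.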


Overwhelmed with the cost of data archiving?
