We ran into an interesting issue at work recently. Documenting the solution here for my records.
BACKGROUND : We had a table in one of our databases that served as a “hopping” point for some jobs. Data was inserted into this table, and jobs were kicked off at periodic intervals to “process” the data and then delete it.
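For illustration, here is a minimal sketch of what such a staging table might look like. The names (staging_data, batch_id, payload) are hypothetical, not our actual schema:

```sql
-- Hypothetical staging ("hopping") table; names are illustrative only.
-- Producers insert rows; periodic jobs pick up a batch of rows,
-- process them, and then remove them.
CREATE TABLE staging_data (
    id          INTEGER       PRIMARY KEY,
    batch_id    INTEGER,          -- which processing job picks the row up
    payload     VARCHAR(4000),    -- the data to be processed
    created_at  TIMESTAMP
);
```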
CURRENT METHOD : Launch multiple jobs to process the data, with each job deleting its rows as soon as it finishes processing them. This causes lock contention on the table because multiple delete operations occur at the same time, which in turn means the jobs cannot finish processing the data and the table keeps growing.
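A rough sketch of the current pattern, using the hypothetical staging_data table and batch_id column from above (the batch value 42 is just a placeholder):

```sql
-- Each processing job ends by deleting the rows it just handled,
-- so several of these DELETEs run against staging_data at the same
-- time and contend for locks on the table.
DELETE FROM staging_data
 WHERE batch_id = 42;   -- the batch this particular job just processed
COMMIT;
```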
PROPOSED METHOD : Add a new column to the table called “PROCESSED_STATE” and modify the “processing” jobs to set this flag to “Y” as soon as the data is processed. Create a single new job, launched periodically, that deletes every row whose PROCESSED_STATE flag is set to “Y”.
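A sketch of what the proposed change might look like, again using the hypothetical staging_data table and batch_id column from the earlier examples:

```sql
-- One-time schema change: add the flag column, defaulting to 'N'.
ALTER TABLE staging_data ADD PROCESSED_STATE CHAR(1) DEFAULT 'N';

-- Each processing job now only flags its rows instead of deleting them.
UPDATE staging_data
   SET PROCESSED_STATE = 'Y'
 WHERE batch_id = 42;   -- the batch this particular job just processed
COMMIT;

-- The single periodic cleanup job: the only place a DELETE happens.
DELETE FROM staging_data
 WHERE PROCESSED_STATE = 'Y';
COMMIT;
```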
Moral of the story 🙂 .. multiple concurrent deletes on a table are bad. The better way is to have multiple updates and a single delete.