Friday, July 1, 2016

Creating a Singleton table-row

One of the nice design patterns from Java is the Singleton. I don't see it as much in new code, but I like to use it in my database designs. SQL Server has no supported method for building a singleton, so I have manufactured a somewhat clumsy way to build one.

The idea is simply a one-row table, like DUAL in Oracle, that cannot be deleted, truncated, or appended to. It requires a seed value once.
Here is the SQL to build the object:

CREATE TABLE Loading._Batch_Run_Master_Key(
 batch_run_master_key_id AS ((1)) PERSISTED PRIMARY KEY,
 cur_batch_run_id int NULL,
 prev_batch_run_id int NULL
)
GO
Factors that make this a singleton:
  1. The master key is a computed column. It cannot be assigned a value, so it will always be 1. It is persisted so that we can apply a foreign key reference constraint against it.
  2. The application of a primary key constraint to this column is partly documentation, in that it tells us that this column value must be unique, and since it is always '1', then there can only ever be one row.
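We can see the first factor in action by attempting to insert an explicit value into the computed column (a sketch; the exact error wording varies by SQL Server version):

```sql
-- Attempting to set the computed key directly is rejected outright:
INSERT INTO Loading._Batch_Run_Master_Key(batch_run_master_key_id, cur_batch_run_id)
VALUES (2, NULL);
-- SQL Server raises an error along the lines of Msg 271:
-- the column cannot be modified because it is a computed column.
```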

The listing above prevents the table from ever exceeding the one-row limit. It does not seed the initial row; SQL Server cannot do that declaratively, whereas in C# we could simply write "int[] x = {1,2,3};"
Here we instantiate our class-like table with minimal values. An INSERT statement requires at least one column, even though all we really want are placeholders.

INSERT INTO Loading._Batch_Run_Master_Key(cur_batch_run_id) 
 VALUES(DEFAULT);

This creates, somewhat clumsily, our Singleton row. If I run this command again, I get the following error:
Msg 2627, Level 14, State 1, Line 32
Violation of PRIMARY KEY constraint 'PK___Batch_R__1C148BAAC381C4FC'. Cannot insert duplicate key in object 'Loading._Batch_Run_Master_Key'. The duplicate key value is (1).
The statement has been terminated.


This partially meets the requirements for our Singleton: More than one row cannot be inserted.
Our other use cases are:
  1. Delete
  2. Truncate
  3. Update

Let us examine the first case, deletion. Accidental deletion of singletons can and does happen, so we will put a guard on this record. A minimum row constraint would be nice, but we do not have such a device. We use a trigger to prevent deletion.
Here it is:
CREATE TRIGGER Loading._Batch_Run_Master_Key_Lock 
   ON  Loading._Batch_Run_Master_Key 
   AFTER INSERT,DELETE,UPDATE
AS 
BEGIN
 SET NOCOUNT ON;

 -- Prevent deletion of our singleton row.

    IF (
  (SELECT COUNT(*) FROM inserted)=0 
  AND 
  (SELECT COUNT(*) FROM deleted)=1
      )
 BEGIN
  PRINT 'Singleton cannot be deleted. Rolling back.'
  ROLLBACK
 END
END
Earlier today I had shown a way to determine if we were doing a delete action from within a trigger. I realized while I was coding this new trigger that I may want to ensure that a delete has actually happened.
Take the following case:

DELETE FROM Loading._Batch_Run_Master_Key WHERE 1=0;

I tested this, and the trigger does fire even when no rows match, which I had not known. By verifying that the deleted count is one and the inserted count is zero, we know that a delete action got past the WHERE filter and needs to be rolled back. When no rows are deleted, it does not seem right to roll back, since nothing occurred.
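A quick sketch of how the two delete cases behave, assuming the table and trigger above are in place:

```sql
-- Case 1: a real delete reaches the trigger with deleted=1 and inserted=0,
-- so the trigger rolls it back and the row survives.
DELETE FROM Loading._Batch_Run_Master_Key;

-- Case 2: the filter matches nothing; the trigger still fires,
-- but the deleted count is 0, so nothing is rolled back.
DELETE FROM Loading._Batch_Run_Master_Key WHERE 1=0;

-- The singleton row remains either way:
SELECT COUNT(*) FROM Loading._Batch_Run_Master_Key;
```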
The second case, truncate, is more complicated to control, because TRUNCATE TABLE does not fire DELETE triggers. This is not a permissions issue, but a property of the table's definition itself: even a DBA is not supposed to delete this row, though he could still drop the table.
To prevent truncation of one table, we must have another table referencing it with a foreign key. Here is the code:
CREATE TABLE Utility.BLOCK_TRUNCATE_INT(k int NOT NULL PRIMARY KEY)
GO
ALTER TABLE Utility.BLOCK_TRUNCATE_INT WITH CHECK ADD CONSTRAINT [FK Master Key has truncate command blocked] FOREIGN KEY(k)
REFERENCES Loading._Batch_Run_Master_Key (batch_run_master_key_id)
GO

Now, when I run a truncate command,
TRUNCATE TABLE Loading._Batch_Run_Master_Key

..we get the following error:

Msg 4712, Level 16, State 1, Line 60
Cannot truncate table 'Loading._Batch_Run_Master_Key' because it is being referenced by a FOREIGN KEY constraint.

The third case, the update, is there to apply business rules to the various attributes you add to your Singleton, to make sure they are set properly and not improperly nulled.
As I always consolidate trigger actions into a single trigger, I will update the one trigger on my master key Singleton, so that it now reads as follows:
ALTER TRIGGER Loading._Batch_Run_Master_Key_Lock 
   ON  Loading._Batch_Run_Master_Key 
   AFTER INSERT,DELETE,UPDATE
AS 
BEGIN
 SET NOCOUNT ON;

 -- Prevent deletion of our singleton row.

    IF (
  (SELECT COUNT(*) FROM inserted)=0 
  AND 
  (SELECT COUNT(*) FROM deleted)=1
      )
 BEGIN
  PRINT 'Singleton cannot be deleted. Rolling back.'
  ROLLBACK
 END

 -- Our conditional non-nulls. We cannot update a column to null
    IF ((SELECT COUNT(*) FROM inserted)=1 AND (SELECT COUNT(*) FROM deleted)=1)
 BEGIN
  IF (UPDATE(cur_batch_run_id) AND ((SELECT inserted.cur_batch_run_id FROM inserted) IS NULL))
  BEGIN
   PRINT 'Cannot nullify cur_batch_run_id'
   ROLLBACK
  END
 END

END
The section at the bottom detects when an update is occurring. Almost all databases logically enact an update as a delete followed by an insert. Internally, the database implementation may not actually do that, and probably doesn't, but logically we see both counts: 1 deleted and 1 inserted.

In the case where an update is being done, we apply our business rule: the batch run id cannot be nullified. This is a simplistic rule, but it demonstrates a behavior that cannot be implemented with a simple NULL/NOT NULL constraint. It may be difficult to see, but the original INSERT of the hollow record allows nulls in all the attributes; it is only after a legitimate batch id has been added that it cannot be removed. One reason for this is that, upon creation of the Singleton, I cannot possibly know a valid batch run id, since none have yet occurred.

Most developers would create a dummy value, say -1, and if referential integrity were applied, an actual dummy row would sit in the main Batch Run table with a key of -1. That is a fine way to do it, and there are cases where it is necessary, but I like the elegance of this technique, where no dummy value is required: a query against the batch run table and the master key table will only ever return real batch run records.
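A sketch of the rule in action against the trigger above (the value 42 is just an illustrative batch id):

```sql
-- Setting a real batch id is allowed; the trigger lets it through:
UPDATE Loading._Batch_Run_Master_Key SET cur_batch_run_id = 42;

-- Attempting to nullify it afterward is rolled back by the trigger,
-- which prints 'Cannot nullify cur_batch_run_id'.
UPDATE Loading._Batch_Run_Master_Key SET cur_batch_run_id = NULL;
```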

To clarify, the columns cur_batch_run_id and prev_batch_run_id were just example payloads I added to this structure. You would of course add your own.

To use the singleton, you simply join against it:
SELECT ep.* FROM Loading.Event_Profiles ep
JOIN Loading._Batch_Run_Master_Key mk on ep.batch_run_id = mk.cur_batch_run_id
Before any batches have run, cur_batch_run_id will be null; after that, cur_batch_run_id will always point to the most recent run and prev_batch_run_id to the run before that.
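The rotation of the pair is not shown above; one way it might be maintained at the start of each batch run is sketched below (the @new_batch_run_id variable is hypothetical):

```sql
DECLARE @new_batch_run_id int = 43;  -- hypothetical id of the run being started

-- Shift the current run into prev, and install the new run as current.
UPDATE Loading._Batch_Run_Master_Key
SET prev_batch_run_id = cur_batch_run_id,
    cur_batch_run_id  = @new_batch_run_id;
```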
One issue that might occur is that a run can never be deleted while it is referenced in the singleton. The trigger logic would probably not suffice for anything more complex, or the null restriction on the current batch id may simply have to be disabled. It is more of a demonstration than anything else.

Another way to use the Singleton is to store global variables that are used against other columns as constants. This would simply be a CROSS JOIN to pick up these values.
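As a sketch, suppose we added a hypothetical constant column, say default_retention_days, to the singleton; every query could then pick it up without a join condition:

```sql
-- Hypothetical constant column added to the singleton:
-- ALTER TABLE Loading._Batch_Run_Master_Key ADD default_retention_days int NULL;

SELECT ep.*, mk.default_retention_days
FROM Loading.Event_Profiles ep
CROSS JOIN Loading._Batch_Run_Master_Key mk;
```

Because the singleton is guaranteed to hold exactly one row, the CROSS JOIN never multiplies or drops rows from the other table.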
