“Hey Doc, I have salvaged the MDF file of an important database. Can I attach it to SQL Server in order to use it?” was the question from an old colleague.
Yes, you can…
“And oh! I don’t have the log file. And I renamed the MDF file by mistake too.”
This is how you do it…
When you bring up the Attach Databases dialog and select the salvaged and renamed file, you would see the following on it.
The second grid lists the original database files, each followed by a Not Found message. This is obviously because the files are no longer in their original location. At this stage you cannot move ahead by clicking OK; you will have to remove both entries using the Remove button below that grid.
Then you have to add the new data file: click on the Add Catalog button, then browse for and select the new MDF file (the one you had salvaged and renamed). You can then optionally change the name of the new (to-be-attached) database in the Attach As textbox.
Click on OK and voila! You've got your old database back.
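If you prefer T-SQL over the dialog, the same thing can be done with CREATE DATABASE ... FOR ATTACH_REBUILD_LOG, which attaches a lone MDF and rebuilds the missing log file. The database name and file path below are just examples; substitute your own:

```sql
-- Attach a salvaged MDF without its log file; SQL Server rebuilds the log.
-- 'SalvagedDB' and the path are hypothetical placeholders.
CREATE DATABASE SalvagedDB
    ON (FILENAME = 'C:\Data\SalvagedDB.mdf')
    FOR ATTACH_REBUILD_LOG;
```

Note that rebuilding the log only succeeds if the database was shut down cleanly; an MDF torn out of a running server may refuse to attach this way.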
As we all know, SQL Server 2008 comes with a time data type. Obviously, as the name denotes, this data type is capable of storing the time sans the date component. So, how would one populate a time field or variable with the current system time?
One would think of using the GETDATE() function and doing some manipulation on it with a few more functions to pull the time component out. At least, that is what I thought.
But the answer is much simpler:
DECLARE @CurrentTime time
SET @CurrentTime = GETDATE()
SELECT @CurrentTime
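The assignment works because SQL Server implicitly converts the datetime value to time, silently dropping the date portion. For the record, a couple of equivalent one-liners:

```sql
-- SQL Server 2008+ allows declaring and assigning in one statement
DECLARE @CurrentTime time = GETDATE();

-- Or convert explicitly; SYSDATETIME() gives higher precision than GETDATE()
SELECT CAST(GETDATE() AS time)     AS CurrentTime,
       CAST(SYSDATETIME() AS time) AS PreciseCurrentTime;
```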
This is but one scenario of where and how an intermediate fact table is used. Intermediate fact tables are used where you have a many-to-many relationship between two dimensions (also known as many-to-many dimensions). To introduce a simple example:
Imagine a bank. Account holders. Bank accounts. A bank account can be held by multiple account holders (a joint account), whereas an account holder may hold multiple accounts. Transactions are usually recorded against accounts. However, when business users analyze the data, they would like to analyze it from the account holder's point of view as well; hence the data model may have to be set up to look something like this:
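The intermediate fact (bridge) table in such a model is typically nothing more than the two dimension keys. A minimal sketch of the bank example, with table and column names made up for illustration:

```sql
CREATE TABLE DimAccount       (AccountKey int PRIMARY KEY /* , ... */);
CREATE TABLE DimAccountHolder (HolderKey  int PRIMARY KEY /* , ... */);

-- The intermediate fact table resolving the many-to-many relationship:
-- one row per (account, holder) pair
CREATE TABLE FactAccountHolder (
    AccountKey int NOT NULL REFERENCES DimAccount(AccountKey),
    HolderKey  int NOT NULL REFERENCES DimAccountHolder(HolderKey),
    PRIMARY KEY (AccountKey, HolderKey)
);
```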
Check out these links to know more about many-to-many dimensions: The Many to Many Relationship, Many-to-Many dimensions in Analysis Services, or try binging it.
My scenario is this: the source system has Groups and Users. Both have been identified as dimensions (DimGroup and DimUser), and the ETLs are already created and tested. Since a user can exist in multiple groups and, obviously, a group can have multiple users, this is an ideal many-to-many scenario. Something noteworthy here is that historical tracking is not necessary, which means I need not record that a user belonged to a particular group three months ago but no longer does.
This is how the data population was done (image contains detailed steps):
Reasoning: the target stores only two columns, which together form a composite key (GroupID and UserID), so using the Slowly Changing Dimension transformation would not do. The records coming in from the source, after being looked up for the appropriate keys in their respective dimension tables (i.e. DimGroup and DimUser), are either inserted if not matched with the destination (i.e. DimGroupDimUser) or sent on their way (matching records). The matching records are right-outer-joined against the records in the destination in order to find the rows to be deleted (those where the matching records' UserID and GroupID come back NULL). There are no updates here: if an update happens at the source, one record is deleted and another inserted at the destination (this is more or less why the Slowly Changing Dimension transformation cannot be used in this scenario).
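The insert/delete logic described above could be sketched in set-based T-SQL roughly like this. The staging table StagedGroupUser, holding the source rows after the dimension lookups, is a hypothetical name for illustration:

```sql
-- Insert pairs that arrived from the source but are not in the destination yet
INSERT INTO DimGroupDimUser (GroupID, UserID)
SELECT s.GroupID, s.UserID
FROM StagedGroupUser AS s            -- source rows after the key lookups
WHERE NOT EXISTS (SELECT 1 FROM DimGroupDimUser AS d
                  WHERE d.GroupID = s.GroupID AND d.UserID = s.UserID);

-- Delete pairs that exist in the destination but no longer come from the source
DELETE d
FROM DimGroupDimUser AS d
WHERE NOT EXISTS (SELECT 1 FROM StagedGroupUser AS s
                  WHERE s.GroupID = d.GroupID AND s.UserID = d.UserID);
```

This achieves the same insert-or-delete synchronization as the data-flow described, just in two set-based statements instead of lookups and joins inside the package.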
Of course, I feel that this could be done in other (maybe even better) ways. I would love to hear your comments on this.
This has nothing to do with SQL Server, save a teeny weeny part. Me gets this requirement from above: Do this report thingy with Excel using macros. It shall be a temporary solution. And it shall be parallel to the permanent reporting solution that you are already developing, but with a higher priority (That’s like building a super saloon car while also building a rickety lorry with a high priority to do somewhat the same thing as the car, ‘cept that the rickety lorry should be shipped first. – That’s how one colleague put it). Ye shall extract the data from Good ole SQL Server to the Excel workbook, and Mr. Senior Whiner shall do the macros in such a way that it’ll create some nice looking reports on more Excel sheets. Anyways, with a lot of whining and I-dunnos and I-cannots the macro ball was passed by Mr. Senior Whiner to scape-goat builder. R n’ Ds by meself and scape-goat builder led to the discovery of array formulae.
I have been working with Excel for a long time; not on an extensive scale, but as quite a power user. Therefore, array formulae came as a big surprise. I never knew something like that existed.
So what’s an Array Formula, you ask?
It's a formula which can be used in place of multiple 'normal' formulae, and it works on arrays of values. To enter one, you need to type it in and then press Ctrl + Shift + Enter on the keyboard; otherwise you'll end up with the #VALUE! error. Once you enter the formula successfully, you will notice that the whole formula is wrapped in a pair of curly braces, which disappear when you click on the formula bar.
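For instance, assuming regions are in A2:A10 and sales values are in B2:B10 (made-up ranges), a conditional total can be computed in a single cell with:

```
=SUM(IF(A2:A10="East", B2:B10, 0))
```

Typed in and confirmed with Ctrl + Shift + Enter, it appears in the formula bar as {=SUM(IF(A2:A10="East", B2:B10, 0))} and sums only the rows whose region is "East", without any helper column.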
In the screenshot you can see that, in order to get the total value for each region, I use an array formula as follows:
This is just a simple example. There's a whole dimension of complex formulae that can be written in a simpler and more powerful form using array formulae. Just google 'Excel Array Formula', or perhaps you could even bing it (if you dare).