Over the past eighteen months I have worked on four separate projects for customers that wanted to use Visual Studio 2010 Database projects to manage their database schema.* All the while I have been taking notes so that I could write the blog post you are reading right now – a compendium of experiences and tips from those eighteen months. I should note that this blog post is not necessarily a recommendation to actually use database projects in Visual Studio 2010 – it is intended for those of you who have already decided to use them; having said that, I do make recommendations as to actions I think you should take if you have made that decision.
First let’s be clear what we’re talking about here. Visual Studio Database Projects have been known by a few different names down the years, some of which you may be familiar with. If you have ever heard the terms datadude, DBPro, teamdata, TSData or Visual Studio Team System for Database Professionals then just know that all of these terms refer to the same thing, namely the project type highlighted below when starting a new project in Visual Studio:
![image image]()
From here onwards I am going to refer to Visual Studio Database projects and all the features therein simply as datadude because that’s a popular colloquial name (and is also much quicker to type). Know also that at the time of writing the features that I am talking about here are currently undergoing some changes ahead of the next release of SQL Server (i.e. SQL Server 2012) in which these features are mooted to be delivered under a new moniker - SQL Server Developer Tools (SSDT).
OK, with all those preliminaries out of the way let’s dig in.
Continuous Integration
Continuous Integration (CI) is a development practice that has existed for many years but in my experience has not been wholly embraced by the database community. The idea behind CI for databases is that every time a developer checks in a piece of code, be it a stored procedure, a table definition or whatever, the entire database project is built and then deployed to a database instance. Microsoft provide a useful article, An Overview of Database Build and Deployment, that goes some way to explaining how to set up your CI deployment.
CI is one of the fundamental tenets that underpins a lot of the things I talk about later in this blog post and hence gives rise to my first recommendation when using datadude:
Recommendation #1: Use Source Control and implement a Continuous Integration deployment
In an earlier draft of this blog post I outlined in detail the CI configuration from one of the aforementioned projects. It’s not suitable for inclusion at this point in the current draft but I still think there is some useful information to be gleaned from it, so I have included it below in “Appendix – An example CI configuration”.
Composite Projects
All four of the aforementioned projects were brownfield projects, meaning that each already encompassed some established, deployed databases that the customer wanted to bring under the control of datadude. Each project had thousands of objects across multiple databases, and in that situation it is very likely that some of the stored procedures, views or functions will refer to objects in one of the other databases. The way to resolve those references is to use database references. However, once you have applied all of your database references it is still very possible that you will run into a situation where code in database A refers to an object in database B while at the same time database B refers to an object in database A. This is depicted in the following figure:
![image image]()
Here we have a view [DB1]..[View2] that selects data from [DB2]..[Table1] and a view [DB2]..[View1] that selects data from [DB1]..[Table1]. Datadude does not allow a database reference from [DB2] to [DB1] if there is already a database reference from [DB1] to [DB2] and hence will return an error akin to:
- SQL03006: View: [dbo].[View1] has an unresolved reference to object [DB1].[dbo].[Table1].
![image image]()
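In T-SQL terms, the two view definitions would look something like the following sketch (in datadude, cross-database references are made via SQLCMD variables; the variable names $(DB1) and $(DB2) are assumptions for this illustration):

-- Defined in the DB1 project:
CREATE VIEW [dbo].[View2]
AS
SELECT * FROM [$(DB2)].[dbo].[Table1];

-- Defined in the DB2 project:
CREATE VIEW [dbo].[View1]
AS
SELECT * FROM [$(DB1)].[dbo].[Table1];

Each project needs a database reference to the other before its view will resolve, and that is precisely the circular dependency that datadude refuses.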
We have the proverbial chicken-and-egg problem: [DB1] can’t be created before [DB2] and vice versa. This problem is solved by using Composite Projects (not to be confused with Partial Projects), which allow you to split objects that are intended to live in the same database across multiple datadude projects. I could go over how you set one of these things up but there’s really no need because there is a rather excellent walkthrough on MSDN at Walkthrough: Partition a Database Project by Using Composite Projects; the reason for mentioning it here is to make you aware that composite projects exist and of the problem that they solve. If you are introducing datadude into a brownfield project then it is highly likely that you are going to require composite projects so learn them and learn them good.
One important last note about composite projects is to answer the question “How does the tool know that the multiple projects refer to the same database?” The answer is given at the walkthrough that I linked to above; namely, it says:
“Do not specify server variables and values or database variables and values when you define references in a composite project. Because no variables are defined, the referenced project is assumed to share the target server and database of the current project.”
So now you know! To put it another way, if you reference one project from another and don’t tell datadude that the two projects refer to different databases then it assumes they refer to the same database.
Code Analysis
Datadude provides the ability to analyse your projects for code in stored procedures and functions that it considers to be inferior, and to highlight it – this feature is called Code Analysis. Note that Code Analysis does not highlight code that is syntactically incorrect (datadude does that already, which may well be considered its core feature); it highlights code that is syntactically correct but may be considered defective when executed. Specifically, Code Analysis will highlight the following perceived code defects (click through on the links for explanations of why these are considered code defects):
In my opinion the best aspect of Code Analysis is that it can be run as part of your Continuous Integration deployment meaning that if anyone checks in some deficient code, BOOM, your CI deployment fails and the developer is left red-faced. Nothing else has increased the code quality on my projects quite like running Code Analysis as part of a CI deployment.
Hopefully I have convinced you that turning on Code Analysis is a good idea. If you agree then head to the project properties and check the box labelled Enable Code Analysis on Build. I also recommend checking the Treat warnings as errors boxes, otherwise you’ll find that undisciplined developers will simply ignore the warnings.
![Enable datadude code analysis]()
N.B. If you have time I highly recommend that you go and read the blog post I linked to there – Discipline Makes Strong Developers by Jeff Atwood. I’ve read many thousands of blog posts in my time and that is the one that has influenced me more than any other.
Turning on Code Analysis on a greenfield project is a no-brainer. On a brownfield project it’s not quite so easy – on a recent engagement I moved a single database into datadude and turned on Code Analysis, which immediately found over two thousand perceived code defects. I generally abhor the use of that famous maxim “if it ain’t broke, don’t fix it” in our industry, but on occasions like this you may be well advised to heed that advice and leave well alone for fear of breaking code that does what it is supposed to do (no matter how inefficiently it does it). Instead you have the option to suppress Code Analysis warnings/errors:
![Suppress datadude code analysis]()
I advise using Code Analysis suppression sparingly. Recently I discovered that one of the developers on my team had decided it was OK to simply suppress every error that was thrown by Code Analysis without first investigating the cause. I was not amused!
Recommendation #2: Turn on Code Analysis
Realising the value of idempotency
An operation is considered idempotent if it produces the same result no matter how many times it is applied; for example, multiplication by a factor of one is an idempotent operation – no matter how many times you multiply a number by one the result will always be the same.
Idempotency is a vital facet of database deployment using datadude. Datadude tries to ensure that no matter how many times you deploy the same project, the state of your database will be the same after each deployment. The implication here is that during a deployment datadude will examine the target database to see what changes (if any) need to be made, rather than simply attempting to create lots of objects; if all the objects already exist, nothing will be done. In my opinion this is the single biggest benefit of using datadude – you don’t have to determine what needs to be done to change your database schema to the desired state, datadude does it for you.
If I have convinced you of the value of idempotency within datadude then you should also realise that the same rigour should be applied to data as well. Datadude provides Post-Deployment scripts that allow you to deploy data to your schema, however there is no inbuilt magic here – datadude will simply run those scripts as-is; it will not try to comprehend their contents. What this means is that you, the developer, are responsible for making your Post-Deployment scripts idempotent, and the easiest way to do that is to employ the T-SQL MERGE statement.
T-SQL’s INSERT on its own is not sufficient: it will work on the first deployment and thereafter fail because it will be attempting to insert data that already exists.
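Here is a minimal sketch of an idempotent Post-Deployment data script using MERGE; the [dbo].[Currency] table and its contents are invented purely for illustration:

-- Idempotent reference-data load: insert missing rows, update changed ones
MERGE [dbo].[Currency] AS tgt
USING (VALUES
    (N'GBP', N'Pound Sterling')
  , (N'USD', N'US Dollar')
) AS src ([CurrencyCode], [CurrencyName])
ON tgt.[CurrencyCode] = src.[CurrencyCode]
WHEN MATCHED AND tgt.[CurrencyName] <> src.[CurrencyName] THEN
    UPDATE SET [CurrencyName] = src.[CurrencyName]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([CurrencyCode], [CurrencyName])
    VALUES (src.[CurrencyCode], src.[CurrencyName]);

No matter how many times that script runs, the table finishes in the same state. The only way to be confident that all of your scripts behave like this is to test them, which gives rise to my third recommendation: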
Recommendation #3: When running your deployment in a test environment, run it more than once.
No-brainer Recommendations
I consider Code Analysis and Idempotency to be so important that I called them out under dedicated headings. In this section I’ll outline some additional simple measures that you can undertake which will, if employed correctly, have a profound effect on the success of your datadude projects.
Putting a build number into the DB
I find it very useful to maintain a log of the deployments that have been made to a database, and my chosen method is to use a Post-Deployment script to insert a value into a table. Here’s the definition of the table I use for this:
CREATE TABLE [dbo].[DeployLog]
(
[BuildId] NVARCHAR(50)
, [DeployDatetime] SMALLDATETIME
, CONSTRAINT PK_dboDeployLog PRIMARY KEY ([DeployDatetime])
)
In my Post-Deployment script I will use:
INSERT [dbo].[DeployLog]([BuildId],[DeployDatetime])
VALUES ('$(BuildId)',GETDATE());
to insert a row into that table during every deployment. $(BuildId) is a variable defined in the .sqlcmdvars file of my project:
![image image]()
Here is what we see inside that file:
![image image]()
The $(BuildId) variable has been defined with a default value of UNKNOWN and hence subsequent deployments from Visual Studio will result in the following:
![image image]()
At first glance that might not seem particularly useful; however, it comes into its own if you are doing CI deployments (see recommendation #1) because each build in a CI environment will result in a new build identifier. The following command-line call to vsdbcmd.exe shows how deployments are generally done using datadude – note the presence of the /p:BuildId switch:
..\Tools\VSDBCMD\vsdbcmd.exe /Action:Deploy /ConnectionString:"Data Source=.;Integrated Security=True;Pooling=False" /p:BuildId="some-value" /DeployToDatabase:+ /ManifestFile:.\FinanceDB\sql\release\FinanceDB.deploymanifest
Your CI tool should be able to replace “some-value” with an identifier for the current build (that’s outside the scope of this blog post but any CI tool worth its salt will be able to do this) – when the deployment executes, that value will then make its way into your [dbo].[DeployLog] table and you will have a self-maintaining history of all the deployments (datetime & build identifier) that have been made to your database.
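For example, Hudson (the CI server used in the appendix below) exposes the current build number to build steps as a BUILD_NUMBER environment variable, so a Windows batch build step might look something like this (same switches as the call above, with the build number substituted in):

..\Tools\VSDBCMD\vsdbcmd.exe /Action:Deploy /ConnectionString:"Data Source=.;Integrated Security=True;Pooling=False" /p:BuildId="%BUILD_NUMBER%" /DeployToDatabase:+ /ManifestFile:.\FinanceDB\sql\release\FinanceDB.deploymanifest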
Recommendation #4: Maintain an automated history of your deployments
Use Schema View
It is natural to navigate through all of the objects in your database project using Solution Explorer; however, datadude provides a better mechanism for doing just that – the Schema View window.
![image image]()
Schema View provides a logical view of all the objects defined in your database project regardless of which file they are defined in. That is very useful for many reasons, not least because it makes it easy to locate whichever object you are after – particularly advantageous if multiple objects are defined in the same file. Moreover, files with the property Build Action=”Not In Build” (see later) won’t show up in Schema View (this is a good thing, by the way). Schema View is also the place from which operations such as refactoring and dependency analysis are launched.
Some people think it is important that the name of each file in a datadude project should accurately reflect the object defined within it. I disagree; object renames mean that maintaining the filenames becomes laborious, and having Schema View means you never have to use the filenames to navigate your project anyway.
One final reason to use Schema View is the External Elements button:
![image image]()
Toggling this button on means that objects in referenced projects show up in the project that references them (this is particularly useful if you are using Composite Projects). Note in the following screenshot how the object [dbo].[t1] in project Database2 appears in the [dbo] schema of Database3 – that’s because Database3 has a reference to Database2.
![image image]()
For those reasons my fifth recommendation is:
Recommendation #5: Use Schema View in preference to Solution Explorer
You will still need Solution Explorer to navigate files that do not contain database objects (e.g. Post-Deployment scripts) but ordinarily you should spend most of your time interacting with Schema View.
Make liberal use of PRINT statements in Pre/Post-Deployment Scripts
When you deploy a datadude project, datadude will take care of telling you what it is up to. For example, the following screenshot shows the output from deploying the already-discussed [dbo].[DeployLog]:
![image image]()
Of course it only does this for objects that it knows about, and that doesn’t include anything in your Pre- or Post-Deployment scripts, so you need to take responsibility for outputting pertinent information from those scripts. Here I have amended the script that inserts into [dbo].[DeployLog]:
SET NOCOUNT ON;
INSERT [dbo].[DeployLog]([BuildId],[DeployDatetime])
VALUES ('$(BuildId)',GETDATE());
PRINT CAST(@@ROWCOUNT AS NVARCHAR(5)) + N' rows inserted into [dbo].[DeployLog], BuildId=$(BuildId)';
This gives us much more useful output:
![image image]()
Adding PRINT statements to your Pre & Post Deployment scripts is so easy it really is a no-brainer and hence my next recommendation is:
Recommendation #6: Any action in a Pre or Post-Deployment Script should use PRINT to state what has been done
Output variable values in your Pre-Deployment script
This is in the same vein as the previous section – output as much information as possible. In this case we’re talking about outputting the values of all the variables that are stored in the .sqlcmdvars file; first, a reminder of what’s in that file:
![image image]()
Here are the contents of my amended Pre-Deployment script:
PRINT 'DefaultDataPath=$(DefaultDataPath)';
PRINT 'DatabaseName=$(DatabaseName)';
PRINT 'DefaultLogPath=$(DefaultLogPath)';
PRINT 'BuildId=$(BuildId)';
And the resultant output:
![image image]()
This is the sort of simple amendment that will pay off in spades later in your project (especially if you are supplying many values from the command-line) and again, it’s so easy to do that there really is no reason not to. Just remember to update your Pre-Deployment script whenever you add new variables to .sqlcmdvars.
Recommendation #7: Output the value of all variables in your Pre-Deployment script
One Object Per File
Datadude doesn’t restrict what can go in a file. For example the following file, “t.table.sql”, defines three objects: a table, a primary key and a view:
![image image]()
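Such a file might contain something along these lines (object names invented for illustration):

-- One file, three objects: a table, a primary key and a view
CREATE TABLE [dbo].[t]
(
    [Id] INT NOT NULL
);
GO
ALTER TABLE [dbo].[t] ADD CONSTRAINT [PK_dbot] PRIMARY KEY ([Id]);
GO
CREATE VIEW [dbo].[vw_t]
AS
SELECT [Id] FROM [dbo].[t];
GO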
Even though they’re all defined in the same file they show up in Schema View separately (one of the aforementioned benefits of using Schema View):
![image image]()
That said, just because you can doesn’t mean that you should. I prefer to go for one object per file for the simple reason that it’s easier to track the history of an object via Source Control. Moreover, if an object is no longer required then it is a simple change to remove the file containing that object from the build (see “Don’t delete anything from your project” later) as opposed to editing a file to remove all traces of the object.
Recommendation #8: Each database object should be defined in a dedicated file
Time your Pre and Post Deployment Scripts
It’s always useful to know where time is spent when doing deployments; in my experience the majority of the time is spent in the Post-Deployment script (your mileage may vary of course). An easy win is to output the time taken to run your Pre- and Post-Deployment scripts. Adapt your Pre-Deployment script so that it looks something like this:
DECLARE @vPreDeploymentStartTime DATETIME = GETDATE();
PRINT '****************Begin Pre-Deployment script at ' +CONVERT(VARCHAR(30),GETDATE(),120) + '***********************';
/*Call other scripts from here using SQLCMD's :r syntax
Example: :r .\myfile.sql
*/
PRINT 'Pre-Deployment duration = ' + CONVERT(VARCHAR(5),DATEDIFF(ss,@vPreDeploymentStartTime,GETDATE())) + ' seconds';
PRINT '****************End Pre-Deployment script at ' +CONVERT(VARCHAR(30),GETDATE(),120) + '***********************';
then do similar for your Post-Deployment script. When you deploy, your output will include the following:
![image image]()
Note the lines:
****************Begin Pre-Deployment script at 2011-12-31 20:00:34***********************
Pre-Deployment duration = 0 seconds
****************End Pre-Deployment script at 2011-12-31 20:00:34***********************
****************Begin Post-Deployment script at 2011-12-31 20:00:34***********************
Post-Deployment duration = 0 seconds
****************End Post-Deployment script at 2011-12-31 20:00:34***********************
In this particular case it’s not all that useful to know that the deployment took 0 seconds, but if and when your deployments snowball to many minutes it will be useful to know how long your scripts are taking, at which point you can investigate further by timing each individual step in your Pre- and Post-Deployment scripts.
Recommendation #9: Time your deployments and output the timings
Use sqlcmdvars and the command-line as much as possible
Hardcoding any value into a piece of code is a fraught practice; you should assume that values previously thought to be constant may not be so in the future. You can protect yourself from future changes by storing all literal values as variables in your .sqlcmdvars file. You can still supply default values for those variables, but you have the added advantage that they can be overridden from the command-line when deploying using vsdbcmd.exe. Moreover, if a value is hardcoded in multiple places in your code then specifying it once in .sqlcmdvars ensures that your code adheres to the DRY principle. Lastly, if values are stored in the .sqlcmdvars file then you can output them at deploy time (see recommendation #7).
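As a sketch of what that looks like in practice, suppose a Post-Deployment script prunes old rows from the [dbo].[DeployLog] table defined earlier; the retention period belongs in .sqlcmdvars rather than hardcoded in the script ($(DeployLogRetentionDays) is a hypothetical variable invented for this example):

-- $(DeployLogRetentionDays) is defined in .sqlcmdvars with a sensible default
-- and can be overridden at deploy time, e.g. /p:DeployLogRetentionDays=90
DELETE FROM [dbo].[DeployLog]
WHERE [DeployDatetime] < DATEADD(DAY, -$(DeployLogRetentionDays), GETDATE());
PRINT CAST(@@ROWCOUNT AS NVARCHAR(10)) + N' rows pruned from [dbo].[DeployLog]';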
Recommendation #10: All literal values should be stored in your .sqlcmdvars file
Every developer gets their own development database
In most SQL Server development shops that I’ve worked in, all developers work against a single centralised development database. To me this is an antiquated way of working because work that one person is doing can conflict with that of someone else; I find it much better for every developer to work in isolation and then use the CI deployment to check that one’s code is not in conflict with anyone else’s. Datadude supports (nay, encourages) this way of working with the notion of an Isolated Development Environment:
![image image]()
Every developer should configure their isolated development environment which, typically, would be their local instance. And so to my next recommendation:
Recommendation #11: Every developer should use the Isolated Dev Environment settings in order to author their code
Incidentally, if every developer has their own development database and you are following my earlier recommendation to use a [DeployLog] table, then you can track how often a developer is bothering to deploy and test their code. On a recent project we used this evidence in a (ahem) discussion with a developer who tried to convince us that he was testing his code sufficiently even though he was repeatedly causing the CI deployment to fail.
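For the record, a query along these lines is all it takes (a minimal sketch against the [dbo].[DeployLog] table defined earlier):

-- How often is this developer deploying? One row per day, with a count
SELECT CAST([DeployDatetime] AS DATE) AS [Day]
     , COUNT(*) AS [Deployments]
FROM [dbo].[DeployLog]
GROUP BY CAST([DeployDatetime] AS DATE)
ORDER BY [Day];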
Don’t delete anything from your project
When objects are no longer required in your database, intuitively it makes sense to remove the file containing the object from the datadude project; I would, however, like to suggest a different approach. Rather than removing the file, just change its Build Action property to “Not In Build”:
![image image]()
This has the advantage that your project maintains some semblance of history of what objects have been removed from your database – that can be useful to anyone inheriting your code in the future.
Recommendation #12: Use “Not in Build” to remove an object from your database
Build and Deploy your datadude projects outside of Visual Studio
Building and deploying your datadude projects from within Visual Studio can become a real time hog; in my experience it’s not unusual for deployments to take many minutes, and your Visual Studio environment will be unavailable for further development work during that time. For that reason I recommend investing some time in writing msbuild scripts that will build and deploy your project(s) from the command-line. Here are some examples that you can adapt for your own use; firstly, a script to build a solution:
<?xml version="1.0" encoding="utf-8"?>
<!--Execute using:
    msbuild SolutionBuild.proj
-->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Build">
  <!--Notes:
      When doing .Net development Visual Studio Configurations are particularly useful because they can affect
      how the code is executed (i.e. under the Debug configuration debug symbols can be used to step through
      the code - something like that anyway, I don't know too much about that stuff).
      In DBPro, Configurations are less relevant because there is no such thing as debugging symbols. Nonetheless, they
      can still be useful for times when you want to do different things (e.g. you might want to run Code Analysis in
      a debug situation but not in a release situation). There is a useful thread on this here:
      "Debug vs Release" http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/a0ec0dc0-a907-45ba-a2ea-d2f0175261a7
      Note that Visual Studio Configurations should not be used to maintain different settings per environment.
      The correct way to do that is to maintain separate .sqlcmdvars files per environment and then choose which one
      to use at deployment time when using vsdbcmd.exe (use syntax "/p:SqlCommandVariablesFile=$(ProjectName)_$(Environment).sqlcmdvars")
  -->
  <ItemGroup>
    <!--List all the configurations here that you want to build-->
    <Config Include="Debug" />
    <Config Include="Release" />
  </ItemGroup>
  <Target Name="Build">
    <Message Text="Building %(Config.Identity) configuration..." />
    <MSBuild Projects=".\Lloyds.UKTax.DB.sln" Properties="Configuration=%(Config.Identity)" />
  </Target>
</Project>
and secondly a script that will deploy a datadude project:
<?xml version="1.0" encoding="utf-8"?>
<!--Execute using:
    msbuild SolutionDeploy.proj /Target:Deploy
-->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Build;Deploy">
  <PropertyGroup>
    <!--At the time of writing I don't see a reason for anything else to be used but that may change in the future,
        hence why this is a property and can therefore be overridden.-->
    <Configuration>Debug</Configuration>
    <DevServer>Data Source=GBS0039182\GLDDEV01;Integrated Security=True;Pooling=False</DevServer>
  </PropertyGroup>
  <ItemGroup>
    <ProjectToBuild Include="SolutionBuild.proj" />
  </ItemGroup>
  <!--Notes:
      Add a <DbProj> item for every database project (.dbproj) that needs to be deployed. They will get deployed in the
      order that they are listed, thus it is your responsibility to make sure they are listed in the correct order
      (respecting dependency order).
      %(Identity) is a metadata reference. It refers to the name of the item (i.e. Include="The bit that goes here is
      the identity").
      Note also that whatever you put for Include is important. Include="dev_thomsonj" means the project will only get
      deployed if the deployment is being executed by username=dev_thomsonj-->
  <ItemGroup>
    <DbProj Include="username">
      <DbName>MyDB</DbName>
      <ProjectName>MySoln.MyDB</ProjectName>
      <OutputPath>\%(ProjectName)\sql\$(Configuration)\</OutputPath>
      <DeployConnStr>Data Source=localhost;Integrated Security=True;Pooling=False</DeployConnStr>
    </DbProj>
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="@(ProjectToBuild)" />
  </Target>
  <Target Name="Deploy">
    <!--Notes:
        09 is the hex code for TAB, hence all of the %09 references that you can see. See http://asciitable.com/
        for more details.
    -->
    <Message Text="USERNAME=$(USERNAME)" />
    <Message Condition="'%(DbProj.Identity)'=='$(USERNAME)'" Text="Deploying:
Project%09%09%09: %(DbProj.ProjectName)
DbName%09%09%09: %(DbProj.DbName)
From OutputPath%09%09: %(DbProj.OutputPath)
To ConnStr%09%09: %(DbProj.DeployConnStr)
By%09%09%09: %(DbProj.Identity)"
    />
    <Exec Condition="'%(DbProj.Identity)'=='$(USERNAME)'" Command="&quot;$(VSINSTALLDIR)\vstsdb\deploy\vsdbcmd.exe&quot;
      /Action:Deploy /ConnectionString:&quot;%(DbProj.DeployConnStr)&quot; /DeployToDatabase+
      /manifest:&quot;.%(DbProj.OutputPath)%(DbProj.ProjectName).deploymanifest&quot; /p:TargetDatabase=%(DbProj.DbName)
      /p:Build=&quot;from cmd line&quot;" />
  </Target>
</Project>
Writing these scripts may appear to be laborious but they’ll save you heaps of time in the long run.
Recommendation #13: Build and deploy to your development sandbox using scripts
UPDATE: Upon reading this blog post Mordechai Danielov wrote a follow-up in which he published a useful script that builds a series of projects using PowerShell. It’s at building your database solutions outside of Visual Studio.
Useful links
Over the years I’ve collected some links to MSDN articles that have proved invaluable:
Datadude bugs
Like any substantial piece of software, datadude is not without bugs. Many of the issues I have found concern the datadude interpreter not correctly parsing T-SQL code; here’s a list of some bugs that I have found down the years:
Some of these bugs were reported a long time ago and may well have been fixed in later service packs.
Previous datadude blog posts
I have blogged on datadude quite a bit in the past:
Summing up
This has been an inordinately large blog post so if you’ve read this far – well done. For easy reference, here are all the recommendations that I have made:
- Use Source Control and implement a Continuous Integration deployment
- Turn on Code Analysis
- When running your deployment in a test environment, run it more than once
- Maintain an automated history of your deployments
- Use Schema View in preference to Solution Explorer
- Any action in a Pre or Post-Deployment Script should use PRINT to state what has been done
- Output the value of all variables in your Pre-Deployment script
- Each database object should be defined in a dedicated file
- Time your deployments and output the timings
- All literal values should be stored in your .sqlcmdvars file
- Every developer should use the Isolated Dev Environment settings in order to author their code
- Use “Not in Build” to remove an object from your database
- Build and deploy to your development sandbox using scripts
I really hope this proves useful because it’s taken a good long while to get it published!
If you have any feedback then please let me know in the comments.
Thanks for reading!
@jamiet
* When I started writing this blog post the first sentence was “Over the past six months I have worked on two separate projects for customers that wanted to make use of Visual Studio 2010 Database projects to manage their database schema.” as opposed to what it is now: “Over the past eighteen months I have worked on four separate projects…” Yes, that’s how long it’s taken to write it!
Appendix – An example CI configuration
As stated above, an earlier draft of this blog post included full details of the CI configuration from one of the projects that I worked on. Although it may repeat some of what has already been said, I have included that text below.
==============================================================
Introduction
This project has invested heavily in a Continuous Integration (CI) approach to development. What that means, succinctly, is that whenever someone checks in some code to our source control system, an automated build process is kicked off that constructs our entire system from scratch on a dedicated server. CI is not a new concept but it is fairly rare that anyone applies the same rigour to their database objects as they do to so-called “application code” (e.g. the stuff written in .Net), and on this project we made a conscious decision to properly build our databases as part of the CI build.
Datadude employs a declarative approach to database development. In other words, you define what you want your database schema to look like and datadude works out what it needs to do to your target in order to turn it into what you have defined. What this means in practice is that you only ever write CREATE … DDL statements, rather than the IF <object-exists> THEN ALTER … ELSE CREATE … statements that you may have written in the past.
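To make that concrete, here is a minimal sketch ([dbo].[Customer] is an invented example). In a datadude project the source file only ever contains the desired end state:

CREATE TABLE [dbo].[Customer]
(
    [CustomerId] INT NOT NULL
  , [CustomerName] NVARCHAR(100) NOT NULL
  , CONSTRAINT PK_dboCustomer PRIMARY KEY ([CustomerId])
);

whereas the old imperative style required you to write the guard logic yourself:

IF OBJECT_ID(N'[dbo].[Customer]') IS NULL
BEGIN
    CREATE TABLE [dbo].[Customer]
    (
        [CustomerId] INT NOT NULL
      , [CustomerName] NVARCHAR(100) NOT NULL
      , CONSTRAINT PK_dboCustomer PRIMARY KEY ([CustomerId])
    );
END
-- ...plus ALTER TABLE statements to upgrade any pre-existing version of the table

With datadude the second form disappears; the deployment engine generates the requisite CREATEs and ALTERs by comparing the declared schema to the target.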
Here’s our CI environment setup:
- SubVersion (SVN) is being used for source control
- Hudson, an open source CI server, is being used to orchestrate our whole CI environment. It basically monitors our SVN repository and when it spots a checked-in file, kicks off the CI build
- Our CI scripts (the stuff that actually does the work) are written using msbuild
- We have 2 msbuild scripts:
- Build.proj which is responsible for:
- Compiling all our .Net website code
- Building/Compiling our datadude projects (every datadude project file is an msbuild-compliant script)
- Deploy.proj which is responsible for:
- Restoring latest database backups from our production environment into our CI environment
- Deploying our built datadude projects on top of those restored backups
- Building a folder structure to hold all the artefacts that get deployed
- Creating folder shares
- Moving SSIS packages into folder structure
- Deploying SSRS reports to our SSRS server
- Deploying our Analysis Services cube definitions to our Analysis Server
- Both Build.proj and Deploy.proj get executed by our CI build
Building datadude projects
Datadude makes it very easy to build datadude projects in a CI environment because they are already msbuild-compliant; it’s simply a call to the MSBuild task, passing in the location of the solution file as an argument. We use the Release configuration (although there is no particular reason for you to do the same – it’s purely your choice):
<Target Name="Database">
  <!--Build database projects and copy output to staging -->
  <Message Text="*****Building database solution" />
  <MSBuild Projects="..\src\SQL\DatabaseSolution.sln" Properties="Configuration=Release" />
</Target>
That’s it! The output from a datadude build includes a number of files but the most important one is a .dbschema file which is an XML representation of all the objects in your database.
Deploying the output from a built datadude project
This is a little more difficult. We *could* simply use the MSBuild task to call our deployment script as we do for the build script (see above), but the problem with that is that there are many pre-requisites (including datadude itself) and we don’t want to install Visual Studio and all the assorted paraphernalia onto our various environments. Instead we chose to make use of a command-line tool called VSDBCMD.exe to deploy datadude projects. VSDBCMD does basically the same job as right-clicking on a datadude project in Visual Studio and selecting “Deploy”, i.e. it compares the output of a build (A) to the target database (B) and works out what it needs to do to make B look like A. It then produces a .sql script that will make those requisite changes, and goes and executes it.
The difficulty comes in VSDBCMD.exe having its own list of file dependencies, which are listed in the MSDN article How to: Prepare a Database for Deployment From a Command Prompt by Using VSDBCMD.EXE – thankfully a much smaller list than if we were using the MSBuild task.
![image image]()
Some of those files, namely:
- Sqlceer35en.dll
- Sqlceme35.dll
- Sqlceqp35.dll
- Sqlcese35.dll
- System.Data.SqlServerCe.dll
get installed with SQL Server CE. We bundle the x86 & x64 installers for SQL Server CE along with all the rest of our deployment artefacts and then, as part of Deploy.proj, install them like so:
<Exec Command='msiexec /passive /l* "$(SetupLogDirectory)\SSCERuntime_x86-ENU.log" /i "$(BuildDir)\Vendor\SSCERuntime_x86-ENU.msi"' />
<Exec Condition="$(X64)" Command='msiexec /passive /l* "$(SetupLogDirectory)\SSCERuntime_x64-ENU.log" /i "$(BuildDir)\Vendor\SSCERuntime_x64-ENU.msi"' />
That takes care of some of the dependencies but we still have to take care of:
- DatabaseSchemaProviders.Extensions.xml
- Microsoft.Data.Schema.dll
- Microsoft.Data.Schema.ScriptDom.dll
- Microsoft.Data.Schema.ScriptDom.Sql.dll
- Microsoft.Data.Schema.Sql.dll
- Microsoft.SqlServer.BatchParser.dll
as well as the actual VSDBCMD.exe file itself. Quite simply, we keep those files in SVN and then bundle them along with all our deployment artefacts (I won’t show you how we do that because it’s outside the scope of this post and besides, if you’re at all proficient with msbuild then you’ll know how to do that, and if you’re not, well, why are you reading this?)
==============================================================