SQLSaturday #411 – Cambridge 2015

For a while now I’ve been going to the Cambridgeshire SQL Server User Group, which I have found really useful, especially as we are moving towards a more self-service model across IT, which requires IT to be a bit cleverer. Beyond the informative speaker talks you get a really good networking opportunity. The group is run by Mark Broadbent (@retracement) and has recently merged with SharePoint Saturday Cambridge. Although Cambridge is about the same journey time on the train as London, I find it a nicer place to visit, not least because it doesn’t involve me going on the Underground.



Somehow Mark has managed to put together a full-day SQLSaturday event with two precon days – I swear this guy has gotten hold of Mary Poppins’ bag. Don’t let the precon name fool you, these are (well, were) two full days of very good training. After all, it’s not every day you get a full day’s training with a Microsoft Most Valuable Professional (MVP) or the Data Warehouse Architect at UNOPS, the implementation arm of the UN!

One of the downsides of attending was having to get up at 5 am :(

Note to self: Next time, stay in a hotel – yes Mark, I should have read your deals on hotels (and looked at the back of the guide book for the map)

Day 1


My first day was with Penny Coventry – Automating business processes with SharePoint, Office 365 and Azure. This was a really good session where I got to see some of the power of SharePoint and cool new Azure features from the power user’s perspective rather than the overly technical developer perspective. I’ve often found users will shrug off ideas and solutions when explained by a “techy” as being too “technical” – and to be fair to them, we do get a bit excited and go over the config line and into the development world all too often.

The out-of-the-box, config-only (no development) features in SharePoint, like Workflows, are amazing. From a “techy” point of view, I’m amazed at how easy it is to do something that would be quite complex to create as a bespoke app. It was also good to see how Microsoft are expanding into the cloud, both with Office 365 and Azure.

I’ve previously only really looked into Azure from the “techy” side, like Troy Hunt’s “Have I been pwned?” site, but I hadn’t had a chance to look at things like Logic Apps. Logic Apps look like another workflow-type service where you can drag and drop connections (kind of like SSIS) and do things like pulling tweets from Twitter into a SharePoint list, or an MS-SQL database, or whatever. Again, this is the sort of thing power users can do without IT or a developer. For me this is great news: creating custom connectors in SSIS is a pain, so if power users can get data into an MS-SQL database without me developing anything, great. It makes my life a lot simpler. Combine this with some of the SharePoint features, like Access Web Apps, and I can give users access to the static data so they can keep it up-to-date without the manual process, and without fear of them wrecking it thanks to some (hopefully) clever business logic.

One last comment about the session: Penny completed the entire thing, including the live demos, using her Office 365/Azure subscription. I was amazed at this as normally I find Office 365/Azure professionals always have a “but” when it comes to being fully cloud – they always spin up a virtual machine or such. Despite being fully cloud, she knew all the little “this isn’t on the on-prem version – yet”, “you can’t do that in the cloud” and “if you don’t see that option on-prem you need to…” details, as well as the latest lowdown on SharePoint 2016.

The other session going on was A Day with Columnstore Indexes with Niko Neugebauer. To be honest, I wimped out of going to this one – I tend to leave the intimate details of indexes to our Senior DBA and just pick up the summary sheet at the end, apply the base config and only update it when he says so. Lazy I know, but I can’t learn everything at once!

Day 2

The second day was with Rasmus Reinholdt Nielsen – Building tomorrow’s ETL architecture with BIML and MDS today – and was the main reason for me attending, as I spend most of my time in SSIS. Despite packing two laptops, I still found myself popping out to the shops to pick up a USB mouse as I forgot to pack one.

I’d previously heard about BIML and, despite it being recommended by a lot of folks, I just hadn’t had a proper look; the main reason was resentment towards Microsoft for not providing such an intermediate language – not that Scott Currie hasn’t done an amazing job, it’s just that it should be merged into the core product. Rasmus started with an introduction to C# which covered the basics, which he quickly skipped through as we already had a working knowledge of C#, and got onto BIML. Before too long I had created my first basic BIML script. He then went onto some of the more complex features of SSIS and how to achieve the same in a BIML script. We then moved on to using some BIML features and those C# skills to auto-generate an SSIS package, then using Master Data Services (MDS) as the metadata library – in short, automated staging.

Meanwhile, in the other session, they were not only failing over servers…

Day 3

The actual SQLSaturday event was hosted at the very nice Møller Centre; unfortunately this isn’t as close to the train station as the Cambridge City Hotel, where the precons were hosted – still, it does have free parking.

The day was amazing and the people were all friendly – I managed to go up to a few and ask a few questions, which they kindly answered. They even had people handing out free chocolates :)

Being a free event, you’d expect the sponsors to be spamming you with marketing material – it just didn’t happen, they were all playing it cool and waiting for you to come to them. I did pop over to the Dell Spotlight stand, who gave me a demo of some of the cool new features in Spotlight – something we already own but hadn’t had a chance to upgrade – needless to say I’ve been poking our DBAs to upgrade since I got back to the office. I also stopped by the Profisee stand; after Rasmus’ training I was starting to look more into Master Data Services (MDS) and the word on the street was that Profisee were the MDS experts. Even after registering for their free online training, I haven’t had a single sales call.

The sessions were good – there was a nice range, partly because they had joined up with the SharePoint group. For me personally this was very good as BI sits in the middle. The biggest problem was which one to pick! I managed to pick up a few tips from Chris Webb for my data modelling and MDX, which was one of the big ones on my to-do list. I enjoyed Terry McCann’s session on reporting on Reporting Services; it’s one of those problems we all have – we have these reports, but are they actually being used? He explained his journey on how he’s tried to answer that and some of the problems he came across. It’s good that he’s shared his experience; it’s not a particularly sexy problem to solve, in fact it’s a very unsexy, unrewarding, boring chore that all report writers must do, and at least now I have a good head start.
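If you’re wondering where to even start with that question yourself, the SSRS execution log is the usual first port of call. A minimal sketch, assuming the default ReportServer catalog database and its built-in ExecutionLog3 view – this isn’t necessarily Terry’s approach, just one way in:

-- Which reports have actually been run in the last 90 days?
SELECT
    ItemPath,                          -- the report that was executed
    COUNT(*)        AS Executions,     -- how many times in the window
    MAX(TimeStart)  AS LastRun         -- most recent execution
FROM ReportServer.dbo.ExecutionLog3
WHERE TimeStart >= DATEADD(DAY, -90, GETDATE())
GROUP BY ItemPath
ORDER BY Executions DESC;

Anything deployed but missing from that list is a candidate for retirement.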

Overall I think it was a very worthwhile event. I learnt a lot, my work got some very cheap training, I met a lot of very clever people and had fun doing it all. Can’t wait till the next one :)

You can see Rodney Kidd’s photos on Flickr.

Change text direction – Visual Studio 2013 / SQL Server 2014

I’ve spent a while today trying to change the text direction (orientation) of a header on an SSRS report – the option is rather oddly placed in Visual Studio: in the text box’s Properties pane, under Localization > WritingMode, change the value from Default to either Rotate270 or Vertical. I was hoping it would be within the Font options. Guess that would be too easy 😉

Missing SSIS toolbox in Visual Studio 2013 / SQL Server 2014

Stupidly, today I accidentally closed the SSIS toolbox. After a bit of unsuccessful searching in the Visual Studio menus, I gave up, did the IT norm and Googled it – hoping I wouldn’t have to reinstall.

Luckily I found James Serra had already blogged about it; the only difference is Microsoft have changed the icon – it’s still in the same place at least.


So when is a Visual Studio Toolbox not a Toolbox? When it’s an SSIS Toolbox.


One of the problems we have is documentation and how to keep it up-to-date. We tend to deliver a new batch of reports as part of a CR (Change Request) then make changes as part of BAU (Business As Usual) unless it’s a particularly complex or time-consuming change. The problem is that BAU changes often happen rapidly, within a short time frame, and time isn’t always allocated to updating documentation.

Over time a small, simple change can evolve into a major change and the documentation can significantly drift out-of-sync. Documentation is an important part of the communication between developer, analyst and end-user; without it misunderstandings and confusion can occur, and errors and mistakes happen.

Whilst investigating how others document their work I rediscovered the much-unloved extended property MS_Description. As you might have guessed, you use it to hold, yes, you guessed it, a description of the tables, views, stored procedures and other SQL objects. Speaking of naming – it’s also important to name things correctly. A table named dbo.Table1 helps no-one. I tend to name any temporary tables MS_TMP – this then quickly identifies me as the owner and the fact it can most likely be destroyed. Any dimensions are prefixed with Dim and facts are, yes, you’ve guessed it again – Fact.
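For anyone who hasn’t come across it, adding an MS_Description is just a call to sp_addextendedproperty. A minimal sketch against a made-up dbo.DimCustomer table:

-- Attach a description to a (hypothetical) dimension table
EXEC sys.sp_addextendedproperty
    @name       = N'MS_Description',
    @value      = N'One row per customer; loaded nightly from the CRM extract.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'DimCustomer';

-- sys.sp_updateextendedproperty changes an existing description

SSMS also exposes the same property as the Description field in the table designer, so non-scripting colleagues can fill it in too.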

I have a T-SQL script that pulls this data into a table on the staging database as part of an SSIS package; this is configured as a SQL Agent job, but it is only executed post-deployment – we use TeamCity to build the project and then OctopusDeploy for deploying to the various environments.
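The script itself is essentially a query over sys.extended_properties – roughly along these lines (the real version lands the results in a staging table rather than just selecting them):

-- Object-level MS_Description values for tables, views, procs, etc.
SELECT
    s.name                             AS SchemaName,
    o.name                             AS ObjectName,
    o.type_desc                        AS ObjectType,
    CAST(ep.value AS NVARCHAR(4000))   AS Description
FROM sys.extended_properties AS ep
JOIN sys.objects  AS o ON o.object_id = ep.major_id
JOIN sys.schemas  AS s ON s.schema_id = o.schema_id
WHERE ep.class = 1              -- object or column properties
  AND ep.minor_id = 0           -- object level only (columns have minor_id > 0)
  AND ep.name = 'MS_Description';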

As we move towards using Analysis Services, I needed a way to document the cubes, luckily Alex Whittles from Purple Frog has already come up with a solution using the Dynamic Management Views (DMVs) and blogged about it – http://www.purplefrogsystems.com/blog/2010/09/olap-cube-documentation-in-ssrs-part-1/.

I have, however, modified it. Following the standard ETL process, I extract the data from the DMVs into a staging table and, like the SQL objects, this is only executed post-deployment by a SQL job triggered by an OctopusDeploy task. I didn’t like the idea of dynamically querying the data every time, and as we use continuous integration/deployment I don’t need to worry about the data being out of date.
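If you’ve not queried the SSAS DMVs from the relational engine before, the gist is an OPENQUERY call over a linked server pointing at the cube – roughly like this (SSAS_OLAP is a made-up linked server name for the Analysis Services instance):

-- Cube-level metadata pulled from the SSAS schema rowsets
SELECT *
FROM OPENQUERY(SSAS_OLAP,
    'SELECT [CATALOG_NAME], [CUBE_NAME], [DESCRIPTION], [LAST_SCHEMA_UPDATE]
     FROM $SYSTEM.MDSCHEMA_CUBES');

-- Similar queries against $SYSTEM.MDSCHEMA_DIMENSIONS and $SYSTEM.MDSCHEMA_MEASURES
-- give you the dimension and measure documentation.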

All I have left to really look at is documenting SSIS. Speaking of SSIS, I found Jamie Thomson’s post on naming conventions very useful.

Failed RavenDB backup

I had a rather odd error with OctopusDeploy where the RavenDB backups were failing. It turns out the problem was because I had assigned one IPv4 address to OctopusDeploy and another IPv4 address to TeamCity. Oddly, the RavenDB backup routine referenced the localhost address – it appeared to be hardcoded, with no option other than changing the port. This was especially weird as the OctopusDeploy service was able to access it fine. The resolution – at least long enough to get a backup so we could migrate to v3, which uses SQL Server – was to set up a port proxy so that accessing TCP port 10931 on the local loopback would resolve to the same port on the actual public IPv4 address.

To do this, run Command Prompt as Administrator (right-click, Run as administrator) then type:

netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=10931 connectaddress={{your ip}} connectport=10931

Remember to replace {{your ip}} with the actual IP address

Swollen Business Objects database

Our new HR system uses SAP Business Objects for transactional reporting. As this was set up by our HR supplier we haven’t really got involved with it beyond writing a few custom reports – turns out we’ve written quite a few, so much so that the Business Objects CMS database has swollen to over 30 GB. This, as you might have guessed, is causing a few performance problems. After a discussion with the supplier, we found out it was storing all the reports, even the ad-hoc one-off reports, and we should have set a parameter to limit it.

After limiting it, the database size still didn’t reduce. A quick Google found someone saying that it’s because it can’t handle deleting over 1 million rows of old reports, which makes sense: running a SQL command that deletes over 30 GB of data would cause insane log file growth as well as massive performance problems.

The steps to resolve this, on MS-SQL, were as follows (a rough T-SQL sketch of steps 4–6 is shown after the list):

  1. Stop the Business Objects services on the app tier
  2. Ensure you have a backup of the database
  3. Set the recovery model to simple if it isn’t already
  4. Copy out all the required rows
  5. Truncate the table
  6. Re-insert the required rows back into the table
  7. Check everything looks ok, deal with the fact the database is now tiny
  8. Create another backup
  9. Start the app tier back up
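Steps 4 to 6 in T-SQL look roughly like this – the table and column names are placeholders rather than the real Business Objects CMS schema, so check with your supplier before trying anything similar:

-- Step 4: copy out the rows to keep (placeholder names throughout)
SELECT *
INTO   dbo.CmsObjects_Keep
FROM   dbo.CmsObjects
WHERE  IsAdhocInstance = 0;             -- placeholder filter for the reports we care about

-- Step 5: truncate instead of delete – minimally logged, so no insane log growth
TRUNCATE TABLE dbo.CmsObjects;

-- Step 6: re-insert the keepers
-- (if the table has an identity column you'll also need SET IDENTITY_INSERT ON)
INSERT INTO dbo.CmsObjects
SELECT * FROM dbo.CmsObjects_Keep;

DROP TABLE dbo.CmsObjects_Keep;         -- tidy up once you're happy

The truncate-and-reload approach is the whole trick here: keeping the small set of rows to one side and truncating avoids the huge logged delete that caused the problem in the first place.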