SQL Server MS Blogs

A faster CHECKDB – Part III

SQL Server SQL CSS - Mon, 11/10/2014 - 08:33

Bob Ward introduced Part 1 and Part 2 of ‘A faster CHECKDB’ as highlighted in the following links.

Part 1: http://blogs.msdn.com/b/psssql/archive/2011/12/20/a-faster-checkdb-part-i.aspx 
Part 2: http://blogs.msdn.com/b/psssql/archive/2012/02/23/a-faster-checkdb-part-ii.aspx 

Recently, Jonathan pointed out a memory grant issue in the following post.


I always enjoy my interactions with Jonathan, and this is yet another positive experience for us all.  After digging into this I found there is a bug; it was corrected in the SQL Server 2014 release.

The heart of the matter is a cardinality problem for the estimated number of fact rows.  The cardinality estimation drives a large portion of the memory grant size calculation for the DBCC check commands.  As Jonathan outlines in his post the overestimate is often unnecessary and reduces the overall performance of the DBCC check operation.

The checkdb/checktable component responsible for returning the number of fact rows (cardinality) for each object mistakenly returned the size of the object, in bytes, as the number of rows.

The following example shows 10,000 rows, requiring 182,000 bytes on disk.
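A setup along these lines reproduces the shape of that example (the table name and row width here are hypothetical, so your on-disk size will differ from 182,000 bytes):

```sql
-- Hypothetical repro: build a ~10,000 row table, then compare the
-- row count against the on-disk size that fed the faulty estimate.
CREATE TABLE dbo.FactDemo
(
    id  INT IDENTITY PRIMARY KEY,
    pad CHAR(10) NOT NULL DEFAULT 'x'
);

INSERT INTO dbo.FactDemo (pad)
SELECT TOP (10000) 'x'
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

EXEC sp_spaceused 'dbo.FactDemo';   -- rows vs. reserved/data size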

Prior to SQL Server 2014 the SQL Server code would return a cardinality estimate based on 182,000 instead of 10,000.  As you can easily see, this is a significant row estimate variance.

If you capture the query_post_execution_showplan (or pre) you can see the checkindex plan used by the DBCC check operation.
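One way to capture that plan is an Extended Events session on the query_post_execution_showplan event (the session and table names below are arbitrary; note this event is expensive, so use it on a test system):

```sql
CREATE EVENT SESSION [checkdb_plans] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
ADD TARGET package0.event_file (SET filename = N'checkdb_plans')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION [checkdb_plans] ON SERVER STATE = START;
GO
DBCC CHECKTABLE (N'dbo.FactDemo');   -- hypothetical table
GO
ALTER EVENT SESSION [checkdb_plans] ON SERVER STATE = STOP;
```

The captured showplan XML contains the StmtSimple and RelOp excerpts shown below.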

Shown in the table are plan excerpts from SQL Server 2012 and SQL Server 2014, using an EMPTY table.  Notice the SQL Server 2012 estimate is near 2 pages in size (8192 * 2), yet for an empty table SQL Server only produces 3 total facts related to allocation state.

SQL 2012

<StmtSimple StatementEstRows="129.507" StatementOptmLevel="FULL"

          <QueryPlan DegreeOfParallelism="0" MemoryGrant="33512" NonParallelPlanReason="MaxDOPSetToOne" CachedPlanSize="24" CompileTime="0" CompileCPU="0" CompileMemory="128">

  <RelOp NodeId="1" PhysicalOp="Sort" LogicalOp="Sort" EstimateRows="16772"


                       <RunTimeCountersPerThread Thread="0" ActualRows="3" ActualEndOfScans="1" ActualExecutions="1" />

SQL 2014

<StmtSimple StatementEstRows="10" StatementOptmLevel="FULL"

          <QueryPlan DegreeOfParallelism="0" MemoryGrant="1024" NonParallelPlanReason="MaxDOPSetToOne" CachedPlanSize="24" CompileTime="0" CompileCPU="0" CompileMemory="128">

  <RelOp NodeId="1" PhysicalOp="Sort" LogicalOp="Sort" EstimateRows="9"             


                       <RunTimeCountersPerThread Thread="0" ActualRows="3" ActualEndOfScans="1" ActualExecutions="1" />

A more dramatic difference showed up in a test I ran against a 1.3 trillion row table without the fix: the estimated rows were 900 trillion, with a memory grant size of 90GB.

Prior to SQL Server 2014 you can leverage Jonathan’s advice and limit the DBCC check using Resource Governor, or move to SQL Server 2014 to execute your DBCC check operations faster.
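The Resource Governor workaround amounts to classifying the session that runs the DBCC check into a pool with a capped memory grant. A sketch follows; the pool, group, function, and login names are all hypothetical, and the classifier function must live in master:

```sql
CREATE RESOURCE POOL DBCCPool WITH (MAX_MEMORY_PERCENT = 10);

CREATE WORKLOAD GROUP DBCCGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 25)
    USING DBCCPool;
GO
-- Classifier: route the hypothetical maintenance login into DBCCGroup.
CREATE FUNCTION dbo.DBCC_Classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'DbccMaintLogin'   -- hypothetical login
        RETURN N'DBCCGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.DBCC_Classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Any DBCC check run under that login then gets its memory grant limited by the pool rather than by the (over)estimate.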

Bob Dorr - Principal SQL Server Escalation Engineer

Categories: SQL Server MS Blogs

The Oracle and Teradata connector V3.0 for SQL Server 2014 Integration Services is now available for download

SQL Server Release Blog - Tue, 11/04/2014 - 13:37
Dear Customers, The Oracle and Teradata connector V3.0 for SQL Server 2014 Integration Service is now available for download at the Microsoft Download Center. Microsoft SSIS Connectors by Attunity Version 3.0 is a minor release. It supports SQL...(read more)
Categories: SQL Server MS Blogs

SQL Server MAX DOP Beyond 64 – Is That Possible?

SQL Server SQL CSS - Thu, 10/30/2014 - 11:14

I recently posted a blog outlining how the partitions of a table can be used in the calculation for the achievable max degree of parallelism (MAX DOP). http://blogs.msdn.com/b/psssql/archive/2014/09/04/a-partitioned-table-may-limit-the-runtime-max-dop-of-create-alter-index.aspx 

Discussing this with various peers, I uncovered a perception that SQL Server is always limited to a maximum of 64 CPUs, even if the machine has more (128, 160, …).  This is not the case; the perception is driven by the documentation's wording, and once you realize how to leverage it, maintenance operations can take advantage of more than 64 CPUs.

It is not hard to understand how the perception started or continues to propagate itself.

SQL Server Books Online states: “Setting maximum degree of parallelism to 0 allows SQL Server to use all the available processors up to 64 processors.” That is where most of us quit reading and assume the MAX DOP for SQL Server is limited to 64.

Instead, if you read a bit further: “If a value greater than the number of available processors is specified, the actual number of available processors is used.”

Simply stated, if you tell SQL Server to use more than 64 CPUs, SQL Server will attempt to do just that.
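For example, on a 128-CPU box you can request more than 64 CPUs explicitly (the table and index names here are hypothetical; the actual DOP still depends on available resources):

```sql
-- Index rebuild explicitly requesting more than 64 CPUs
ALTER INDEX IX_BigTable_Key ON dbo.BigTable
REBUILD WITH (MAXDOP = 128);

-- Or per query:
SELECT COUNT_BIG(*) FROM dbo.BigTable OPTION (MAXDOP 128);
```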

Bob Dorr - Principal SQL Server Escalation Engineer

Categories: SQL Server MS Blogs

SQL CLR assembly fails verification with “Unable to resolve token”

SQL Server SQL CSS - Wed, 10/22/2014 - 13:05

Recently we worked with a customer who has a SQL CLR assembly. This customer decided to upgrade from SQL Server 2008 R2 to SQL Server 2012, but the assembly failed to register with SQL Server, producing the following error:
Msg 6218, Level 16, State 2, Line 11
CREATE ASSEMBLY for assembly 'test3.5' failed because assembly 'test3.5' failed verification. Check if the referenced assemblies are up-to-date and trusted (for external_access or unsafe) to execute in the database. CLR Verifier error messages if any will follow this message [ : Test.Test::my_func][mdToken=0x6000001][offset 0x00000000] Unable to resolve token.

First of all, SQL Server 2008 R2 and SQL Server 2012 use different versions of the CLR: SQL Server 2008 R2 and below use CLR 2.0/3.5, but SQL Server 2012 was upgraded to use CLR 4.0 and above.
What's interesting for this customer is that if they compiled the assembly using the 4.0 compiler, they could register the assembly with CREATE ASSEMBLY.
When we compared the generated IL, there was just one difference, dealing with a local variable.
For the assembly compiled for 2.0/3.5, you see ".locals init ([0] void& pinned pData)", but for the assembly compiled for 4.0, you see ".locals init ([0] native int& pinned pData)". See the ildasm screenshots below:
IL generated by 2.0 compiler

IL generated by 4.0 compiler

The IL in question is generated for code like fixed (void* pData = &buf[1024]). Basically, the intention is to pin the memory for a native call.


There are two changes in the CLR that cause CREATE ASSEMBLY to fail. First, the CLR 4.0 compiler no longer generates the IL "void& pinned" for code like fixed (void* pData = &buf[1024]); instead, it generates ".locals init ([0] native int& pinned pData)". Additionally, the CLR 4.0 peverify code was updated and no longer recognizes that particular IL generated by the CLR 2.0 compiler. When you run CREATE ASSEMBLY, SQL Server has to run peverify to ensure the assembly passes verification. In this case, SQL 2012 uses the 4.0 peverify code to verify an assembly compiled with the 2.0 compiler, so verification fails.


There are two solutions for this.
The first option is to compile your assembly using the CLR 4.0 compiler, targeting the 4.0 framework. This is the best option because SQL 2012 uses CLR 4.0.
If you need your assembly to continue targeting the 2.0/3.5 framework, you can use the 4.0 compiler but link against the 2.0 version of mscorlib.dll. Here is an example command.
C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe /nostdlib+ /noconfig /r:c:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll -unsafe -optimize -debug:pdbonly -target:library -out:test.dll test.cs

Repro

Step 1: Save the following code into test.cs

using System;
using System.Collections;
using System.Runtime.InteropServices;
using System.Text;
using System.Reflection;
using System.Reflection.Emit;
using Microsoft.Win32;

namespace Test
{
    unsafe public class Test
    {
        unsafe public delegate void my_Delegate(ushort comp,
                                                ushort func,
                                                void* ptr,
                                                uint length);

        public static my_Delegate delegate1;

        uint dataLength = 0;

        public void my_func(String objectId,
                            uint component,
                            uint method,
                            ushort level,
                            String caption,
                            uint offset,
                            int length,
                            byte[] buf)
        {
            // Pin buf so a pointer into it can be passed to native code;
            // this is the statement that generates the "void& pinned" IL.
            fixed (void* pData = &buf[1024])
            {
                // body elided in the original post; pinning alone reproduces the IL
            }
        }
    }
}

Step 2: Compile the assembly

Compile using the following command:
C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe -unsafe -optimize -debug:pdbonly -target:library -out:test3.5.dll test.cs


Step 3: Register the assembly

If you run "create assembly asem from 'C:\repro2\test3.5.dll' with permission_set=unsafe", you will receive the error above.

Step 4: Solution and workaround

The following two commands, however, won't result in errors:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe -unsafe -optimize -debug:pdbonly -target:library -out:test4.0.dll test.cs
C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe /nostdlib+ /noconfig /r:c:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll -unsafe -optimize -debug:pdbonly -target:library -out:test.dll test.cs

Jack Li | Senior Escalation Engineer | Microsoft SQL Server Support

Categories: SQL Server MS Blogs

Cumulative Update #4 for SQL Server 2014 RTM

SQL Server Release Blog - Tue, 10/21/2014 - 13:31
Dear Customers, The 4th cumulative update release for SQL Server 2014 RTM is now available for download at the Microsoft Support site. To learn more about the release or servicing model, please visit: CU#4 KB Article: http://support.microsoft...(read more)
Categories: SQL Server MS Blogs

Report Builder of SQL Server 2008 R2 Service Pack 3 does not launch.

SQL Server Release Blog - Thu, 10/16/2014 - 17:55
Dear Customers, we have discovered a problem with Report Builder that ships with SQL Server 2008 R2 Service Pack 3. If you installed SQL Server 2008 R2, have upgraded it to Service Pack 2 and then applied Service Pack 3, then Report Builder will...(read more)
Categories: SQL Server MS Blogs

SQL Server 2008 Service Pack 4 has released.

SQL Server Release Blog - Tue, 09/30/2014 - 13:29
Dear Customers, Microsoft SQL Server Product team is pleased to announce the release of SQL Server 2008 Service Pack 4 (SP4). As part of our continued commitment to software excellence for our customers, this upgrade is free and doesn’t...(read more)
Categories: SQL Server MS Blogs

SQL Server 2008 R2 Service Pack 3 has released.

SQL Server Release Blog - Fri, 09/26/2014 - 13:41
Dear Customers, Microsoft SQL Server Product team is pleased to announce the release of SQL Server 2008 R2 Service Pack 3 (SP3). As part of our continued commitment to software excellence for our customers, this upgrade is free and doesn’t...(read more)
Categories: SQL Server MS Blogs

SQL Application Column Encryption Sample (Codeplex) available

SQL Server Security Team - Wed, 09/17/2014 - 16:36

To achieve many compliance guidelines on Azure SQL Database, the application needs to encrypt the data. The intent of this article is to provide some guidelines and an example library for encrypting data at rest in relational databases.

We just published the source code for a library, “SQL Application Column Encryption Sample”, on Codeplex (https://sqlcolumnencryption.codeplex.com/) that can help developers encrypt data (columns) at rest in an Azure SQL database. This library is intended as sample code and is published as open source, with the goal of allowing the community to improve it while we make a better solution available for Azure SQL Database.

We would appreciate your comments and feedback on this library; it will help us improve it and shape better future solutions.

Please use the Discussions section on the Codeplex project, or leave a comment here, for feedback and comments.

Categories: SQL Server MS Blogs

Cumulative Update #2 for SQL Server 2012 SP2

SQL Server Release Blog - Mon, 09/15/2014 - 20:57
Dear Customers, The 2nd cumulative update release for SQL Server 2012 SP2 is now available for download at the Microsoft Support site. Cumulative Update 2 contains all hotfixes which have been available since the initial release of SQL Server 2012...(read more)
Categories: SQL Server MS Blogs

Cumulative Update #12 for SQL Server 2012 SP1

SQL Server Release Blog - Mon, 09/15/2014 - 20:56
Dear Customers, The 12th cumulative update release for SQL Server 2012 SP1 is now available for download at the Microsoft Support site. Cumulative Update 12 contains all the SQL Server 2012 SP1 hotfixes which have been available since the initial...(read more)
Categories: SQL Server MS Blogs

A Partitioned Table May Limit the Runtime MAX DOP of Create/Alter Index

SQL Server SQL CSS - Thu, 09/04/2014 - 11:11

I was working with a 1.3 trillion row table in the Microsoft lab when I learned more about the ins and outs of this behavior.  This issue is alluded to in SQL Server Books Online but allow me to expand on the behavior a bit more. (http://msdn.microsoft.com/en-us/library/ms190787.aspx)

The lab machine is a 128GB, 64 CPU system running enterprise editions of Windows and SQL Server.  

  • When I built an index on a 25 million row, non-partitioned table, the MAX DOP selected for the index build was 64.
  • When I built an index on the 1.3 trillion row, partitioned table (27 total partitions), the MAX DOP selected for the index build was 27.

I spent some time looking at and tuning the maximum query grant and sp_configure index memory settings without seeing any change in the MAX DOP = 27 behavior.

After reading over SQL Server Books Online and stepping through the SQL Server (CalculateDOP) logic the answer was clear.

There are certain operations against a partitioned table (create and alter index are a few of these) that leverage the partitioning when performing range operations.   The partition ranges are then used to drive the maximum possible, runtime DOP level.

First, SQL Server limits the maximum possible DOP = MIN(64, CPUs).  If you only have 32 CPUs the MAX DOP possible will be 32.  If you have 160 CPUs the MAX DOP possible will be 64.

Then, for some operations, such as create index, the partitions are considered.   When performing a create/alter index with range partitioning, the achievable DOP is: MIN(partitions, MIN(64, CPUs)).

Note: The formulas presented here assume sp_configure max degree of parallelism=0 and no other resource governor or query option was established.   You may use the MAXDOP query or resource governor options to alter the runtime DOP selections.
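The selection logic described above can be sketched in a few lines (Python here as a stand-in for the internal CalculateDOP logic, under the same assumptions: max degree of parallelism = 0 unless stated, and no Resource Governor limits):

```python
def runtime_max_dop(cpus, partitions=None, max_dop_setting=0):
    """Approximate SQL Server's runtime DOP choice as described above.

    `max_dop_setting` mirrors the sp_configure / MAXDOP hint value
    (0 = no explicit limit). This is a simplification, not the real code.
    """
    cap = min(64, cpus)                # DOP is capped at MIN(64, CPUs)
    if partitions is not None:         # range operation on a partitioned table
        cap = min(partitions, cap)     # MIN(partitions, MIN(64, CPUs))
    if max_dop_setting > 0:            # an explicit setting lowers the cap further
        cap = min(max_dop_setting, cap)
    return cap

print(runtime_max_dop(cpus=64, partitions=27))  # 27: the 1.3 trillion row table
print(runtime_max_dop(cpus=160))                # 64: more than 64 CPUs, capped
print(runtime_max_dop(cpus=32))                 # 32: fewer than 64 CPUs
```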

In my test case I had 64 CPUs, so the MIN becomes the partition count: 27.   This is a very practical choice in many situations, as the partitions usually line up with the hardware, and running DOP = partitions in this way is very efficient and in line with the DBA's database design decisions.

The specific index build I was doing was very CPU intensive (spatial), and from testing I knew that if I achieved additional runtime DOP I could build the index faster (knowing I would consume more resources, at the possible expense of other activity!)

Evenly splitting the partitioned table into at least as many partitions as I have CPU resources (MIN(64, CPUs)) allowed me to apply more CPU resources to the create index operation.

In my specific scenario the 1.3 trillion row, spatial index builds in ~4.5 hours @ 27 CPUs and ~2.3 hours @ 64 CPUs. 

WARNING: Increasing the runtime DOP does not always provide improved performance.  The additional overhead may put pressure on memory and I/O, and impact the performance of other queries as the additional resources are consumed.    You should test carefully, but consider your partitioned layout in order to optimize your DOP capabilities.

Specific Partition Index Builds

You should also be aware that the partition scheme and index may limit the MAXDOP when rebuilding a specific index on a specific partition.   For some indexes you are allowed to rebuild a partitioned index for a specific partition.   This may use the partition’s range and limit the index rebuild to MAXDOP=1 where a similar index build on a non-partitioned table may use an increased DOP level.

Bob Dorr - Principal SQL Server Escalation Engineer

Categories: SQL Server MS Blogs

Making a Data-Driven Decision

Bucky Woody - Wed, 09/03/2014 - 06:58

We make decisions all the time. Most are simple - diet or regular, paper or plastic, beach or mountains. But some decisions are complicated - not because of the choices themselves, but because of all the things you choose against when you make a choice. And in fact, the more choices we have, the more dissatisfied we are with our final choice. If you've ever had to decide whether to use a particular technology, make a purchase, change jobs, or move across the country - any complicated decision - you might be paralyzed over what to do.

For most of my life I've used a deceptively simple technique to make choices where I'm not sure I'm considering all the ramifications. I can't claim that this is an original process - I probably read it somewhere or adapted it from something I've seen - but it's a lot like chess: easy to describe, and yet it handles complicated implementations well. The beauty of the process is that it's driven by data - if you're honest and complete with it. Here's how it works:

  1. Instead of starting with the choices you have, start with what is important to you in the decision, staying as agnostic as you can about the options available. Using paper (or Excel, in my case - I know, I'm a geek), list those factors from left to right across the top. Be honest; this is the most important part.
  2. Next, list the options you have down the column of the left-hand side. It really doesn't matter what order they are, in fact, it's probably best that it's random, or alphabetical, or some other non-deterministic criteria. 
  3. Now apply a "score" beneath each factor, for each option. I normally scale it from 1-3 for a simple, clear answer, and I'll use a 1-10 scale for more complicated "weighting" of the choice within the factor. 
  4. Add the scores across each row, and take the highest score. That's it. It's that simple.
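As a sketch, the steps above can be captured in a few lines (Python here as a stand-in for the spreadsheet; all factors, weights, and scores are hypothetical):

```python
# Hypothetical scoring grid: weighted factors across the top,
# options down the side, a 1-3 score per factor.
factors = {"salary": 3, "job satisfaction": 2, "commute": 1}  # weight per factor

options = {
    "startup":       {"salary": 2, "job satisfaction": 3, "commute": 1},
    "stay put":      {"salary": 2, "job satisfaction": 1, "commute": 3},
    "new role here": {"salary": 3, "job satisfaction": 2, "commute": 3},
}

def total(scores):
    """Weighted sum across one row of the grid (step 4)."""
    return sum(factors[f] * s for f, s in scores.items())

for name, scores in options.items():
    print(f"{name}: {total(scores)}")

best = max(options, key=lambda name: total(options[name]))
print("choice:", best)   # "new role here" with these numbers
```

Changing a weight and watching the winner flip is a quick way to see how much a single factor is really driving your decision.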

Now you might think, "Yeah, I've done all this mentally already, I don't need to write it down." Wrong. Try it, and unless it's a very simple choice, you might be very surprised to see what you come up with. 

Let's take a concrete example. Most folks face the choice of taking one of several jobs at some point. Let's say you have an opportunity to move to a startup, stay where you are, or move to a new role in your same company. There are lots of decision points for that, but here's a simple spreadsheet you might do:

So why would you "rank" the factors from left to right? Well, the factors aren't always equal. In this example, the young person I was working with told me that salary was the most important thing ever - until we questioned that a lot. Then I found that she really struggled with the job satisfaction part - it was second, but not by much. By using a spreadsheet, you can add a weighting factor to the final tally, which fine-tunes the result so that you're honest with yourself. And that honesty is super-important: If you just game your own spreadsheet you'll always get the answer you flipped a coin for in the first place. You have to lay the factors and their importance out first, and be brutally honest with yourself about that. Self-deception is your enemy here. 

The actual spreadsheet we used in her discussion was of course MUCH larger, and had far more detail. But that's the beauty of this system - you can use it for everything from this job example to buying a new car or house, selecting a database product, moving something to cloud computing, anything. And you'll find, as I do, that it's not necessarily the spreadsheet or the data that is important - it's spending the time to think it through.

Categories: SQL Server MS Blogs

Power View in Excel won’t render from SharePoint

SQL Server SQL CSS - Thu, 08/21/2014 - 13:20

I originally encountered this issue back in May with a customer.  We had another customer this month with the same issue.  When you try to load an Excel document with a Power View report in it from SharePoint, you may encounter the default "unable to load Power View report" image.

Before I get into the specifics of the issue we encountered with a few customers, I'd first like to say that you should validate that Silverlight is installed and working properly.  You may be able to do this by trying a standalone Power View (RDLX) report outside of Excel. If you have checked that and it is working, you may want to go through the details presented in this blog post.

Unfortunately, the image above doesn't really provide any real guidance as to what to do.  So, as with any SharePoint issue, I go to the ULS log to see what is there.  Looking through the ULS log, one entry stands out, though nothing shows an exception of any kind.

08/21/2014 10:25:26.30    w3wp.exe (0x0C2C)    0x0254    Excel Services Application    Excel Calculation Services    ah3c5    Verbose    MossHost.InteractiveReportServiceUrl: Interactive report rervice URL is missing from the farm properties, property name: InteractiveReportServiceUrl    8ab5b09c-93d3-e002-33bc-206ec188ef95

You may or may not see the following entry:

w3wp.exe (0x16B8) 0x26FC Excel Services Application Excel Calculation Services ahgij Medium Not loading interactive reports since not enabled. 15028e9c-ad22-104d-0114-866c6d407dcd

Looking at a working system, we see the following:

08/21/2014 14:52:27.43    w3wp.exe (0x1330)    0x1508    Excel Services Application    Excel Calculation Services    ah3c7    Verbose    MossHost.InteractiveReportServiceUrl: Interactive report service URL: /_layouts/15/ReportServer/AdHocReportDesigner.aspx?ViewMode=Presentation&DeferredInitialization=true&Fit=true&PreviewBar=false&BackgroundColor=White&AllowSectionNavigation=false&EnableKeepAlive=true&AllowEditViewMode=false&AllowFullScreenViewMode=false    d2c4b09c-e336-e002-33bc-2fa5fdb18796

08/21/2014 14:52:27.43    w3wp.exe (0x1330)    0x1508    Excel Services Application    Excel Calculation Services    ah3da    Verbose    MossHost.InteractiveReportApiUrl: Interactive report API URL: /_layouts/15/ReportServer/InteractiveReportAPI.js    d2c4b09c-e336-e002-33bc-2fa5fdb18796

There are two Properties that will get added to the SharePoint Farm Configuration for Power View.

InteractiveReportServiceUrl – This is the URL pointer Excel Services uses to redirect to the Power View page and render the Power View report.  If this isn't present, Excel doesn't know what to do with the Power View report within the Excel workbook.

InteractiveReportApiUrl – This is a helper JavaScript file used by Power View.

These properties are part of the SharePoint Farm Configuration.  We can view these via SharePoint by looking at the properties of the SPFarm from PowerShell.

You can also see these via SQL by querying the SharePoint Config database:

SELECT [Properties]
FROM [SharePoint_Config].[dbo].[Objects]
WHERE Properties LIKE '%InteractiveReportServiceUrl%'

You won’t get a row back if the values are missing from the farm, which is a quick way to tell.  When the values are present, the Properties output in my farm looks like the following.

<object type="Microsoft.SharePoint.Administration.SPFarm, Microsoft.SharePoint, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c"><sFld type="Int32" name="m_PersistedFileChunkSize">4194304</sFld><sFld type="Int32" name="m_XsltTransformTimeOut">1</sFld><sFld type="Int32" name="m_cPasswordChangeGuardTime">45</sFld><sFld type="Int32" name="m_cPasswordChangeMaxTries">5</sFld><fld name="m_PasswordChangeEmailAddress" type="null" /><sFld type="Int32" name="m_cDaysBeforePasswordExpirationToSendEmail">10</sFld><sFld type="Boolean" name="m_bUseMinWidthForHtmlPicker">False</sFld><fld name="m_EncodedFarmId" type="null" /><fld type="System.Collections.Generic.HashSet`1[[System.Int32, mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089]], System.Core, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="m_serverDebugFlags" /><fld name="m_AuthenticationRealm" type="null" /><sFld type="Boolean" name="m_userLicensingEnabled">False</sFld><fld type="System.Collections.Generic.Dictionary`2[[System.Guid, mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Version, mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089]], mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="m_Versions"><sFld type="Guid">3a9fdacb-c088-420d-9670-043c1417f6f2</sFld><sFld type="Version"></sFld><sFld type="Guid">66d56e1c-50b2-482e-af1a-7dd7ba0b72cc</sFld><sFld type="Version"></sFld><sFld type="Guid">00000000-0000-0000-0000-000000000000</sFld><sFld type="Version">15.0.4617.1000</sFld><sFld type="Guid">77e7f90e-1989-46c2-ad65-361a53dcb2e0</sFld><sFld type="Version"></sFld><sFld type="Guid">54d00007-0f81-42b1-8f06-fb9b981a617d</sFld><sFld type="Version"></sFld><sFld type="Guid">6ac833ea-3f8d-46b6-8b30-92ac4553a742</sFld><sFld type="Version"></sFld><sFld type="Guid">6371575d-8eae-41dd-903f-b9fbc2da7aad</sFld><sFld type="Version"></sFld><sFld type="Guid">c8a0b463-1852-4f3b-8fd3-216c4d19585a</sFld><sFld 
type="Version"></sFld><sFld type="Guid">42c6e513-ad52-4d28-93d6-d07d1afd7b14</sFld><sFld type="Version"></sFld></fld><fld name="m_UpgradeContext" type="null" /><fld type="System.Collections.Hashtable, mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="m_UpgradedPersistedFields" /><fld type="System.Collections.Hashtable, mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="m_Properties"><sFld type="String">WopiLicensing</sFld><sFld type="String">HostBIEnabled</sFld><sFld type="String">InteractiveReportApiUrl</sFld><sFld type="String">/_layouts/15/ReportServer/InteractiveReportAPI.js</sFld><sFld type="String">DisableIntranetCallsFromApps</sFld><sFld type="Boolean">True</sFld><sFld type="String">DisableIntranetCalls</sFld><sFld type="Boolean">True</sFld><sFld type="String">InteractiveReportServiceUrl</sFld><sFld type="String">/_layouts/15/ReportServer/AdHocReportDesigner.aspx?ViewMode=Presentation&amp;DeferredInitialization=true&amp;Fit=true&amp;PreviewBar=false&amp;BackgroundColor=White&amp;AllowSectionNavigation=false&amp;EnableKeepAlive=true&amp;AllowEditViewMode=false&amp;AllowFullScreenViewMode=false</sFld><sFld type="String">GuestSharingEnabled</sFld><sFld type="Boolean">True</sFld></fld><sFld type="String" name="m_LastUpdatedUser">BATTLESTAR\asaxton</sFld><sFld type="String" name="m_LastUpdatedProcess">psconfigui (3468)</sFld><sFld type="String" name="m_LastUpdatedMachine">ADMADAMA</sFld><sFld type="DateTime" name="m_LastUpdatedTime">2014-06-20T14:53:36</sFld></object>

So, what do we do if they are missing?  I was able to correct it by adding them back via PowerShell. Here are the PowerShell Commands for SharePoint 2013.

# Get the SPFarm object and list out the existing properties to verify
# the ones we are looking for are missing.
$farm = Get-SPFarm
$farm.Properties

# Set the Property Name and Value to add - InteractiveReportServiceUrl
$propName = "InteractiveReportServiceUrl"
$propValue = "/_layouts/15/ReportServer/AdHocReportDesigner.aspx?ViewMode=Presentation&DeferredInitialization=true&Fit=true&PreviewBar=false&BackgroundColor=White&AllowSectionNavigation=false&EnableKeepAlive=true&AllowEditViewMode=false&AllowFullScreenViewMode=false"

# Add the InteractiveReportServiceUrl Property to the Server Farm
$farm.Properties.Add($propName, $propValue);

# Set the Property Name and Value to add - InteractiveReportApiUrl
$propName = "InteractiveReportApiUrl"
$propValue = "/_layouts/15/ReportServer/InteractiveReportAPI.js"

# Add the InteractiveReportApiUrl Property to the Server Farm
$farm.Properties.Add($propName, $propValue);

# Propagate the changes back to the database.
# Update($true) silently reinitializes the object with an existing object's data
# if the object already exists in the configuration store; Update($false) throws
# an exception if the object already exists.
$farm.Update()

If the properties are there and are blank, or have the wrong value, you can do the following to update the existing properties.

$farm.Properties[$propName] = $propValue; 

instead of

$farm.Properties.Add($propName, $propValue);

Once that is done, you’ll want to do an IISRESET on your servers for the config information to be loaded.  Then try your Power View report again; it should come up.

How does the configuration get into this state?  I don’t know for sure, but my thought is that it has to do with the order in which the service apps were installed.  If the RS service was configured before the Excel Calculation Service, we may end up in this state.  Regardless, the commands above should get it back into working order.


Adam W. Saxton | Microsoft SQL Server Escalation Services

Categories: SQL Server MS Blogs

Log reader agent may fail to start on SQL Server 2012 Service Pack 1 Update version 11.0.3460.0

SQL Server Release Blog - Wed, 08/20/2014 - 11:18
Yesterday we have confirmed a SQL transactional replication problem in SQL Server 2012 version 11.0.3460.0 where running a second instance of the log reader agent for a different publication will fail with error “Another logreader agent for the...(read more)
Categories: SQL Server MS Blogs

I can't hear you over the sound of how small your fonts are

Bucky Woody - Tue, 08/19/2014 - 09:01

I've had it.

I sat through *another* presentation where the screen fonts and icons were so small I couldn't tell what was going on. No, it wasn't a Microsoft presentation, it was on a Linux box. But presenters from colleges to conferences routinely get poor marks because THEY DON'T MAKE THE SCREEN ELEMENTS BIG ENOUGH TO SEE.

So send them this. With my compliments. I'll make it as easy as possible.

Windows Step 1:

Open whatever tool you're using and set the fonts to a large size. For instance, in Visual Studio, SQL Server Management Studio, etc., open Tools | Options | Fonts and set them to something you can see when you walk 7 feet from your laptop:

Step 2:

Hold down the Windows Key and the Plus Key (Windows +). This brings up the screen magnifier. Move your pointer, the screen follows. To exit, hold down the Windows Key and the Escape Key (Windows Esc):

or, Step 2:

Install Zoomit.exe from here (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx) and hold down Control and 1 (CTRL+1). The screen freezes and expands wherever your pointer is. Press the Escape key (ESC) to exit:

Here's a video (http://windows.microsoft.com/en-us/windows/make-screen-items-bigger-magnifier#1TC=windows-7). Learn the tool. It takes like 2 minutes. Seriously, practice with this for two minutes and everyone will stop beaming hatred and fury at you.

For the Mac:

  1. Click the Apple logo in the upper left corner.
  2. Click "System Preferences."
  3. Click the "Universal Access" icon.
  4. Click the "On" button under the "Zoom" category.
  5. Click the "Options" button.
  6. Select the "Maximum Zoom" setting.
  7. Press the "Command", "Option" and "=" keys simultaneously to zoom the screen around the mouse cursor.
  8. Press the "Command", "Option" and "-" keys simultaneously to return to normal view.
  9. Press the "Command" and "Option" keys while moving the scroll wheel on the mouse to zoom in and out, as an alternative method.

You can also find pay-for magnifiers in the iOS store.


For Ubuntu, use the CompizConfig Settings Manager: look under Accessibility for Magnifier, click on it to configure, and use the hotkeys there.
If you don't have it installed, you can install it from the Ubuntu Software Center, or press Ctrl+Alt+T on your keyboard to open a Terminal and run this command: sudo apt-get install compizconfig-settings-manager

Categories: SQL Server MS Blogs

Cumulative Update #3 for SQL Server 2014 RTM

SQL Server Release Blog - Mon, 08/18/2014 - 15:46
Dear Customers, The 3rd cumulative update release for SQL Server 2014 RTM is now available for download at the Microsoft Support site. To learn more about the release or servicing model, please visit: CU#3 KB Article: http://support.microsoft...(read more)
Categories: SQL Server MS Blogs

Public Data Sources – finding them and using them

Bucky Woody - Wed, 08/13/2014 - 08:00

A quick post here on some valuable data sources you can use in HDInsight, Microsoft Excel, SQL Server, APS, and other products to enrich your data. Sometimes it's helpful just to peruse various sources to see what you can put together to gain more insight and answers. If you know of other sources, feel free to reply and I'll update this post from time to time. Note that this isn't just a "link dump": the links here refer to pages of links and even tools you can use to find the sets of data you're looking for. I use these quite a bit in my work.

- List from Data Science Central
- 100+ Interesting Data Sets for Statistics
- Microsoft Azure Datasets
- Google's Datasets Search Engine
- ImportIO - Make your own datasets from webpages
- Database Format of Wikipedia articles
- Journalistic Datasets
- United States government data sets
- United States government statistics sets
- United States weather data
- World Bank Data
- USA Financial Analysis from New York University


Categories: SQL Server MS Blogs

VSS backup of AlwaysOn Secondaries

SQL Server SQL CSS - Tue, 08/12/2014 - 11:56

Hi Everyone,

Today I’m going to highlight one of the changes brought by SQL Server 2012 SP2, which is the way we handle VSS Backup requests on AlwaysOn Secondary Databases.

Until now, any request for a FULL database backup (VSS_BT_FULL) through VSS against a DB that is an AlwaysOn secondary failed by design: our VSS writer, SQLWriter, would return FAILED_AT_PREPARE_SNAPSHOT (0x800423f4 - VSS_E_WRITERERROR_NONRETRYABLE).

A copy-only VSS backup (VSS_BT_COPY) would work.

The rationale is the following: a FULL backup actually updates the target DB (mainly a reset of the differential bitmap), which is not possible when the DB is read-only. Furthermore, because of the failover possibilities introduced by AlwaysOn, the favored option was to use native SQL Server backups, which can rely on SQL Server's variable backup location (http://msdn.microsoft.com/en-us/library/hh245119.aspx) if needed, and be 'AlwaysOn-aware'.

So that could be the end of the story: against an AlwaysOn Secondary DB, either use Copy_only VSS backups or use native backups.

But of course that wouldn’t make for a very interesting blog post…

Enter HyperV…

Consider the following scenario:

Large Windows HyperV servers, hosting many HyperV virtual servers, some of them SQL Servers in an AlwaysOn architecture.

In short: a Private Cloud.

In this context, IT usually takes care of the infrastructure at the host level and lets users deal with whatever happens within the VMs. One of the key tasks of IT is to manage backups (e.g. for disaster recovery at the datacenter level, or to provide restores of single VMs).

And the mainstream way to do that is to take VSS backups of the Host Disk Volumes. Microsoft System Center DPM will do exactly that.

But VSS backups are all about taking backups that are consistent: in a 'standalone' SQL Server context you may already know all the logic SQLWriter implements to make sure that I/O against the databases being backed up is frozen during the snapshot operation. So, back to our HyperV context: collecting a point-in-time image of a VHD without bothering with what happens within the VM would defeat that very purpose, right?

So what happens is the following: the VSS backup is propagated to guest VMs through HyperV integration services. That propagation hardcodes the backup type to VSS_BT_FULL, and therefore all guest VMs initiate a VSS backup/snapshot in their own context. The purpose is to make sure that all applications are quiesced within all running VMs at the time we take the snapshot at the host level. This enables us to generate a consistent backup of running VMs.

But let’s now put this in the context where one of the VMs is running an AlwaysOn secondary DB: you guessed it, it’s not going to work:

The important thing to know here is that the error returned by SQLWriter in VM3 will actually bubble up all the way to the initial VSS backup command at Host level, and will make it fail as a whole.

So we ended up in a situation where the IT infrastructure folks would see their host backups fail from time to time for an unknown reason, depending on whether one or more of the VMs present on the host drive being backed up had a secondary AlwaysOn DB! And because an AlwaysOn AG can span different HyperV hosts, the presence of a secondary DB on a given host is not something static over time.

Because of the complexity of the whole call chain, and because infrastructure IT operators may not have any visibility (or understanding) of the VM content, you can imagine what kind of troubleshooting challenges this would offer… And even when the situation is understood, well, what do we do? If host level backup must be somehow manually synchronized to the applicative state of Guest VMs, the Private Cloud scenario becomes quite more complicated all of a sudden.

This is the reason why SQL Server 2012 SP2 ships a code change for SQLWriter that will implement the following:

As you can see, SQLWriter now detects this specific situation and changes the backup type to VSS_BT_COPY (copy-only). This only happens for VSS_BT_FULL backups against AlwaysOn secondary DBs; VSS_BT_FULL backups against primary DBs happen without change.
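That decision can be summarized in a small sketch (my own illustration with hypothetical names; the real logic lives inside SQLWriter), mapping the requested VSS backup type and the database's role to the backup type that actually runs:

```python
# Hypothetical string constants standing in for the VSS backup-type
# enumeration values named in this post.
VSS_BT_FULL = "VSS_BT_FULL"
VSS_BT_COPY = "VSS_BT_COPY"


def effective_backup_type(requested: str, is_alwayson_secondary: bool) -> str:
    # Post-SP2 behavior as described above: a FULL VSS backup requested
    # against an AlwaysOn secondary DB is downgraded to a copy-only
    # backup; every other combination passes through unchanged.
    if requested == VSS_BT_FULL and is_alwayson_secondary:
        return VSS_BT_COPY
    return requested
```

So a host-level backup that fans out VSS_BT_FULL to every guest VM no longer fails on the VM hosting the secondary; that one VM simply takes a copy-only backup instead.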

In this case, the VSS backup will now successfully complete in VM3 and the host-level backup success state will no longer be tied to guest VM’s AlwaysOn activity. Private Cloud scenario unlocked!

Important note: the fact that VSS backups of AlwaysOn secondaries now work does not make them the preferred way to back up SQL Server AlwaysOn architectures. The main purpose of the SP2 change is to avoid a situation where a single SQL Server in a VM fails a complete host-level backup operation that encompasses dozens of VMs.

The resulting backup of the VM hosting SQL Server should be considered a disaster-recovery backup, where AlwaysOn will be removed at once at restore time, not a way to rebuild a subset of the nodes of an AlwaysOn availability group. And for regular databases within the VM, that backup is as good as any regular VSS one.

Finally, SQL Server 2012 SP2 only contains a partial fix for this issue. Servers running case-sensitive sort orders will require SQL Server 2012 SP2 Cumulative Update 2.


Guillaume Fourrat
SQL Server Escalation Engineer
Microsoft France

Categories: SQL Server MS Blogs

Having performance issues with table variables? SQL Server 2012 SP2 can help!

SQL Server SQL CSS - Mon, 08/11/2014 - 13:45

In a previous blog, I talked about how table variables can impact the performance of your query. The reason is that the statements referencing a table variable are compiled when the table variable has no data in it. Therefore, the cardinality estimate for the table variable is always 1. If you always insert a small number of rows into the table variable, it may not matter. But if you insert a large number of rows into the table variable, the query plan generated (based on the 1-row assumption) may not be that efficient.
As part of a supportability improvement, SQL Server 2012 Service Pack 2 introduces a change that helps in situations where a large number of rows is inserted into a table variable which then joins with other tables. A new trace flag, 2453, activates this improvement. When SQL Server detects enough rows inserted into the table variable, it recompiles the subsequent statements referencing the table variable. SQL Server detects the row count of the table variable at the time the statement is recompiled and can produce a more efficient plan. "Enough rows" is determined by the recompile threshold for temp tables described in KB http://support.microsoft.com/kb/243586.
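As a rough sketch of that threshold (my own illustration of the temp-table formula from KB 243586, not code from the product):

```python
def temp_table_recompile_threshold(n: int) -> float:
    # Recompile threshold (RT) for temporary tables per KB 243586:
    # subsequent statements are recompiled once the number of row
    # modifications since compile time exceeds RT, where n is the
    # table's cardinality at compile time.
    if n < 6:
        return 6
    if n <= 500:
        return 500
    return 500 + 0.20 * n
```

A table variable's statements are compiled when it is empty (n = 0), so RT is 6, and inserting more than a handful of rows is already enough for trace flag 2453 to trigger a recompile.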
This behavior is documented in http://support.microsoft.com/kb/2952444 .
I want to emphasize that trace flag 2453 must be used in order to activate this feature. If you are on SP2 and experience slow performance on a query using a table variable, you can give this trace flag a try to see if it helps.
Let's use the same demo which I used in the previous blog to demonstrate the behavior.
First, set up the table:

dbcc traceoff(2453,-1)
dbcc freeproccache
set statistics profile off
use tempdb
if OBJECT_ID ('t2') is not null
    drop table t2
create table t2 (c2 int)
create index ix_t2 on t2(c2)
--insert 100,000 rows into the permanent table
set nocount on
begin tran
declare @i int
set @i = 0
while @i < 100000
begin
    insert into t2 values (@i)
    set @i = @i + 1
end
commit tran
--update stats
update statistics t2


I'm going to use the same query below to show you the estimate difference.
set nocount on
declare @t1 table (c1 int)
begin tran
declare @i int
set @i = 0
while @i < 100000
begin
    insert into @t1 values (@i)
    set @i = @i + 1
end
commit tran
set statistics profile on
select * from @t1 inner join t2 on c1=c2

set statistics profile off

Without the trace flag (2453), the query uses a nested loop join and the table variable's cardinality is incorrectly estimated as just one row.

After I enabled the trace flag and flushed the plan cache with the following commands, the plan changed to a hash match and the table variable was correctly estimated at 100,000 rows.
dbcc freeproccache
dbcc traceon(2453,-1)

Jack Li
Senior Escalation Engineer | Microsoft SQL Server Support

Categories: SQL Server MS Blogs